
The Tech Industry Worries CISA Will Allow Other Companies to Damage Their Infrastructure

The Computer and Communications Industry Association — a trade organization that represents Internet, social media, and even some telecom companies — came out yesterday against the Cybersecurity Information Sharing Act, an information sharing bill that not only wouldn’t be very useful in protecting against hacking, but might have really dangerous unintended consequences, such as gutting regulatory authority over network security negligence (though the Chamber of Commerce, this bill’s biggest backer, may not consider it an unintended consequence).

Most coverage of this decision emphasizes CCIA’s concern about the bill’s danger to privacy.

CISA’s prescribed mechanism for sharing of cyber threat information does not sufficiently protect users’ privacy or appropriately limit the permissible uses of information shared with the government.

But I’m far more interested in CCIA’s stated concern that the bill, in authorizing defensive measures, would permit actions that would damage the Internet’s infrastructure (to which a number of these companies contribute).

In addition, the bill authorizes entities to employ network defense measures that might cause collateral harm to the systems of innocent third parties.

[snip]

But such a system … must not enable activities that might actively destabilize the infrastructure the bill aims to protect.

At least some of the companies that make up our Internet ecosystem think that other companies, in aggressively pursuing perceived intruders into their systems, will do real damage to the Internet as a whole.

It seems like a worthy concern. And yet the Senate runs headlong towards passing this bill anyway.

How Does Duty to Warn Extend to Cyberattacks?

Steve Aftergood has posted a new directive from James Clapper mandating that Intelligence Community members warn individuals (be they corporate or natural persons) of a threat of death or serious bodily harm.

This Directive establishes in policy a consistent, coordinated approach for how the Intelligence Community (IC) will provide warning regarding threats to specific individuals or groups of intentional killing, serious bodily injury, and kidnapping.

The fine print on it is quite interesting. For example, if you’re a drug dealer, someone involved in violent crime, or at risk solely because you’re involved in an insurgency, the IC is not obliged to give you notice. Remember, the FBI did not alert members of Occupy Wall Street that someone was plotting to assassinate them. Did they (then) not do so because they considered Occupy an “insurgency”? Would they consider them one going forward?

But I’m most interested in what this should mean for hacking.

Here’s how the directive defines “serious bodily injury.”

Serious Bodily Injury means an injury which creates a substantial risk of death or which causes serious, permanent disfigurement or impairment.

As I have noted, NSA has secretly defined “serious bodily harm” to include threat to property — that is, threats to property constitute threats of bodily harm.

If so, a serious hack would represent a threat of bodily harm (and under NSA’s minimization procedures they could share this data). While much of the rest of the Directive talks about how to accomplish this bureaucratically (and the sources and methods excuses for not giving notice), this should suggest that if a company like Sony is at risk of a major hack, NSA would have to tell it (the Directive states that the obligation applies to US persons and non-US persons alike, though Sony is in this context a US person).

So shouldn’t this amount to a mandate for cybersharing, all without the legal immunity offered corporations under CISA?


This Surveillance Bill Brought to You by the US Chamber of Commerce — To Stave Off Something More Effective

The Chamber of Commerce has a blog post pitching CISA today.

It’s mostly full of lies (see OTI’s @Robyn_Greene‘s timeline for an explication of those lies).

But given Sheldon Whitehouse’s admission the other day that the Chamber exercises pre-clearance vetoes over this bill, I’d like to consider what the Chamber gets out of CISA. It claims the bill “would help businesses achieve timely and actionable situational awareness to improve theirs and the nation’s detection, mitigation, and response capabilities against cyber threats.” At least according to the Chamber, this is about keeping businesses safe. Perhaps it pitches the bill in those terms because of its audience, other businesses. But given the gross asymmetry of the bill — where actual humans can be policed based on data turned over, but corporate people cannot be — I’m not so sure.

And therein lies the key.

Particularly given increasing calls for effective cybersecurity legislation — something with actual teeth — at least for cars and critical infrastructure, this bill should be considered a corporatist effort to stave off more effective measures that would have a greater impact on cybersecurity.

That is borne out by the Chamber’s recent “5 reasons to support CISA” post, which emphasizes two things that have nothing to do with efficacy: the bill’s voluntary nature, and its immunity, secrecy, and anti-trust provisions.

That is, the Chamber, which increasingly seems to be the biggest cheerleader for this bill, isn’t aiming at anything more than “situational awareness” to combat the scourge of hacking. But it wants that — the entire point of this bill — within a context that provides corporations with maximal flexibility while giving them protection they have to do nothing to earn.

CISA is about immunizing corporations to spy on their customers. That’s neither necessary nor the most effective means to combat hacking. Which ought to raise serious questions about the Chamber’s commitment to keeping America safe.


The US Chamber of Commerce Is Pre-Clearing What It Is Willing to Do for Our National Security on CISA

Sheldon Whitehouse just attempted (after 1:44) to rebut an epic rant from John McCain (at 1:14) in which the Arizona Senator suggested anyone who wanted to amend the flawed Cybersecurity Information Sharing Act wasn’t serious about national security.

Whitehouse defended his two amendments first by pointing out that McCain likes and respects the national security credentials of both his co-sponsors (Lindsey Graham and Roy Blunt).

Then Whitehouse said, “I believe both of the bills [sic] have now been cleared by the US Chamber of Commerce, so they don’t have a business community objection.”

Perhaps John McCain would be better served turning himself purple (really! watch his rant!) attacking the very notion that the Chamber of Commerce gets pre-veto power over a bill that (according to John McCain) is utterly vital for national security.

Even better, maybe John McCain could turn himself purple suggesting that the Chamber needs to step up to the plate and accept real responsibility for making this country’s networks safer, rather than just using our cybersecurity problems as an opportunity to demand immunity for yet more business conduct.

If this thing is vital for national security — this particular bill is not, but McCain turned himself awfully purple — then the Chamber should just suck it up and meet the requirements to protect the country decided on by its elected representatives.

Yet instead, the Chamber apparently gets to pre-clear a bill designed to spy on the Chamber’s customers.

Department of Energy: CyberSprinting Backwards

Earlier this week, I noted that of the seven agencies that would automatically get cybersecurity data shared under the Cybersecurity Information Sharing Act, several had cyberpreparedness similar to, or even worse than, the Office of Personnel Management, from which China stole entire databases of information on our cleared personnel.

To make that argument, I used data from the FISMA report released in February. Since then — or rather, since the revelation of the OPM hack — the Administration has been pushing a “30 day sprint” to try to close the gaping holes in our security.

Yesterday, the government’s Chief Information Officer, Tony Scott, released a blog post and the actual results, bragging about significant improvement.

And there have been significant results (though note, the 30 day sprint turned into a 60 day middle distance run), particularly from OPM, Interior (which hosted OPM’s databases), and — two of those CISA data sharing agencies — DHS and Treasury.

[Chart: governmentwide strong authentication over time, per the CIO’s sprint results, showing a sharp recent spike]

Whoa! Check out that spike! Congratulations to those who worked hard to make this improvement.

But when you look at the underlying data, things aren’t so rosy.

[Table: agency-by-agency strong authentication percentages underlying the sprint results]

We are apparently supposed to be thrilled that DOD now requires strong authentication for 58% of its privileged users (people like Edward Snowden), up 20 points from the earlier 38%. Far more of DOD’s unprivileged users (people like Chelsea Manning?) — 83% — are required to use strong authentication, but that number declined from a previous 88%.
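A quick aside on what’s being measured: “strong authentication” in these FISMA numbers means two-factor login; in the federal context that typically means a PIV smartcard plus PIN rather than a password alone. As a rough illustration of the underlying idea (not the government’s actual implementation), here is a minimal RFC 6238 one-time-password check in Python; the shared secret below is a made-up example.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def strong_auth(password_ok: bool, submitted_code: str, secret_b32: str) -> bool:
    """Require something you know (password) AND something you have (token)."""
    return password_ok and hmac.compare_digest(submitted_code, totp(secret_b32))

# Example with a made-up shared secret:
# strong_auth(True, input("code: "), "JBSWY3DPEHPK3PXP")
```

The point of requiring the second factor for privileged users in particular is that a stolen password alone no longer suffices.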

More remarkable, however, is that during a 30-day-turned-60-day sprint to plug major holes, the Department of Energy also backslid, with strong authentication overall going from 34% to 11%. Admittedly, more of DoE’s privileged users must now use strong authentication, but still only 13% in total.

DOJ (at least FBI, and probably through it other parts of DOJ, will receive this CISA information), too, backslid overall, though with a huge improvement for privileged users. And Commerce (another CISA recipient agency) also had a small regression for privileged users.

There may be explanations for this, such as that someone is being moved from a less effective two-factor program to a better one.

But it does trouble me that an agency as central to our national security as Department of Energy is regressing even during a period of concerted focus.

Under CISA, Data Would Automatically Get Shared with Agencies with Worse Cyberpreparedness than OPM

[Table: FISMA report card grades by agency]

In the wake of the OPM hack, Congress is preparing to do something!!! Unfortunately, that “something” will be to pass the Cybersecurity Information Sharing Act, which not only wouldn’t have helped prevent the OPM hack, but comes with its own problems.

To understand why it is such a bad idea to pass CISA just to appear to be doing something in response to OPM, compare the table above, from this year’s Federal Information Security Management Act (FISMA) report, with the list of agencies that will automatically get the data turned over to the Federal government if CISA passes.

(A) The Department of Commerce.

(B) The Department of Defense.

(C) The Department of Energy.

(D) The Department of Homeland Security.

(E) The Department of Justice.

(F) The Department of the Treasury.

(G) The Office of the Director of National Intelligence.

So not only will information automatically go to DOJ, DHS, and DOD — all of which fulfill the information security measures reviewed by the Office of Management and Budget — but it would also go to the Department of Energy, which scores just a few points better than OPM; the Department of Commerce, which was improving but lost some IT people and so couldn’t be graded last year; and the Department of the Treasury, which scores worse than OPM.

Which is just one of the reasons why CISA is a stupid idea.

Some folks have put together this really cool tool that will help you fax the Senate (a technology they might understand) so you can explain how dumb passing CISA would be. Try it!


Sheldon Whitehouse’s Hot and Cold Corporate Cybersecurity Liability

Ben Wittes has a summary of last Wednesday’s “Going Dark” hearings. He engages in a really amusing straw man — comparing a hypothetically perfectly secure Internet with ungoverned Somalia.

Consider the conceptual question first. Would it be a good idea to have a world-wide communications infrastructure that is, as Bruce Schneier has aptly put it, secure from all attackers? That is, if we could snap our fingers and make all device-to-device communications perfectly secure against interception from the Chinese, from hackers, from the FSB but also from the FBI even wielding lawful process, would that be desireable? Or, in the alternative, do we want to create an internet as secure as possible from everyone except government investigators exercising their legal authorities with the understanding that other countries may do the same?

Conceptually speaking, I am with Comey on this question—and the matter does not seem to me an especially close call. The belief in principle in creating a giant world-wide network on which surveillance is technically impossible is really an argument for the creation of the world’s largest ungoverned space. I understand why techno-anarchists find this idea so appealing. I can’t imagine for moment, however, why anyone else would.

Consider the comparable argument in physical space: the creation of a city in which authorities are entirely dependent on citizen reporting of bad conduct but have no direct visibility onto what happens on the streets and no ability to conduct search warrants (even with court orders) or to patrol parks or street corners. Would you want to live in that city? The idea that ungoverned spaces really suck is not controversial when you’re talking about Yemen or Somalia. I see nothing more attractive about the creation of a worldwide architecture in which it is technically impossible to intercept and read ISIS communications with followers or to follow child predators into chatrooms where they go after kids.

This gets the issue precisely backwards: it attributes all possible security and governance to policing alone, and none to prevention, and as a result envisions chaos in a world that would, in fact, have less chaos, or at least different kinds of it. Wittes simply dismisses the benefits of a perfectly secure Internet (which is what all the pro-backdoor witnesses at the hearings did too, ignoring, for example, the effect that encrypting phones would have on a really terrible iPhone theft problem). But Wittes’ straw man isn’t central to his argument; it’s just a tell about his biases.

Wittes, like Comey, also suggests the technologists are wrong when they say back doors will be bad.

There is some reason, in my view, to suspect that the picture may not be quite as stark as the computer scientists make it seem. After all, the big tech companies increase the complexity of their software products all the time, and they generally regard the increased attack surface of the software they create as a result as a mitigatable problem. Similarly, there are lots of high-value intelligence targets that we have to secure and would have big security implications if we could not do so successfully. And when it really counts, that task is not hopeless. Google and Apple and Facebook are not without tools in the cybersecurity department.

Wittes appears unaware that the US has failed miserably at securing its high value intelligence targets, so it’s not a great counterexample.

But I’m primarily interested in Wittes’ fondness for an idea floated by Sheldon Whitehouse: that the government force providers to better weigh the risks of security by ensuring they bear liability if the cops can’t access communications.

Another, perhaps softer, possibility is to rely on the possibility of civil liability to incentivize companies to focus on these issues. At the Senate Judiciary Committee hearing this past week, the always interesting Senator Sheldon Whitehouse posed a question to Deputy Attorney General Sally Yates about which I’ve been thinking as well: “A girl goes missing. A neighbor reports that they saw her being taken into a van out in front of the house. The police are called. They come to the home. The parents are frantic. The girl’s phone is still at home.” The phone, however, is encrypted:

WHITEHOUSE: It strikes me that one of the balances that we have in these circumstances where a company may wish to privatize value by saying, “Gosh, we’re secure now. We got a really good product. You’re going to love it.” That’s to their benefit. But for the family of the girl that disappeared in the van, that’s a pretty big cost. And when we see corporations privatizing value and socializing cost so that other people have to bear the cost, one of the ways that we get back to that and try to put some balance into it, is through the civil courts, through a liability system.

If you’re a polluter and you’re dumping poisonous waste into the water rather than treating it properly, somebody downstream can bring an action and can get damages for the harm that they sustain, can get an order telling you to knock it off. I’d be interested in whether or not the Department of Justice has done any analysis as to what role the civil-liability system might be playing now to support these companies in drawing the correct balance, or if they’ve immunized themselves from the cost entirely and are enjoying the benefits. I think in terms of our determination as to what, if anything, we should do, knowing where the Department of Justice believes the civil liability system leaves us might be a helpful piece of information. So I don’t know if you’ve undertaken that, but if you have, I’d appreciate it if you’d share that with us, and if you’d consider doing it, I think that might be helpful to us.

YATES: We would be glad to look at that. It’s not something that we have done any kind of detailed analysis. We’ve been working hard on trying to figure out what the solution on the front end might be so that we’re not in a situation where there could potentially be corporate liability or the inability to be able to access the device.

WHITEHOUSE: But in terms of just looking at this situation, does it not appear that it looks like a situation where value is being privatized and costs are being socialized onto the rest of us?

YATES: That’s certainly one way to look at it. And perhaps the companies have done greater analysis on that than we have. But it’s certainly something we can look at.

I’m not sure what that lawsuit looks like under current law. I, like the Justice Department, have not done the analysis, and I would be very interested in hearing from anyone who has. Whitehouse, however, seems to me to be onto something here. Might a victim of an ISIS attack domestically committed by someone who communicated and plotted using communications architecture specifically designed to be immune, and specifically marketed as immune, from law enforcement surveillance have a claim against the provider who offered that service even after the director of the FBI began specifically warning that ISIS was using such infrastructure to plan attacks? To the extent such companies have no liability in such circumstances, is that the distribution of risk that we as a society want? And might the possibility of civil liability, either under current law or under some hypothetical change to current law, incentivize the development of secure systems that are nonetheless subject to surveillance under limited circumstances?

Why don’t we make the corporations liable, these two security hawks ask!!!

This, at a time when the cybersecurity solution on the table (CISA and other cybersecurity bills) gives corporations overly broad immunity from liability.

Think about that.

While Wittes hasn’t said whether he supports the immunity bills on the table, Paul Rosenzweig and other Lawfare writers are loudly in favor of expansive immunity. And Sheldon Whitehouse, whose idea this is, has been talking about building in immunity for corporations in cybersecurity plans since 2010.

I get that there is a need for limited protection for corporations that help the Federal government spy (especially if they’re required to help), which is what immunity is always about. I also get that every time we award it, it keeps getting bigger, and years later we discover that immunity covers fairly audacious spying far beyond the ostensible intent of the bill. Though CISA doesn’t even hide that this data will be used for purposes far beyond cybersecurity.

Far, far more importantly, however, one of the problems with the cyber bills on the table is that, by awarding this immunity, they create a risk calculation that lets corporations be sloppy. Sure, there will still be reputational damage every time a corporation exposes its customers’ data to hackers. But we’ve seen in the financial sector — where at least bank regulators require certain levels of hygiene and reporting — that bank immunity tied to those reporting requirements appears to have made it impossible to prosecute egregious bank crime.

The banks have learned (and they will be key participants in CISA) that they can obtain impunity by sharing promiscuously (or even not so promiscuously) with the government.

And unlike those bank reporting laws, CISA doesn’t require hygiene. It doesn’t require that corporations deploy basic defenses before obtaining their immunity for information sharing.

If liability is such a great idea, then why aren’t these men pushing the use of liability as a tool to improve our cyberdefenses, rather than (on Whitehouse’s part, at least) calling for the opposite?

Indeed, if this is about appropriately balancing risk, there is no way you can use liability to get corporations to weigh the value of back doors for law enforcement without at the same time ensuring all corporations also bear full liability for any insecurity in their systems; otherwise, corporations won’t be weighing the two sides.

Using liability as a tool might be a clever idea. But using it only for law enforcement back doors does nothing to identify the appropriate balance.

FBI’s 26-Day Old OPM FLASH Notice

Shane Harris, who has been closely tracking the bureaucratic implications of the OPM hack, has an update describing a “FLASH” notice FBI just sent out to the private sector.

Or rather, FBI just re-sent the FLASH notice they sent on June 5, 26 days earlier, because they realized some recipients (including government contractors working on classified projects) did not have their filters set to accept such notices from the FBI.

The FBI is warning U.S. companies to be on the lookout for a malicious computer program that has been linked to the hack of the Office of Personnel Management. Security experts say the malware is known to be used by hackers in China, including those believed to be behind the OPM breach.

The FBI warning, which was sent to companies Wednesday, includes so-called hash values for the malware, called Sakula, that can be used to search a company’s systems to see if they’ve been affected.

The warning, known as an FBI Liaison Alert System, or FLASH, contains technical details of the malware and describes how it works. While the message doesn’t mention the OPM hack, the Sakula malware is used by Chinese hacker groups, according to security experts. And the FBI message is identical to one the bureau sent companies on June 5, a day after the Obama administration said the OPM had been hacked, exposing millions of government employees’ personal information. Among the recipients of both alerts are government contractors working on sensitive and classified projects.

[snip]

In an email obtained by The Daily Beast, the FBI said it was sending the alert again because of concerns that not all companies had received it the first time. Apparently, some of their email filters weren’t configured to let the FBI message through.

Consider the implications of this.

It is unsurprising that the initial FLASH got stuck in companies’ email filters if the hashes included with the notice were treated as suspicious code by the companies’ anti-malware screens. The message likely looked like malware because it is. (Of course, this story may now have alerted those trying to hack recipients of FBI’s FLASH notices that the FBI wasn’t previously whitelisted by recipients but probably just got whitelisted; that’s a matter for another day.)
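To make the hash-matching concrete, here is a minimal sketch of how a recipient might use the hash values in a notice like this to search its own systems. The indicator set below is a placeholder (the real Sakula hashes are in the non-public FBI notice), and file-hash scanning is only the simplest form of the searching the FLASH describes.

```python
import hashlib
from pathlib import Path

# Placeholder indicators: the real SHA-256 values would come from the notice.
KNOWN_BAD_SHA256 = {
    "0" * 64,  # hypothetical entry, not a real Sakula hash
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(root: str) -> list:
    """Return files under root whose hash matches a known-bad indicator."""
    hits = []
    for p in Path(root).rglob("*"):
        if p.is_file():
            try:
                if sha256_of(p) in KNOWN_BAD_SHA256:
                    hits.append(p)
            except OSError:
                pass  # unreadable file; skip it
    return hits

if __name__ == "__main__":
    for hit in scan("."):
        print(f"possible match on known-bad hash: {hit}")
```

The design rationale for sharing hashes rather than samples: a hash identifies a malicious file without itself being executable, though as the filter episode shows, naive screens may flag the indicators anyway.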

The delayed FLASH receipt says a lot about the current state of data-sharing, just as the Senate prepares to debate the Cybersecurity Information Sharing Act, the bill that (Senate boosters claim) companies need before they’re willing to share data with the government.

First, it suggests that FBI either did not send out such a FLASH in response to what it learned from the Anthem hack (one presumably would have gone out by February at the latest and, had OPM acted on it, might have identified OPM’s own hack two months before it actually was identified), or that, if it did, that notice also got stuck in companies’ — and OPM’s — malware filters.

But it also seems to suggest that the private sector — including sensitive government contractors — hasn’t been receiving other FBI FLASHes (presuming the filter settings excluded any notice that included something that looked like malware). Companies either never noticed they weren’t getting them or never bothered to set their filters to receive them.

That may reflect a larger issue, though. As Jennifer Granick has repeatedly noted, key researchers and corporations have not, up to now anyway, seen much value in sharing with the government.

I’ve been told by many entities, corporate and academic, that they don’t share with the government because the government doesn’t share back. Silicon Valley engineers have wondered aloud what value DHS has to offer in their efforts to secure their employer’s services. It’s not like DHS is setting a great security example for anyone to follow. OPM’s Inspector General warned the government about security problems that, left unaddressed, led to the OPM breach.

Perhaps recipients didn’t have their filters set to accept notices from FBI because none of them have ever been useful?

Another factor behind reluctance to share with the government is an unwillingness to get personnel security clearances, though that should not be a factor here.

The implication appears to be, though, that the government was unable — because of recipient behavior and predispositions — to share information on the most important hack of recent years.

We’re about to have a debate about immunizing corporations further, as if that’s the problem. But this delayed FLASH strongly suggests it is not.

CISA Hack of the Day: White House Can Already Share Intelligence with the State Department

In about 10 days, Congress will take up cyber information sharing bills. And unlike past attempts, these bills are likely to pass.

That, in spite of the fact that no one has yet explained how they’ll make a significant difference in preventing hacks.

So I’m going to try to examine roughly one hack a day that immunized, swift information sharing between the government and the private sector wouldn’t have prevented.

Yesterday, for example, CNN reported that Russia had hacked “sensitive parts” (read, unclassified) of the White House email system.

While the White House has said the breach only affected an unclassified system, that description belies the seriousness of the intrusion. The hackers had access to sensitive information such as real-time non-public details of the president’s schedule. While such information is not classified, it is still highly sensitive and prized by foreign intelligence agencies, U.S. officials say.

The White House in October said it noticed suspicious activity in the unclassified network that serves the executive office of the president. The system has been shut down periodically to allow for security upgrades.

The FBI, Secret Service and U.S. intelligence agencies are all involved in investigating the breach, which they consider among the most sophisticated attacks ever launched against U.S. government systems. The intrusion was routed through computers around the world, as hackers often do to hide their tracks, but investigators found tell-tale codes and other markers that they believe point to hackers working for the Russian government.

The hackers — whether they really are Russian government operatives or not — managed the hack by first hacking the State Department and then phishing an account at the White House using a State email.

To get to the White House, the hackers first broke into the State Department, investigators believe.

The State Department computer system has been bedeviled by signs that despite efforts to lock them out, the Russian hackers have been able to reenter the system. One official says the Russian hackers have “owned” the State Department system for months and it is not clear the hackers have been fully eradicated from the system.

As in many hacks, investigators believe the White House intrusion began with a phishing email that was launched using a State Department email account that the hackers had taken over, according to the U.S. officials.

In other words, the hackers breached the White House by first hacking State — a hack that was well known to the government — and then duping some schmoe at the White House into compromising their email.

Now, unless things have gone really haywire in the government, nothing prevents the State Department from sharing information with the White House. Indeed, NSA and DHS should have an active role in both hacks. Nor would anything prevent NSA from sharing information on the proxy computers used by the hackers. And if NSA can’t find those, we have other problems.

Finally, there’s little a private company could tell the White House to get its schmoes to be a bit more cautious about the email they get (though I suspect in both State and the White House, it is hard to balance responsiveness with adequate skepticism to odd emails).

In other words, CISA would do nothing to prevent this hack of the White House. But nevertheless, Congress is going to rush through this bill without fixing other more basic vulnerabilities.

On CISA the Surveillance Bill

After the Senate Intelligence Committee passed CISA, its sole opponent, Ron Wyden, said, “If information-sharing legislation does not include adequate privacy protections then that’s not a cybersecurity bill – it’s a surveillance bill by another name.” Robert Graham, an expert on intrusion-prevention, argues, “This is a bad police-state thing. It will do little to prevent attacks, but do a lot to increase mass surveillance.”

Clearly, some people who have reason to know think this bill doesn’t do what it says, but instead does a lot of what it isn’t admitting.

I want to look at several aspects of the bill from that perspective (this post primarily deals with the SSCI version but the HPSCI version is very similar).

Can our ISPs take countermeasures against us?

First, whom it affects. Ron Wyden has been warning about the common commercial service OLC memo and its impact on the cybersecurity debate for years, suggesting that that still-secret memo conflicts with the public’s understanding of “the law” (though he doesn’t say what law that is). While it’s unclear what the OLC memo says, Wyden seems to suggest that Americans have been subject to cybersecurity surveillance that they didn’t know about (perhaps because OLC had interpreted consent where it didn’t exist).

So I think it’s important that at the center of a series of definitions of “entities” in CISA is a definition that would include us, as private entities.

IN GENERAL.—Except as otherwise provided in this paragraph, the term ‘‘private entity’’ means any person or private group, organization, proprietorship, partnership, trust, cooperative, corporation, or other commercial or nonprofit entity, including an officer, employee, or agent thereof.

That’s important because the law permits both monitoring…

(1) IN GENERAL.—Notwithstanding any other provision of law, a private entity may, for cybersecurity purposes, monitor—

(A) an information system of such private entity;

(B) an information system of another entity, upon the authorization and written consent of such other entity;

And defensive measures (what the bill has renamed the largely otherwise indistinguishable “countermeasures”) against a private entity that has provided consent to another private entity.

(B) EXCLUSION.—The term ‘‘defensive measure’’ does not include a measure that destroys, renders unusable, or substantially harms an information system or data on an information system not belonging to—

(i) the private entity operating the measure; or

(ii) another entity or Federal entity that is authorized to provide consent and has provided consent to that private entity for operation of such measure.

At a minimum, I think this should raise questions about whether Terms of Service of cable companies and Internet Service Providers and banks and telecoms amount to consent for this kind of monitoring and — in the name of cybersecurity — countermeasures.

Researching more crimes in the name of cybersecurity than in the name of terror

This is important, because CISA actually permits information collected in the name of “cybersecurity” to be used for more purposes than the NSA is permitted to refer information for under foreign intelligence collection (though once FBI is permitted to back door search everything, that distinction admittedly disappears). In addition to its use for cybersecurity — which is itself defined broadly enough to include leak and Intellectual Property policing — this “cybersecurity” information can be used for a variety of other crimes.

(iv) the purpose of responding to, or otherwise preventing or mitigating, an imminent threat of death, serious bodily harm, or serious economic harm, including a terrorist act or a use of a weapon of mass destruction;

(v) the purpose of responding to, or otherwise preventing or mitigating, a serious threat to a minor, including sexual exploitation and threats to physical safety; or

(vi) the purpose of preventing, investigating, disrupting, or prosecuting an offense arising out of a threat described in clause (iv) or any of the offenses listed in— (I) section 3559(c)(2)(F) of title 18, United States Code (relating to serious violent felonies); (II) sections 1028 through 1030 of such title (relating to fraud and identity theft); (III) chapter 37 of such title (relating to espionage and censorship); and (IV) chapter 90 of such title (relating to protection of trade secrets).

As a number of people have noted, for CISA data to be used for the purposes suggested, both private entities — upon sharing — and the government — on intake — actually will be leaving a fair amount of data in place.

Why does domestic spying have less stringent minimization than foreign spying?

Which brings me to the purported “privacy and civil liberties guidelines” the bill has. The bill mandates that the Attorney General come up with guidelines to protect privacy that will,

(A) limit the impact on privacy and civil liberties of activities by the Federal Government under this Act;

(B) limit the receipt, retention, use, and dissemination of cyber threat indicators containing personal information of or identifying specific persons, including by establishing—

(i) a process for the timely destruction of such information that is known not to be directly related to uses authorized under this Act; and

(ii) specific limitations on the length of any period in which a cyber threat indicator may be retained;

(C) include requirements to safeguard cyber threat indicators containing personal information of or identifying specific persons from unauthorized access or acquisition, including appropriate sanctions for activities by officers, employees, or agents of the Federal Government in contravention of such guidelines;

(D) include procedures for notifying entities and Federal entities if information received pursuant to this section is known or determined by a Federal entity receiving such information not to constitute a cyber threat indicator;

(E) protect the confidentiality of cyberthreat indicators containing personal information of or identifying specific persons to the greatest extent practicable and require recipients to be informed that such indicators may only be used for purposes authorized under this Act; and

(F) include steps that may be needed so that dissemination of cyber threat indicators is consistent with the protection of classified and other sensitive national security information.

It’s worth comparing what would happen here to what happens under both Section 215 (which FBI claims to use for cybersecurity) and FAA (which ODNI has admitted to using for cybersecurity — and indeed, which uses upstream searches to find the very same kind of signatures).

With the former, the FISC had imposed minimization procedures and required the government to report on compliance with them. The FISC, not the AG, has set retention periods. And at least for the NSA’s use of Section 215 (which should be the comparison here, since NSA will be one of the agencies getting the data), data must be presumptively minimized. Also, unlike the phone dragnet, where data must be certified for a counterterrorism use before it is shared, here data is shared across multiple agencies in real time.

FAA’s minimization procedures also get reviewed by the FISC (the reporting back is probably not as stringent, though the procedures are checked yearly). And there’s a whole slew of reporting.

While there is some reporting here, it is bifurcated so that PCLOB, which has no subpoena power, does the actual privacy assessment, whereas the Inspectors General, which are assured they can get the information they need (even if DOJ’s Inspector General keeps getting denied data it should get), report solely on numbers and types of usage, without a privacy or even a compliance assessment.

One of my favorite parts of CISA (this is true of both bills) is that while the bills mandate an auditing capability, they don’t actually mandate audits (the word appears exactly once in each bill).

In other words, Congress is about to adopt a more permissive collection of data for domestic spying than it does for foreign spying. Or, in the context of Section 215, it may be adopting more permissive treatment of data voluntarily turned over to the government than of data turned over in response to an order.

And all that’s before you consider data flowing in the reverse direction. While the bills do require penalties if a government employee or agent (which hopefully includes the contractors this bill will spawn) abuses this data sharing, they impose none for private entities. (The House version also has a 2 year statute of limitations for this provision, which all but guarantees it will never be used, given that abuse would rarely be discovered within that period, particularly given the way FOIA and Trade Secret exemptions make this data sharing even less accessible than spying data.)

Perhaps my very favorite part of this bill appears only in the House version (which of course came after the Senate version elicited pretty universal complaints from civil libertarians that it was a surveillance bill). It has several versions of this clause.

(a) PROHIBITION OF SURVEILLANCE.—Nothing in this Act or the amendments made by this Act shall be construed to authorize the Department of Defense or the National Security Agency or any other element of the intelligence community to target a person for surveillance.

The word “surveillance,” divorced from the modifier “electronic” is pretty meaningless in this context. And it’s not defined here.

So basically HPSCI, having seen how many people correctly identified this as a surveillance bill, has just taken a completely undefined term, “surveillance,” and prohibited it under this bill. So you can collect all the content you want under this bill with no warrant, and you can supersede ECPA all you want too — just don’t call it surveillance.