I’ve been racking my brain to understand why the Intel Community has been pushing CISA so aggressively.
I get why the Chamber of Commerce is pushing it: because it sets up a regime under which businesses will get broad regulatory immunity in exchange for voluntarily sharing their customers’ data, even if they’re utterly negligent from a security standpoint, while also making it less likely that information their customers could use to sue them would become public. For the companies, it’s about sharply curtailing the risk of (charitably) having imperfect network security or (more realistically, in some cases) being outright negligent. CISA will minimize some of the business costs of operating in an insecure environment.
But why — given that it makes it more likely businesses will wallow in negligence — is the IC so determined to have it, especially when generalized sharing of cyber threat signatures has proven ineffective in preventing attacks, and when there are far more urgent things the IC should be doing to protect themselves and the country?
Richard Burr and Dianne Feinstein’s move the other day, which in the guise of ensuring DHS gets to continue to scrub data on intake instead gives the rest of the IC veto power over that scrub (almost certainly making the bill substantially a means of eliminating the privacy role DHS currently plays), leads me to believe the IC plans to use CISA as it might have used (or might be using) a cyber certification under upstream 702.
Ever since NYT and ProPublica caught up to my much earlier reporting on the use of upstream 702 for cyber, people have assumed that CISA would work in tandem with upstream 702 authority to magnify the way that collection works. Jonathan Mayer described how this might work.
This understanding of the NSA’s domestic cybersecurity authority leads to, in my view, a more persuasive set of privacy objections. Information sharing legislation would create a concerning surveillance dividend for the agency.
Because this flow of information is indirect, it prevents businesses from acting as privacy gatekeepers. Even if firms carefully screen personal information out of their threat reports, the NSA can nevertheless intercept that information on the Internet backbone.
Note that Mayer’s model assumes the Googles and Verizons of the world make an effort to strip private information, after which NSA would use the signature turned over to the government under CISA to go get the private information just stripped out. But neither Mayer’s model nor the ProPublica/NYT story considered how the 2011 John Bates ruling on upstream collection might hinder that model, particularly as it pertains to domestically collected data.
As I laid out back in June, NSA’s optimistic predictions that it would soon get an upstream 702 certificate for cyber came in the wake of John Bates’ October 3, 2011 ruling that the NSA had illegally collected US person data. Of crucial importance, Bates judged that data obtained in response to a particular selector was intentionally, not incidentally, collected (even though the IC and its overseers like to falsely claim otherwise), even data that just happened to be collected in the same transaction. Just as important, pointing back to his July 2010 opinion on the Internet dragnet, Bates said that disclosing such information, even just to the court or internally, would be a violation of 50 USC 1809(a), which he used as leverage to make the government identify and protect any US person data collected using upstream collection before otherwise using the data. I believe this decision established a precedent for upstream 702 that would make it very difficult for FISC to permit the use of cyber signatures that happened to be collected domestically (which would count as intentional domestic collection) without rigorous minimization procedures.
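The transaction problem at the heart of Bates’ ruling can be sketched with a toy example (all names and data here are hypothetical; real upstream collection operates on packetized Internet traffic, such as a webmail page bundling many discrete messages). Because collection happens at the granularity of a whole transaction, one tasked selector sweeps in everything bundled with the match, including wholly domestic communications:

```python
# Toy illustration of a multiple communication transaction (MCT).
# Hypothetical data; this is a sketch of the concept, not how
# collection systems are actually implemented.

transaction = {
    "messages": [
        {"to": "target@foreign.example", "text": "tasked content"},
        {"to": "neighbor@us.example", "text": "wholly domestic email"},
        {"to": "friend@us.example", "text": "another domestic email"},
    ]
}

def upstream_collect(txn, selector):
    """Collection happens at transaction granularity: if ANY bundled
    message matches the tasked selector, the whole transaction is
    acquired -- domestic messages included."""
    if any(selector in m["to"] for m in txn["messages"]):
        return txn  # everything in the bundle comes in
    return None

acquired = upstream_collect(transaction, "target@foreign.example")

# One tasked selector acquired all three bundled messages,
# two of which are wholly domestic.
print(len(acquired["messages"]))  # 3
```

Under Bates’ reasoning, the two domestic messages here were intentionally, not incidentally, collected, which is what triggered the 50 USC 1809(a) problem.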
The government, at a time when it badly wanted a cyber certificate, considered appealing his decision, but ultimately did not. Instead, they destroyed the data they had illegally collected and — in what was almost certainly a related decision — destroyed all the PATRIOT-authorized Internet dragnet data at the same time, December 2011. Bates did permit the government to keep collecting upstream data, but only under more restrictive minimization procedures.
Neither ProPublica/NYT nor Mayer claimed NSA had obtained an upstream cyber certificate (though many other people have assumed it did). We actually don’t know, and the evidence is mixed.
Even as the government was scrambling to implement new upstream minimization procedures to satisfy Bates’ order, NSA had another upstream violation. That might reflect the government informing Bates, for the first time, that it had been using upstream collection to target cyber signatures (there’s no sign they informed him during the 2011 discussion, though the 2011 minimization procedures may reflect that they already had), or it might represent some other kind of illegal upstream collection. When the government got Congress to reauthorize FAA that year, it did not inform Congress that it was using, or intended to use, upstream collection to collect cyber signatures. Significantly, even as Congress began debating FAA, it considered but rejected the first of the predecessor bills to CISA.
My guess is that the FISC did approve cyber collection, but with some significant limitations, akin to, or perhaps even more restrictive than, the restrictions on multiple communication transactions (MCTs) imposed in 2011. I say that, in part, because of language in USA F-ReDux (section 301) permitting the government to use information improperly collected under Section 702 if the FISA Court imposed new minimization procedures. While that might just refer back to the hypothetical 2011 example (in which the government had to destroy all the data), I think it as likely that Congress was trying to permit the government to retain data questioned later. I say that also because of this language added to NSA’s 2014 minimization procedures:
Additionally, nothing in these procedures shall restrict NSA’s ability to conduct vulnerability or network assessments using information acquired pursuant to section 702 of the Act in order to ensure that NSA systems are not or have not been compromised. Notwithstanding any other section in these procedures, information used by NSA to conduct vulnerability or network assessments may be retained for one year solely for that limited purpose. Any information retained for this purpose may be disseminated only in accordance with the applicable provisions of these procedures.
That is, the FISC approved new procedures that permit the retention of vulnerability information for use domestically, but it placed even more restrictions on it (retention for just one year, retention solely for the defense of that agency’s network, which presumably prohibits its use for criminal prosecution, not to mention its dissemination to other agencies, other governments, and corporations) than it had on MCTs in 2011.
To be sure, there is language in both 2011 and 2014 NSA MPs that permits the agency to retain and disseminate domestic communications if it is necessary to understand a communications security vulnerability.
the communication is reasonably believed to contain technical data base information, as defined in Section 2(i), or information necessary to understand or assess a communications security vulnerability. Such communication may be provided to the FBI and/or disseminated to other elements of the United States Government. Such communications may be retained for a period sufficient to allow a thorough exploitation and to permit access to data that are, or are reasonably believed likely to become, relevant to a current or future foreign intelligence requirement. Sufficient duration may vary with the nature of the exploitation.
But at least on its face, that language is about retaining information to exploit (offensively) a communications vulnerability. Whereas the more recent language — which is far more restrictive — appears to address retention and use of data for defensive purposes.
The 2011 ruling strongly suggested that FISC would interpret Section 702 to prohibit much of what Mayer envisioned in his model. And the addition to the 2014 minimization procedures leads me to believe FISC did approve very limited use of Section 702 for cyber security, but with such significant limitations on it (again, presumably stemming from 50 USC 1809(a)’s prohibition on disclosing data intentionally collected domestically) that the IC wanted to find another way. In other words, I suspect NSA (and FBI, which was working closely with NSA to get such a certificate in 2012) got their cyber certificate, only to discover it didn’t legally permit them to do what they wanted to do.
And while I’m not certain, I believe that in ensuring that DHS’ scrubs get dismantled, CISA gives the IC a way to do what it would have liked to do with a FISA 702 cyber certificate.
Let’s go back to Mayer’s model of what the IC would probably like to do: a private company finds a threat and strips out the private data, leaving just a selector; NSA then deploys the selector on backbone traffic (presumably whatever parts of the Internet backbone NSA has access to via its upstream collection, which is understood to run on infrastructure owned by the telecoms), which reproduces the private data just stripped out.
But in fact, Step 4 of Mayer’s model — NSA deploys the signature as a selector on the Internet backbone — is not done by the NSA. It is done by the telecoms (that’s the Section 702 cooperation part). So his model would really be private business > DHS > NSA > private business > NSA > treatment under NSA’s minimization procedures if the data were handled under upstream 702. Ultimately, the backbone operator is still going to be the one scanning the Internet for more instances of that selector; the question is just how much data gets sucked in with it and what the government can do once it gets it.
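To make the flow concrete, here is a minimal toy sketch of the selector model described above (all names, data, and functions are hypothetical illustrations; real scanning happens on live backbone traffic, not in-memory lists). The point is that even if a company carefully strips private data before sharing, deploying the bare selector against traffic re-acquires whole transactions, private fields included:

```python
# Illustrative sketch of the selector-based collection flow, under
# the stated assumption that a match returns an entire transaction.

def strip_private_data(threat_report):
    """Steps 1-3: the company shares only the bare signature/selector,
    scrubbing the private customer data from its report."""
    return {"selector": threat_report["selector"]}

def scan_backbone(traffic, selector):
    """Step 4: the backbone operator scans transactions for the selector.
    A match returns the WHOLE transaction -- private fields included."""
    return [txn for txn in traffic if selector in txn["payload"]]

# A company's internal threat report, with private customer data attached.
report = {"selector": "evil-c2.example.com",
          "customer_email": "alice@example.com"}

# Hypothetical backbone transactions.
traffic = [
    {"payload": "GET http://evil-c2.example.com/beacon",
     "sender": "alice@example.com",
     "body": "private message content"},
    {"payload": "GET http://benign.example.org/",
     "sender": "bob@example.com",
     "body": "unrelated traffic"},
]

shared = strip_private_data(report)  # the "scrubbed" indicator
hits = scan_backbone(traffic, shared["selector"])

# The scrub is undone: the match re-acquires the full transaction,
# sender identity and message content included.
print(hits[0]["sender"])  # alice@example.com
```

This is why the gatekeeping question is less about what the company scrubs than about what happens to the data once the scan runs.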
And that’s important because CISA codifies private companies’ authority to do that scan.
For all the discussion of CISA and its definitions, there has been little discussion of what might happen at the private entities. But the bill affirmatively authorizes private entities to monitor their systems, broadly defined, for cybersecurity purposes.
(a) AUTHORIZATION FOR MONITORING.—
(1) IN GENERAL.—Notwithstanding any other provision of law, a private entity may, for cybersecurity purposes, monitor—
(A) an information system of such private entity;
(B) an information system of another entity, upon the authorization and written consent of such other entity;
(C) an information system of a Federal entity, upon the authorization and written consent of an authorized representative of the Federal entity; and
(D) information that is stored on, processed by, or transiting an information system monitored by the private entity under this paragraph.
(2) CONSTRUCTION.—Nothing in this subsection shall be construed—
(A) to authorize the monitoring of an information system, or the use of any information obtained through such monitoring, other than as provided in this title; or
(B) to limit otherwise lawful activity.
The bill defines “monitor” this way:
(14) MONITOR.—The term ‘‘monitor’’ means to acquire, identify, or scan, or to possess, information that is stored on, processed by, or transiting an information system.
That is, CISA affirmatively permits private companies to scan, identify, and possess cybersecurity threat information transiting or stored on their systems. It permits private companies to conduct precisely the same kinds of scans the government currently obligates telecoms to do under upstream 702, covering data both transiting their systems (which for the telecoms means transiting their backbone) and stored on their systems (so, cloud storage). To be sure, big telecom and Internet companies do that anyway for their own protection, though this bill may extend the authority into cloud servers and competing tech companies’ content that transits the telecom backbone. And it specifically does so in anticipation of sharing the results with the government, with very limited requirements to scrub the data beforehand.
Thus, CISA permits the telecoms to conduct, for cybersecurity purposes, the same kinds of scans they currently conduct for foreign intelligence purposes, in ways that (unlike the upstream 702 usage we know about) would not be required to have a foreign nexus. The people currently scanning the backbone can keep doing so, only now the results can be turned over to and used by the government without regard to whether the signature has a foreign tie. Unlike FISA, CISA permits the government to collect entirely domestic data.
Of course, there’s no requirement that the telecoms scan for every signature the government shares with them and share the results with the government. But both Verizon and AT&T have a significant chunk of federal business (which just got put out for rebid on a contract that will amount to $50 billion), and they surely would be asked to scan the networks supporting federal traffic for those signatures (remember, this entire model of scanning domestic backbone traffic got implicated in Qwest losing a federal bid, which led to Joe Nacchio’s prosecution). So they’ll be scanning some part of the networks they operate with those signatures; CISA just makes clear they can also scan their non-federal backbone if they want to. And the telecoms are outspoken supporters of CISA, so we should presume they plan to share promiscuously under this bill.
Assuming they do so, CISA offers several more improvements over FISA.
First — perhaps most important for the government — there are no pesky judges. The FISC gets a lot of shit for being a rubber stamp, but for years its judges have tried to keep the government operating in the vicinity of the Fourth Amendment through their role in reviewing minimization procedures. Even John Bates, who was largely a pushover for the IC, succeeded in getting the government to agree that it can’t disseminate domestic data that it intentionally collected. And if I’m right that the FISC gave the government a cyber certificate but sharply limited how it could use that data, then it did so on precisely this issue. Significantly, CISA continues a trend we already saw in USA F-ReDux, wherein the Attorney General, rather than a judge, gets to decide whether privacy procedures (no longer named minimization procedures!) are adequate. Equally significant, while CISA permits the use of CISA-collected data for a range of prosecutions, unlike FISA it requires no notice to defendants of where the government obtained that data.
In lieu of judges, CISA envisions PCLOB and Inspectors General conducting the oversight (as well as audits being possible though not mandated). As I’ll show in a follow-up post, there are some telling things left out of those reviews. Plus, the history of DOJ’s Inspector General’s efforts to exercise oversight over such activities offers little hope these entities, no matter how well-intentioned, will be able to restrain any problematic practices. After all, DOJ’s IG called out the FBI in 2008 for not complying with a 2006 PATRIOT Act Reauthorization requirement to have minimization procedures specific to Section 215, but it took until 2013, with three years of intercession from FISC and leaks from Edward Snowden, before FBI finally complied with that 2006 mandate. And that came before FBI’s current practice of withholding data from its IG and even some information in IG reports from Congress.
In short, given what we know of the IC’s behavior when there was a judge with some leverage over its actions, there is absolutely zero reason to believe that any abuses would be stopped under a system without any judicial oversight. The Executive Branch cannot police itself.
Finally, there’s the question of what happens at DHS. No matter what you think about NSA’s minimization procedures (and they do have flaws), they do ensure that data that comes in through NSA doesn’t get broadly circulated in a way that identifies US persons. The IC has increasingly bypassed this control since 2007 by putting FBI at the front of data collection, which means data can be shared broadly even outside of the government. But FISC never permitted the IC to do this with upstream collection. So any content (metadata was different) on US persons collected under upstream collection would be subjected to minimization procedures.
This CISA model eliminates that control too. After all, CISA, as written, would let FBI and NSA veto any scrub (including of content) at DHS. And incoming data (again, probably including content) would be shared immediately not only with FBI (which has been the vehicle for sharing NSA data broadly) but also Treasury and ODNI, which are both veritable black holes from a due process perspective. And what few protections for US persons are tied to a relevance standard that would be met by virtue of a tie to that selector. Thus, CISA would permit the immediate sharing, with virtually no minimization, of US person content across the government (and from there to private sector and local governments).
I welcome corrections to this model; I presume I’ve overstated how much of an improvement over FISA this program would be. But if this analysis is correct, then CISA would give the IC everything it would have wanted from a cybersecurity certificate under Section 702, with none of the (admittedly inadequate) limits such a certificate would have had and may in fact have. CISA would provide an administrative way to spy on US person (domestic) content, all without any judicial review.
All of which brings me back to why the IC wants this so badly. In at least one case, the IC did manage to use a combination of upstream and PRISM collection to stop an attempt to steal large amounts of data from a defense contractor. That doesn’t mean it’ll be able to do so at scale, but if, by offering various kinds of immunity, it can get all backbone providers to play along, it might be able to improve on that performance.
But CISA isn’t so much a cybersecurity bill as it is an Internet domestic spying bill, with permission to spy on a range of nefarious activities in cyberspace, including kiddie porn and IP theft. Because it permits spying on US person content, the bill may be far more useful for that purpose than for preventing actual hacks. That is, it won’t fix the hacking problem (it may make it worse by gutting Federal authority to regulate corporate cyber hygiene). But it will help police other kinds of activity.
If I’m right, the IC’s insistence that it needs CISA in the name of cybersecurity, without necessarily intending to accomplish it, makes more sense.
Update: This post has been tweaked for clarity.
Update, November 5: I should have written this post before I wrote this one. In it, I point to language in the August 26, 2014 Thomas Hogan opinion reflecting earlier approval, at least in the FBI minimization procedures, to share cyber signatures with private entities. The first approval was on September 20, 2012. The FISC approved the version still active in 2014 on August 30, 2013. (See footnote 19.) That certainly suggests FISC approved cyber sharing more broadly than the 2011 opinion might have suggested, though I suspect it still included more restrictions than CISA would. Moreover, if the language only got approved for the FBI minimization procedures, it would apply just to PRISM production, given that the FBI does not (or at least didn’t used to) get unminimized upstream production.
I wanted to analyze the Administration’s statement on the Cybersecurity Information Sharing Act, which I’ve reproduced in its entirety below. Opponents of the bill feel the statement betrays Obama’s stated (though usually not performed) commitment to civil liberties. And they point to the statement’s criticism of defensive measures (see the fifth paragraph below) as one reason the President should oppose this bill but isn’t.
Of course, that misconstrues the purpose of such statements, which is to influence the shape of bills as the sausage gets made. As such, this statement commends Richard Burr for concessions he has made, while pointing to the areas where the Administration will push for improvement.
In addition to the defensive measures provision, the chief area where the White House is pushing for improvements is where CISA is most vulnerable: the centrality of DHS to the process.
As such, the Administration supports Senate passage of S. 754, while continuing to work with the Congress as S.754 moves through the legislative process to ensure further important changes are made to the bill, including, but not limited to, preserving the leadership of civilian agencies in domestic cybersecurity.
Focusing real-time sharing through one center at DHS enhances situational awareness, facilitates robust privacy controls, and helps to ensure oversight of such sharing. In addition, centralizing this sharing mechanism through DHS will facilitate more effective real-time sharing with other agencies in the most efficient manner.
Therefore, in order to ensure a focused approach and to facilitate streamlined information sharing while ensuring robust privacy protections, the Administration will strongly oppose any amendments that would provide additional liability-protected sharing channels, including expanding any exceptions to the DHS portal. In addition, the Administration remains concerned that the bill’s authorization to share with any Federal entity, notwithstanding any other provision of law, weakens the bill’s requirement that information be shared with a civilian entity.
Basically, the Administration is still trying to stave off a Tom Cotton effort to let entities share directly with the FBI. Cotton’s amendment is bad — but it mostly just exposes the bill for what it really is.
Moreover, the White House is nuts if they think the current structure will reflect meaningful involvement from DHS. As I noted the other day — and DailyDot reconfirmed today — other agencies (like the FBI) can veto any meaningful involvement from DHS.
So I’m not really surprised by the content of this statement, and the Administration’s signals they want to push defensive measures and DHS involvement in a particular direction. I am concerned about their apparent analysis of the state of the bill.
An important building block for improving the Nation’s cybersecurity is ensuring that private entities can collaborate to share timely cyber threat information with each other and the Federal Government. In January, the President submitted a legislative proposal to the Congress with the goal of, among other things, facilitating greater information sharing amongst the private sector and with the Federal Government. The Administration’s proposal provides a focused approach to incentivize more cybersecurity information sharing while ensuring the protection of privacy, confidentiality, and civil liberties. As the Administration has previously stated, information sharing legislation must carefully safeguard privacy, confidentiality, and civil liberties, preserve the long-standing respective roles and missions of civilian and intelligence agencies, and provide for appropriate sharing with targeted liability protections. The Administration is encouraged by the strong bipartisan support for cybersecurity information sharing legislation in the Congress.
The Administration appreciates that the Senate Select Committee on Intelligence adopted several amendments to S. 754 to address some of the Administration’s most significant concerns and is further encouraged that the bill’s sponsor has proposed additional changes on the Senate floor. This work has strengthened the legislation and incorporated important modifications to better protect privacy. As such, the Administration supports Senate passage of S. 754, while continuing to work with the Congress as S.754 moves through the legislative process to ensure further important changes are made to the bill, including, but not limited to, preserving the leadership of civilian agencies in domestic cybersecurity.
The Administration supports S. 754’s requirement that an entity sharing information with the Federal Government must share that information through the Department of Homeland Security (DHS) in order to receive liability protections. Moreover, S. 754 requires that such sharing be governed by privacy protection guidelines and that DHS must further disseminate such information in real-time with other Federal agencies. The Administration supports real-time sharing amongst Federal agencies with appropriate privacy protections, and is currently developing such a capability at DHS. Focusing real-time sharing through one center at DHS enhances situational awareness, facilitates robust privacy controls, and helps to ensure oversight of such sharing. In addition, centralizing this sharing mechanism through DHS will facilitate more effective real-time sharing with other agencies in the most efficient manner.
Therefore, in order to ensure a focused approach and to facilitate streamlined information sharing while ensuring robust privacy protections, the Administration will strongly oppose any amendments that would provide additional liability-protected sharing channels, including expanding any exceptions to the DHS portal. In addition, the Administration remains concerned that the bill’s authorization to share with any Federal entity, notwithstanding any other provision of law, weakens the bill’s requirement that information be shared with a civilian entity. This remains a significant concern, and the Administration is eager to work with the Congress to seek a workable solution.
S. 754 authorizes the use of certain potentially disruptive defensive measures in response to network incidents, provisions that were not included in the Administration’s proposal. The use of defensive measures raises significant legal, policy, and diplomatic concerns and, without appropriate safeguards, can have a direct deleterious impact on foreign policy, the integrity of information systems, and cybersecurity. The Administration is encouraged, however, that the bill’s sponsor has proposed changes that would limit an entity from employing a defensive measure that would provide it unauthorized access to another entity’s network. Though the Administration remains concerned that the bill’s authorization to operate defensive measures may prevent the application of other laws such as State common-law tort remedies, it is encouraged that the additional changes will help to appropriately constrain the use of defensive measures. The Administration is committed to continue working with stakeholders to address remaining concerns.
The Administration commends the Committee for recognizing that cybersecurity requires a whole-of-government approach and that information must be appropriately shared within the Federal Government. This sharing must be consistent with certain narrow cybersecurity use restrictions, as well as privacy, confidentiality, and civil liberties protections and transparent oversight. The Administration commends the Committee for requiring that intra-governmental sharing be governed by a set of policies and procedures developed by the Federal Government to protect privacy and civil liberties. The Administration is encouraged that the bill’s sponsor has proposed changes that would preserve the Federal Government’s ability to implement privacy protective policies and procedures. The Administration is encouraged by changes the bill’s sponsor has proposed to ensure that information sharing provided for in the bill is narrowly focused on the important purpose of this bill, the protection of information systems and information from cybersecurity threats and security vulnerabilities. Finally, the Administration is pleased that S.754 includes provisions that will improve the cybersecurity of Federal networks and systems. Consistent with the bill’s requirements, the Administration will implement this authority in a manner that both enhances cybersecurity and continues to protect the confidentiality, availability, and integrity of Federal agencies’ data.
Information sharing is one piece of a larger suite of legislation needed to provide the private sector, the Federal Government, and law enforcement with the necessary tools to combat cyber threats, and create for consumers and businesses a strong and consistent notification standard for breaches of personal data. In addition to updating information sharing statutes, the Congress should incorporate privacy, confidentiality protection, and civil liberties safeguards into all aspects of cybersecurity legislation.
This morning, the Senate voted in favor of cloture on the new (this morning) manager’s amendment on CISA.
Here’s the roll call, which was a blowout. Votes against cloture were:
Rand Paul’s amendment — requiring companies to adhere to their contract with customers — failed by a two-thirds margin (I will update with roll call when it’s posted).
One significant change in today’s manager’s amendment was that Sheldon Whitehouse’s crappy CFAA amendment got replaced in its entirety with this language:
SEC. 408. STOPPING THE FRAUDULENT SALE OF FINANCIAL INFORMATION OF PEOPLE OF THE UNITED STATES.
Section 1029(h) of title 18, United States Code, is amended by striking ‘‘title if—’’ and all that follows through ‘‘therefrom.’’ and inserting ‘‘title if the offense involves an access device issued, owned, managed, or controlled by a financial institution, account issuer, credit card system member, or other entity organized under the laws of the United States, or any State, the District of Columbia, or other Territory of the United States.’’
This basically protects Americans’ data if the data is owned by a US entity, regardless of where the attack on it was launched from (which was the inoffensive part of Whitehouse’s CFAA amendment). Given what Tom Carper said yesterday, we still need to be vigilant against it returning in conference, but for now this is a solid compromise.
In Salon, I’ve got my take on the hack of John Brennan’s AOL account by a 13-year-old stoner.
While I think it sucks that WikiLeaks posted unredacted data on Brennan’s family, I’m not at all sympathetic to Brennan himself. After all, he’s the guy who decided hacking his SSCI overseers would be appropriate. He’s one of the people who’ve been telling us for years that we have no expectation of privacy in the kinds of data hackers obtained from Verizon — alternate phone number, account ID, password, and credit card information.
But most of all, I think we should remember that Brennan left this data on an AOL server through his entire Obama Administration career, which includes 4 years of service as Homeland Security Czar, a position which bears key responsibility for cybersecurity.
Finally, this hack exposes the Director of the CIA exercising almost laughably bad operational security. The files appear to date from the period leading up to Brennan’s appointment as White House Homeland Security Czar, a role in which a big part of his job was to prevent hacks in this country. To think he was storing sensitive documents on an AOL server — AOL! — while in that role really demonstrates how laughable the practices of those who purport to be fighting hackers as the biggest threat to the country can be. For at least 6 years, the Homeland Security Czar, then the CIA Director — one of the key intelligence officials throughout the Obama Administration — left that stuff out there for some teenagers to steal.
Hacking is a serious problem in this country. Like Brennan, private individuals and corporations suffer serious damage when they get hacked (and the OPM hack of Brennan’s materials may be far more serious). Rather than really fixing the problem, the intelligence community is pushing to give corporations regulatory immunity in exchange for sharing information that won’t be all that useful.
A far more useful initial step in securing the country from really basic types of hacking would be for people like Brennan to stop acting in stupid ways, to stop leaving both their own and the public’s sensitive data in places where even stoned kids can obtain it, to provide a good object lesson in how to limit the data that might be available for malicious hackers to steal.
I would add, however, that there’s one more level of responsibility here.
As I noted in my piece, Brennan’s not the only one who got his security clearance application stolen recently. He is joined in that by 21 million other people, most of whom don’t have a key role in cybersecurity and counterintelligence. Most of those 21 million people haven’t even got official notice their very sensitive data got hacked by one of this country’s adversaries — not even those people who might be particularly targeted by China. Like Brennan, the families of those people have all been put at risk. Unlike Brennan, they didn’t get to choose to leave that data sitting on a server.
In fact, John Brennan and his colleagues have not yet put in place a counterintelligence plan to protect those 21 million people.
If it sucks that John Brennan’s kids got exposed by a hacker (and it does), then it sucks even more that people with far fewer protections and far less authority to fix things got exposed, as well.
John Brennan should focus on that, not on the 13-year-old stoner who hacked his AOL account.
A key change — one Burr and Feinstein have highlighted in their comments on the floor — is the integration of DHS even more centrally into the data intake process. Just as one example, the MA adds the Secretary of Homeland Security to the process of setting up the procedures about information sharing.
Not later than 60 days after the date of the enactment of this Act, the Attorney General and the Secretary of Homeland Security shall, in coordination with the heads of the appropriate Federal entities, develop and submit to Congress interim policies and procedures relating to the receipt of cyber threat indicators and defensive measures by the Federal Government. [my emphasis]
That change is applied throughout.
But there’s one area where adding more DHS involvement appears to be just a show: where it permits DHS to conduct a scrub of the data on intake (as Feinstein described, this was an attempt to integrate Tom Carper’s and Chris Coons’ amendments doing just that).
This is also an issue DHS raised in response to Al Franken’s concerns about how CISA would affect their current intake procedure.
To require sharing in “real time” and “not subject to any delay [or] modification” raises concerns relating to operational analysis and privacy.
First, it is important for the NCCIC to be able to apply a privacy scrub to incoming data, to ensure that personally identifiable information unrelated to a cyber threat has not been included. If DHS distributes information that is not scrubbed for privacy concerns, DHS would fail to mitigate and in fact would contribute to the compromise of personally identifiable information by spreading it further. While DHS aims to conduct a privacy scrub quickly so that data can be shared in close to real time, the language as currently written would complicate efforts to do so. DHS needs to apply business rules, workflows and data labeling (potentially masking data depending on the receiver) to avoid this problem.
Second, customers may receive more information than they are capable of handling, and are likely to receive large amounts of unnecessary information. If there is no layer of screening for accuracy, DHS’ customers may receive large amounts of information with dubious value, and may not have the capability to meaningfully digest that information.
While the current Cybersecurity Information Sharing Act recognizes the need for policies and procedures governing automatic information sharing, those policies and procedures would not effectively mitigate these issues if the requirement to share “not subject to any delay [or] modification” remains.
To ensure automated information sharing works in practice, DHS recommends requiring cyber threat information received by DHS to be provided to other federal agencies in “as close to real time as practicable” and “in accordance with applicable policies and procedures.”
Effectively, DHS explained that if it was required to share data in real time, it would be unable to scrub out unnecessary and potentially burdensome data, and suggested that the “real time” requirement be changed to “as close to real time as practicable.”
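DHS’s described workflow — screening incoming indicators for PII unrelated to the threat, and masking some fields depending on the receiver — can be sketched as a simple filter. This is purely illustrative: the field names, the receiver categories, and the masking rule are all my assumptions, not anything specified by DHS or the bill.

```python
# Illustrative sketch of a privacy scrub on incoming cyber threat
# indicators, in the spirit of DHS's described intake process.
# All field names and business rules here are hypothetical.

PII_FIELDS = {"email", "account_id", "phone", "name"}          # assumed PII labels
THREAT_FIELDS = {"malware_hash", "source_ip", "attack_vector"}  # assumed threat labels

def scrub_indicator(indicator: dict, receiver: str) -> dict:
    """Drop PII unrelated to the threat, then mask the source IP for
    receivers not cleared to see it (an assumed business rule)."""
    cleaned = {k: v for k, v in indicator.items() if k in THREAT_FIELDS}
    if receiver != "law_enforcement":
        cleaned["source_ip"] = "MASKED"
    return cleaned

raw = {
    "malware_hash": "ab12...",
    "source_ip": "198.51.100.7",
    "email": "victim@example.com",   # PII unrelated to the threat: dropped
    "attack_vector": "phishing",
}
print(scrub_indicator(raw, receiver="sector_partner"))
```

Even this toy version makes DHS’s point: the scrub is a processing step that takes time, which is exactly what a “real time, not subject to any delay or modification” mandate forbids.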
But compare DHS’s concerns with the actual language added to the description of the information-sharing portal (the new language is in italics).
(3) REQUIREMENTS CONCERNING POLICIES AND PROCEDURES.—Consistent with the guidelines required by subsection (b), the policies and procedures developed and promulgated under this subsection shall—
(A) ensure that cyber threat indicators shared with the Federal Government by any entity pursuant to section 104(c) through the real-time process described in subsection (c) of this section—
(i) are shared in an automated manner with all of the appropriate Federal entities;
(ii) are only subject to a delay, modification, or other action due to controls established for such real-time process that could impede real-time receipt by all of the appropriate Federal entities when the delay, modification, or other action is due to controls—
(I) agreed upon unanimously by all of the heads of the appropriate Federal entities;
(II) carried out before any of the appropriate Federal entities retains or uses the cyber threat indicators or defensive measures; and
(III) uniformly applied such that each of the appropriate Federal entities is subject to the same delay, modification, or other action; and
This section permits any one of the “appropriate Federal agencies” to veto such a scrub. Presumably, the language only exists in the bill because one of the “appropriate Federal agencies” has already vetoed the scrub. NSA (in the guise of “appropriate Federal agency” DOD) would be the one that would scare people, but such a veto would be just as likely to come from FBI (in the guise of “appropriate Federal agency” DOJ), and given Tom Cotton’s efforts to send this data even more quickly to FBI, that’s probably who vetoed it.
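The quoted statutory logic — a delay or modification is permitted only if agreed upon unanimously by all of the appropriate Federal entities — reduces to a unanimity check, which is precisely what gives each single agency a veto. A minimal sketch (the entity list is a placeholder, not the bill’s definition):

```python
# The quoted bill text allows a "delay, modification, or other action"
# (e.g., a DHS privacy scrub) only with unanimous agreement among the
# appropriate Federal entities. Unanimity means any one "no" is a veto.

APPROPRIATE_ENTITIES = ["DHS", "DOJ", "DOD", "Commerce", "Energy", "Treasury", "ODNI"]

def scrub_permitted(approvals: dict) -> bool:
    # An entity missing from the approvals dict counts as not agreeing.
    return all(approvals.get(entity, False) for entity in APPROPRIATE_ENTITIES)

votes = {entity: True for entity in APPROPRIATE_ENTITIES}
votes["DOJ"] = False           # a single objection, say from FBI's parent agency
print(scrub_permitted(votes))  # prints False: one veto blocks the scrub
```
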
If you had any doubts the Intelligence Community is ordering up what it wants in this bill, the language permitting them a veto over privacy protections should dispel those doubts.
On top of NSA and FBI’s veto authority, there’s an intentional logical problem here. DHS is one of the “appropriate Federal agencies,” but DHS is the entity that would presumably do the scrub. Yet if the scrub must be carried out before any appropriate Federal entity — DHS included — retains or uses the data, it’s not clear how DHS could perform it.
In short, this seems designed to lead people to believe there might be a scrub (or rather, that under CISA, DHS would continue to do the privacy scrub it is currently doing, though it is just beginning to do that scrub automatically) when, for several reasons, that also seems to be ruled out by the bill. And ruled out because one “appropriate Federal agency” (like I said, I suspect FBI) plans to veto such a plan.
So it has taken this Manager’s Amendment to explain why we need CISA: to make sure that DHS doesn’t do the privacy scrubs it is currently doing.
I’ll explain in a follow-up post why it would be so important to eliminate DHS’ current scrub on incoming data.
As I noted, Ron Wyden objected to unanimous consent on CISA yesterday because Sheldon Whitehouse’s crappy amendment, which makes the horrible CFAA worse, was going to get a vote. Yesterday, it got amended, but as CDT analyzed, it remains problematic and overbroad.
This afternoon, Whitehouse took to the Senate floor to complain mightily that his amendment had been pulled — presumably it was pulled to get Wyden to withdraw his objections. Whitehouse complained as if this were the first time amendments had not gotten a vote, though that happens all the time with amendments that support civil liberties. He raged about the Masters of the Universe who had pulled his amendment, and suggested a pro-botnet conference had forced the amendment to be pulled, rather than people who have very sound reasons to believe the amendment was badly drafted and dangerously expanded DOJ’s authority.
For all Whitehouse’s complaining, though, it’s likely the amendment is not dead. Tom Carper, who as Ranking Member of the Senate Homeland Security Committee would almost certainly be included in any conference on the bill, rose just after Whitehouse. He said if the provision ends up in the bill, “we will conference, I’m sure, with the House and we will have an opportunity to revisit this, so I just hope you’ll stay in touch with those of us who might be fortunate enough to be a conferee.”
Richard Burr moved the Cybersecurity Information Sharing Act forward by introducing a manager’s amendment that includes limited privacy tweaks (permitting a scrub at DHS and limiting the use of CISA information to cyber crimes, though those still include preventing threats to property), with a bunch of bigger privacy-fix amendments, plus a Tom Cotton one and a horrible Sheldon Whitehouse one, called as non-germane amendments requiring 60 votes.
Other than that, Burr, Dianne Feinstein, and Ron Wyden spoke on the bill.
Burr did some significant goalpost moving. Whereas in the past, he had suggested that CISA might have prevented the Office of Personnel Management hack, today he suggested CISA would limit how much data got stolen in a series of hacks. His claim is still false (in almost all the hacks he discussed, the attack vector was already known, but knowing it did nothing to prevent the continued hack).
Burr also likened this bill to a neighborhood watch, where everyone in the neighborhood looks out for the entire neighborhood. He neglected to mention that that neighborhood watch would also include that nosy granny type who reports every brown person in the neighborhood, and features self-defense just like George Zimmerman’s neighborhood watch concept does. Worse, Burr suggested that those not participating in his neighborhood watch had no protection, effectively suggesting that some of the companies best at securing themselves — like Google — were not protecting customers. Burr even suggested he didn’t know anything about the companies that oppose the bill, which is funny, because Twitter opposes the bill, and Burr has a Twitter account.
Feinstein was worse. She mentioned the OPM hack and then really suggested that a series of other hacks — including both the Sony hack and the DDOS attacks on online banking sites that stole no data! — were worse than the OPM hack.
Yes, the Vice Chair of SSCI really did say that the OPM hack was less serious than a bunch of other hacks that didn’t affect the national security of this country. Which, if I were one of the 21 million people whose security clearance data had been compromised, would make me very very furious.
DiFi also used language that made it clear she doesn’t really understand how the information sharing portal works. She said something like, “Once cyber information enters the portal it will move at machine speed to other federal agencies,” as if a conveyor belt will carry information from DHS to FBI.
Wyden mostly pointed out that this bill doesn’t protect privacy. But he did call out Burr on his goalpost moving on whether the bill would prevent (his old claim) or just limit the damage of (his new one) attacks that it wouldn’t affect at all.
Wyden did, however, object to unanimous consent because Whitehouse’s crappy amendment was being given a vote, which led Burr to complain that Wyden wasn’t going to hold this up.
Finally, Burr came back on the floor, not only to badmouth companies that oppose this bill again (and insist it was voluntary so they shouldn’t care) but also to do what I thought even he wouldn’t do: suggest we need to pass CISA because a 13-year-old stoner hacked the CIA Director.
I’ve written about Ed Markey’s SPY Act, one of several efforts to respond to network insecurity in cars. Fred Upton, who represents Kalamazoo, MI, is pushing an alternative version as part of larger reform to the National Highway Traffic Safety Administration. It appears to be an attempt to forestall regulation from other directions. Update: Here’s a draft of the bill.
More importantly, though, the bill establishes a $1 million cap on damages for manufacturers who refuse to have a policy or who violate the one they have, and it pre-empts FTC action on unfair trade practices (of the sort that just got Wyndham Hotels in trouble).
Car companies are going to opt to pay that $1M instead of telling their customers how they’re using their driving data.
The cybersecurity requirement likewise serves more to protect companies than to impose sound security on them. Whereas Markey’s bill would require certain things from a cybersecurity policy, Upton’s would let the industry establish a standard, then permit manufacturers to submit plans fulfilling “some or all” of those standards. Once submitted, those plans would disappear: they couldn’t be FOIAed, and the FTC couldn’t sue manufacturers that violated their terms.
This section exempts vehicle security and integrity plans submitted by manufacturers from Freedom of Information Act requests.
This section provides that a manufacturer that violates its vehicle security and integrity plan is subject to civil penalties. A manufacturer is not subject to those civil penalties (but doesn’t get the liability protections) if it submits a vehicle security and integrity plan that is approved by the Administrator and implements and maintains the best practices identified in their plan. This section provides that the best practices issued by the Council may not provide a basis for or evidence of liability against a manufacturer whose cybersecurity practices are alleged to be inconsistent with the best practices if the manufacturer has not filed a vehicle security and integrity plan and if the plan does not include the cybersecurity practice at issue.
This section also establishes a safe harbor from Section 5 of the Federal Trade Commission Act with respect to the best practices identified and implemented and maintained in the vehicle security and integrity plan submitted by a manufacturer.
In other words, these plans don’t have to be sound if they can get NHTSA’s sign-off (remember, NHTSA by its own admission doesn’t have software expertise, which was why Toyota got away with its acceleration problem for so long), and once they were in place, a company that mostly fulfilled them would be largely immune from regulation.
Which is why I believe this section does what I’m afraid it does: make it harder for independent researchers to review carmakers’ code.
This section establishes that it is unlawful for any person to access, without authorization, electronic control units or critical safety systems in a vehicle, or other systems containing driving data either wirelessly or through a wired connection. It establishes a civil penalty of $100,000 for a person who violates this section.
The actual language of the bill does not include a researcher’s exception.
(1) PROHIBITION.—It shall be unlawful for any person to access, without authorization, an electronic control unit or critical system of a motor vehicle, or other system containing driving data for such motor vehicle, either wirelessly or through a wired connection.
It also imposes a penalty for each thing hacked, so doing research would get really expensive quickly.
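Since the draft sets a $100,000 civil penalty per unauthorized access, a researcher’s exposure scales with each control unit probed. A back-of-the-envelope calculation (the per-vehicle ECU count here is a hypothetical, not a figure from the bill):

```python
# How per-access penalties stack up for a security researcher under
# the draft's $100,000 civil penalty. The ECU count is hypothetical.

PENALTY_PER_ACCESS = 100_000   # civil penalty in the discussion draft

ecus_probed = 30               # assumed: a modern car can contain dozens of ECUs
exposure = PENALTY_PER_ACCESS * ecus_probed
print(f"${exposure:,}")        # prints $3,000,000 for surveying one vehicle
```
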
Update: NHTSA is no more impressed than I am.
The Committee’s discussion draft includes an important focus on cybersecurity, privacy and technology innovations, but the current proposals may have the opposite of their intended effect. By providing regulated entities majority representation on committees to establish appropriate practices and standards, then enshrining those practices as de facto regulations, the proposals could seriously undermine NHTSA’s efforts to ensure safety. Ultimately, the public expects NHTSA, not industry, to set safety standards.
Nor are the privacy people at the FTC, who read the privacy provisions as even worse than I did.
Like me, the FTC reads the hacking provision as prohibiting research, thus leading to less cybersecurity.
By prohibiting such access even for research purposes, this provision would likely disincentivize such research, to the detriment of consumers’ privacy, security, and safety.
And it has the same concerns I do about providing immunity for crappy cybersecurity practices.
Finally, the proposed safe harbor is so broad that it would immunize manufacturers from liability even as to deceptive statements made by manufacturers relating to the best practices that they implement and maintain. For example, false claims on a manufacturer’s website about its use of firewalls, encryption, or other specific security features would not be actionable if these subjects were also covered by the best practices.
In sum, the Commission understands the desire to provide businesses with certainty and incentives, in the form of safe harbors, to implement best practices. However, the security provisions of the discussion draft would allow manufacturers to receive substantial liability protections in exchange for potentially weak best practices instituted by a Council that they control. The proposed legislation, as drafted, could substantially weaken the security and privacy protections that consumers have today.
Many many many outlets are reporting that China has continued conducting economic espionage even after Xi Jinping agreed to stop doing it. They base that claim on this post from CrowdStrike, a big cybersecurity contractor that spends a lot of time feeding the press scary stories about hacking.
Here’s the proof they offer:
Over the last three weeks, CrowdStrike Falcon platform has detected and prevented a number of intrusions into our customers’ systems from actors we have affiliated with the Chinese government. Seven of the companies are firms in the Technology or Pharmaceuticals sectors, where the primary benefit of the intrusions seems clearly aligned to facilitate theft of intellectual property and trade secrets, rather than to conduct traditional national-security related intelligence collection which the Cyber agreement does not prohibit.
In addition to preventing these intrusions, the CrowdStrike Falcon platform also provided full visibility into every tool, command and technique used by the adversary. This allowed us to determine that the hackers saw no need to change their usual tradecraft or previously used infrastructure in an attempt to throw off their scent.
They include a timeline showing 9 attempted intrusions into tech sector companies, and 2 into pharma companies, since Xi and President Obama signed the hacking agreement.
Now, even assuming that CrowdStrike has accurately attributed these intrusions to Chinese government hackers (CrowdStrike’s CTO was less confident in an interview with Motherboard), this still is not proof that China has violated the agreement.
After all, the key part of the agreement is on how stolen information gets used — whether it gets used to benefit individual companies or even entire sectors (the latter of which we do in our own spying, but never mind). If CrowdStrike prevented any data from being stolen, then it is impossible to assert that it was being stolen to benefit market actors without more evidence that the hackers were tasked by a market actor. Even the indictment everyone points to as proof that China engages in economic espionage did not allege that the People’s Liberation Army had shared the data involved in the single economic espionage charge with private sector companies, and given that the data in question pertained to nuclear technology, it’s not something that is proven just because it was stolen in the context of an ongoing relationship with the victim (even if that is a logical presumption to make).
The same is true here. When China hacked Google to spy on dissidents, that was clearly national security spying. When the US hacked Huawei to figure out how to backdoor its equipment, that was clearly national security spying. When the US used Microsoft and Siemens products to carry out Stuxnet, the tech companies were merely enabling targets. There are too many reasons to hack tech sector companies for solidly national security purposes to claim, just based on the sector itself, that it was done for economic espionage.
You can’t even point to the 2 Pharma intrusions to make the claim. A list of sites the State Department identified as critical infrastructure from a leaked 2009 cable includes over 25 pharmaceutical sites (including animal Pharma), many of them related to vaccines. If we’re treating pharmaceutical supply and research facilities as critical infrastructure, with the presumed consequent defensive surveillance of those sites, it is tough to argue the Chinese can’t consider our pharmaceutical companies making key drugs to be critical targets. Both can be argued to stem from the same public health concerns.
I’m not saying it’s impossible or even unlikely that these intrusions were attempted economic espionage. I’m saying that this isn’t evidence of it, and that the reporting repeating this claim has been far too credulous.
But that also points to one of the inherent problems with this deal (one pointed to by many people at the time). When last he testified on the subject, Jim Clapper didn’t even claim to have fully attributed the OPM hack. The same attribution and use problems exist here. China may steal data on an important new drug, but that’s not going to be enough to prove they stole it for commercial gain until they release their own copycat of the drug in several years and use it to undercut the US company’s product, and even then that may require a lot more data — collected by spying! — from inside the market companies themselves (in part because China engages in many other means of stealing data which aren’t the subject of a special agreement, which will make even the copycat instance hard to prove came from an intrusion).
China knew that, too, when it signed the agreement. It will take more than evidence of 11 attempted intrusions to prove that China is violating the agreement.
The policy discussion about the many ways the Cybersecurity Information Sharing Act not only doesn’t do much to prevent the hacking of public and private networks, but in key ways will make it worse, must be making its mark. Because the Financial Services Roundtable, one of the key corporatist groups backing the bill, has released this YouTube video full of scary warnings but absolutely zero explanation of what CISA might do to increase cybersecurity.
Indeed, the video is so context-free, it doesn’t note that Susan Collins, the first person who appears in it, has called for mandatory reporting from some sectors (notably, aviation), which is not covered in the bill and might be thwarted by it. Nor does it mention that the second person who appears, Department of Homeland Security Secretary Jeh Johnson, heads an agency that has raised concerns about the complexity of the scheme set up in CISA, not to mention privacy concerns. It doesn’t note that the third person shown, House Homeland Security Chair Michael McCaul, favored an approach that more narrowly targeted the information being shared and reinforced the existing DHS structure with his committee’s bill.
Instead of that discussion … “Death, destruction, and devastation!” “Another organization being hacked!” “Costing jobs!” “One half of America affected!” “What is it going to take to do something?!?!?!”
All that fearmongering and only one mention of the phrase “information sharing,” much less a discussion of what the bill in question really does.
In August, the head of the FSR, Tim Pawlenty, was more honest about what this bill does and why his banks like it so much: because it would help to hide corporate negligence.
“If I think you’ve attacked me and I turn that information over to the government, is that going to be subject to the Freedom of Information Act?” he said, highlighting a major issue for senators concerned about privacy.
“If so, are the trial lawyers going to get it and sue my company for negligent maintenance of data or cyber defenses?” Pawlenty continued. “Are my regulators going to get it and come back and throw me in jail, or fine me or sanction me? Is the public going to have access to it? Are my competitors going to have access to it? Are they going to be able to see my proprietary cyber systems in a way that will give up competitive advantage?”
That is, the banks want to share information with the government so it can help those private corporations protect themselves (without paying for it, really, since banks do so well at dodging taxes), without any responsibility or consequences in return. “Are my regulators going to get [information about how banks got attacked] and come back and throw me in jail, or fine me, or sanction me?” the banks’ paid lobbyist worries. As the author of this bill confirmed last week, this bill will undercut regulators’ authority in case of corporate neglect.
The example of banks dodging responsibility in the past — possibly aided by a similar (albeit more rigorous) information sharing regime under the Bank Secrecy Act — provides all the evidence for how stupid this bill would be. We need corporations to start bearing liability for outright negligence. And this bill provides several ways for them to avoid such liability.
Don’t succumb to the banksters’ fearmongering. America will be less safe if you do.