In response to a question Senate Intelligence Committee Chair Richard Burr posed during his committee’s Global Threat hearing yesterday, Jim Comey admitted that “going dark” is “overwhelmingly … a problem that local law enforcement sees” as they try to prosecute even things as mundane as a car accident.
Burr: Can you, for the American people, set a percentage of how much of that is terrorism and how much of that fear is law enforcement and prosecutions that take place in every town in America every day?
Comey: Yeah I’d say this problem we call going dark, which as Director Clapper mentioned, is the growing use of encryption, both to lock devices when they sit there and to cover communications as they move over fiber optic cables is actually overwhelmingly affecting law enforcement. Because it affects cops and prosecutors and sheriffs and detectives trying to make murder cases, car accident cases, kidnapping cases, drug cases. It has an impact on our national security work, but overwhelmingly this is a problem that local law enforcement sees.
Much later in the hearing Burr — whose committee oversees the intelligence function of the FBI but not its law enforcement function, which is overseen by the Senate Judiciary Committee — returned to the issue of encryption. Indeed, he seemed to back Comey’s point — that local law enforcement is facing a bigger problem with encryption than intelligence agencies — by describing District Attorneys from big cities and small towns complaining to him about encryption.
I’ve had more District Attorneys come to me that I have the individuals at this table. The District Attorneys have come to me because they’re beginning to get to a situation where they can’t prosecute cases. This is town by town, city by city, county by county, and state by state. And it ranges from Cy Vance in New York to a rural town of 2,000 in North Carolina.
Of course, the needs and concerns of these District Attorneys are the Senate Judiciary Committee’s job to oversee, not Burr’s. But he managed to make it his issue by calling those local law enforcement officials “those who complete the complement of our intelligence community” in promising to take up the issue (though he did make clear he was not speaking for the committee in his determination on the issue).
One of the responsibilities of this committee is to make sure that those of you at at the table and those that comp — complete the complement of our intelligence community have the tools through how we authorize that you need. [sic]
Burr raised ISIS wannabes, and earlier in the hearing Comey revealed the FBI still hadn’t been able to crack one of a number of phones owned by the perpetrators of the San Bernardino attack. And it is important for the FBI to understand whether the San Bernardino attack was directed by people in Saudi Arabia or Pakistan whom Tashfeen Malik associated with before coming to this country planning to engage in jihad.
But only an hour before, Jim Comey had gotten done explaining that the real urgency here is to investigate drug cases and car accident cases, not that terrorist attack.
The balance between security, intelligence collection, and law enforcement is going to look different if you’re weighing drug investigations against the personal privacy of millions than if you’re discussing terrorist communications, largely behind closed doors.
Yet Richard Burr is not above pretending this is about terrorism when it’s really about local law enforcement.
Way at the end of yesterday’s Senate Intelligence Committee Global Threats hearing, Tom Cotton asked his second leading question permitting an intelligence agency head to ask for surveillance, this time asking Admiral Mike Rogers whether he still wanted Section 702 (the first invited Jim Comey to ask for access to Electronic Communications Transactions Records with National Security Letters, as Chuck Grassley had asked before; Comey was just as disingenuous in his response as the last time he was asked).
Curiously, Cotton offered Rogers the opportunity to ask for Section 702 to be passed unchanged. Cotton noted that in 2012, James Clapper had asked for a straight reauthorization of Section 702.
Do you believe that Congress should pass a straight reauthorization of Section 702?
But Rogers (as he often does) didn’t answer that question. Instead, he simply asserted that he needed it.
I do believe we need to continue 702.
At this point, SSCI Chair Richard Burr piped up and noted the committee would soon start the preparation process for passing Section 702, “from the standpoint of the education that we need to do in educating and having Admiral Rogers bring us up to speed on the usefulness and any tweaks that may have to be made.”
Note this discussion comes in the wake of a description of some of the changes made in last year’s certification in this year’s PCLOB status report. That report notes that last year’s certification process approved the following changes:
As the status report implicitly notes, the government has released minimization procedures for all four agencies using Section 702 (in addition to NSA, CIA, and FBI, NCTC has minimization procedures), but it did so by releasing the now-outdated 2014 minimization procedures as the 2015 ones were being authorized. At some point, I expect we’ll see DEA minimization procedures, given that the shutdown of its own dragnet would lead it to rely more on NSA ones, but that’s just a wildarseguess.
According to Medium, Crackas With Attitude just hacked James Clapper and his wife.
One of the group’s hackers, who’s known as “Cracka,” contacted me on Monday, claiming to have broken into a series of accounts connected to Clapper, including his home telephone and internet, his personal email, and his wife’s Yahoo email. While in control of Clapper’s Verizon FiOS account, Cracka claimed to have changed the settings so that every call to his house number would get forwarded to the Free Palestine Movement.
The hacker also sent me a list of call logs to Clapper’s home number. In the log, there was a number listed as belonging to Vonna Heaton, an executive at Ball Aerospace and a former senior executive at the National Geospatial-Intelligence Agency. When I called that number, the woman who picked up identified as Vonna Heaton. When I told her who I was, she declined to answer any questions.
Viscerally, I’m laughing my ass off that Verizon (among others) has shared Clapper’s metadata without his authority. “Not wittingly,” they might say if he asks them about that. But I recognize that it’s actually not a good thing for someone in such a sensitive position to have his metadata exposed (I mean, to the extent that it wasn’t already exposed in the OPM hack).
I would also find some amusement if Clapper ends up being the first public victim of OmniCISA’s regulatory immunity for corporations.
Yahoo and Verizon can self-report this cyber intrusion to DHS, and if they do, the government can’t initiate regulatory action against them for providing inadequate protection against the hacking of the Director of National Intelligence’s data.
And whether or not Clapper is the first victim of OmniCISA’s regulatory immunity, he is among the first Americans that the passage of OmniCISA failed to protect from hacking.
Richard Burr has apparently stated publicly that he’s looking into not Marco Rubio’s serial leaking of classified information, but Ted Cruz’s alleged disclosure of classified information at last night’s debate. That’s particularly curious given that Rubio has gotten privileged access to this information on the Senate Intelligence Committee, whereas Cruz has not.
I assume Burr is thinking of this passage, in which Cruz explained how the USA Freedom Act phone program adds to the tools the intelligence community gets.
It strengthened the tools of national security and law enforcement to go after terrorists. It gave us greater tools and we are seeing those tools work right now in San Bernardino.
And in particular, what it did is the prior program only covered a relatively narrow slice of phone calls. When you had a terrorist, you could only search a relatively narrow slice of numbers, primarily land lines.
The USA Freedom Act expands that so now we have cell phones, now we have Internet phones, now we have the phones that terrorists are likely to use and the focus of law enforcement is on targeting the bad guys.
And the reason is simple. What he knows is that the old program covered 20 percent to 30 percent of phone numbers to search for terrorists. The new program covers nearly 100 percent. That gives us greater ability to stop acts of terrorism, and he knows that that’s the case.
Shortly thereafter, Rubio said,
RUBIO: Let me be very careful when answering this, because I don’t think national television in front of 15 million people is the place to discuss classified information.
Of course, that means Burr — who has the most privileged access to this information — just confirmed for ISIS and anyone else who wants to know (like, say, American citizens) that the IC is targeting “Internet phones” as well as the more limited set of call records the Section 215 phone dragnet used to incorporate, and in doing so getting closer to 100% of “calls” (which includes texting and messaging) in the US.
I’m not sure why Burr would give OpSec tips to our adversaries, all to score political points against Cruz. Obviously, his tolerance for Rubio’s serial leaks, which effectively confirmed the very same information, shows this isn’t about protecting sources and methods.
Maybe it’s time to boot Burr, in addition to Rubio, from SSCI before he continues to leak classified information?
A penny dropped for me, earlier this week, when Marco Rubio revealed that authorities are asking “a large number of companies” for “phone records.” Then, yesterday, he made it clear that these companies don’t fall under FCC’s definition of “phone” companies, because they’re not subject to that regulator’s 18 month retention requirement.
His comments clear up a few things that have been uncertain since February 2014, when some credulous reporters started reporting that the Section 215 phone dragnet — though they didn’t know enough to call it that — got only 20 to 30% of “all US calls.”
The claim came not long after Judge Richard Leon had declared the 215 phone dragnet to be unconstitutional. It also came just as the President’s Review Group (scoped to include all of the government’s surveillance) and PCLOB (scoped to include only the 215 phone dragnet) were recommending the government come up with a better approach to the phone dragnet.
The report clearly did several things. First, it provided a way for the government to try to undermine the standing claims of other plaintiffs challenging the phone dragnet, by leaving open the possibility that their records were among the claimed 70% that was not collected. It gave the Intelligence Community a public excuse to explain why PRG and PCLOB showed the dragnet to be mostly useless. And it laid the groundwork to use “reform” to fix the problems that had, at least since 2009, made the phone dragnet largely useless.
It did not, however, admit the truth about what the 215 phone dragnet really was: just a small part of the far vaster dragnet. The dragnet as a whole aspires to capture a complete record of communications and other metadata indicating relationships (with a focus on locales of concern) that would, in turn, offer the ability to visualize the networks of the world, and not just for terrorism. At first, when the Bush Administration moved the Internet (in 2004) and phone (in 2006) dragnets under FISC authority, NSA ignored FISC’s more stringent rules and instead treated all the data with much more lax EO 12333 rules (see this post for some historical background). When FISC forced the NSA to start following the rules in 2009, however, it meant NSA could no longer do as much with the data collected in the US. So from that point forward, it became even more of a gap-filler than it had been, offering a thinner network map of the US, one the NSA could not subject to as many kinds of analysis. As part of the reforms imposed in 2009, NSA had to start tracking where it got any piece of data and what authority’s rules it had to follow; in response, NSA trained analysts to try to use EO 12333 collected data for their queries, so as to apply the more permissive rules.
That, by itself, makes it clear that EO 12333 and Section 215 (and PRTT) data was significantly redundant. For every international phone call (or at least those to countries of terrorism interest, as the PATRIOT authorities were supposed to be restricted to terrorism and Iran), there might be two or more copies of any given phone call, one collected from a provider domestically, and one collected via a range of means overseas (in fact, the phone dragnet orders make it clear the same providers were also providing international collection not subject to 215). If you don’t believe me on this point, Mike Lee spelled it out last week. Not only might NSA get additional data with the international call — such as location data — but it could subject that data to more interesting analysis, such as co-location. Thus, once the distinction between EO 12333 and PATRIOT data became formalized in 2009 (years after it should have been) the PATRIOT data served primarily to get a thinner network map of the data they could only collect domestically.
Because the government didn’t want to admit they had a dragnet, they never tried to legislate fixes for it such that it would be more comprehensive in terms of reach or more permissive in terms of analysis.
So that’s a big part of why four beat journalists got that leak in February 2014, at virtually the same time President Obama decided to replace the 215 phone dragnet with something else.
The problem was, the government never admitted the extent of what they wanted to do with the dragnet. It wasn’t just telephony-carried voice calls they wanted to map, it was all communications a person might make from their phone, which increasingly means a smart phone. It wasn’t just call-chaining they wanted to do, it was connection chaining, linking identities, potentially using far more intrusive technological analysis.
Some of that was clear with the initial IC effort at “reform.” Significantly, it didn’t ask for Call Detail Records, understood to include either phone or Internet or both, but instead “records created as a result of communications of an individual or facility.” That language would have permitted the government to get backbone providers to collect all addressing records, regardless of whether they counted as content. The bill also permitted the use of such tools for all purposes, not just counterterrorism. In effect, this bill would have completed the dragnet, permitting the IC to conduct EO 12333 collection and analysis on records collected in the US, for any “intelligence” purpose.
But there was enough support for real reform, demonstrated most vividly in the votes on Amash-Conyers in July 2013, that whatever got passed had to look like real reform, so that effort was killed.
So we got the USA F-ReDux model, swapping more targeted collection (of communications, but not other kinds of records, which can still be collected in bulk) for the ability to require providers to hand over the data in usable form. This meant the government could get what it wanted, but it might have to work really hard to do so, as the communications provider market is so fragmented.
The GOP recognized, at least in the weeks before the passage of the bill, that this would be the case. I believe that Richard Burr’s claimed “mistake” of asserting there was an Internet dragnet was instead an effort to create legislative intent supporting an Internet dragnet. After that failed, Burr introduced a last-minute bill using John Bates’ Dialing, Routing, Addressing, and Signaling language, meaning it would enable the government to bulk collect packet communications off switches again, along with EO 12333 minimization rules. That failed (in part because of Mitch McConnell’s parliamentary screw-ups).
But now the IC is left with a law that does what it said it wanted (plus some, as it definitely gets non-telephony “phone” “calls”), rather than one that does what it wanted, which was to re-establish the full dragnet it had in the US at various times in the past.
I would expect they won’t stop trying for the latter, though.
Indeed, I suspect that’s the real reason Marco Rubio has been permitted to keep complaining about the dragnet’s shortcomings.
After receiving a briefing on the San Bernardino attack, Richard Burr went out and made two contradictory claims. First, Burr — and/or other sources for The Hill — said that there was no evidence that Tashfeen Malik and Syed Rizwan Farook used encryption.
Lawmakers on Thursday said there was no evidence yet that the two suspected shooters used encryption to hide from authorities in the lead-up to last week’s San Bernardino, Calif., terror attack that killed 14 people.
“We don’t know whether it played a part in this attack,” Senate Intelligence Committee Chairman Richard Burr (R-N.C.) told reporters following a closed-door briefing with federal officials on the shootings.
That’s consistent with what we know so far. After all, a husband and wife wouldn’t need to encrypt — or have a way of encrypting — their communications with each other, as those would be mostly face-to-face. The fact that they tried to destroy their devices (and apparently got rid of a still undiscovered hard drive) suggests they were protecting them via physical destruction, not encryption. That doesn’t rule out using both, but the FBI would presumably know if the devices they’ve reconstructed were encrypted.
So it makes sense that the San Bernardino attackers did not use encryption.
But then later in the same discussion with reporters, Burr suggested Malik and Farook must have used encryption because the IC didn’t know about their attack.
Burr suggested it might have even played a role in the accused San Bernardino shooters — Tashfeen Malik and Syed Rizwan Farook — going unnoticed for years, despite the FBI saying they had been radicalized for some time.
“Any time you glean less information at the beginning, clearly encryption probably played a role in it,” he said. “And there were a lot of conversations that went on between these two individuals before [Malik] came to the United States that you would love to have some insight to other than after an attack took place.”
This is a remarkable comment!
After all, the FBI and NSA don’t even read all the conversations they can collect from foreigners, which Malik would still legally have been. Indeed, if these conversations were in Arabic or Urdu, the IC would only have had them translated if there were some reason to find them interesting. And even in spite of the pair’s early shooting training, it’s not apparent they had extensive conversations, particularly not online, to guide that training.
Those details would make it likely that the IC would have had no reason to be interested. To say nothing of the fact that ultimately “radicalization” is a state of mind, and thus far, NSA doesn’t have a way to decrypt thoughts.
But this is the second attack in a row, after Paris, where Burr and others have suggested that their lack of foreknowledge of the attack makes it probable the planners used encryption. Burr doesn’t even seem to be considering that a number of other things — such as good operational security, languages, and metadata failures — might lead the IC to miss warning signs, even assuming they’re collecting everything (there should have been no legal limits on their ability to collect on Malik).
We’re not having a debate about encryption anymore. We’re debating making the Internet less secure to excuse the IC’s less-than-perfect-omniscience.
Charlie Savage has a story that confirms (he linked some of my earlier reporting) something I’ve long argued: NSA was willing to shut down the Internet dragnet in 2011 because it could do what it wanted using other authorities. In it, Savage points to an NSA IG Report on its purge of the PRTT data that he obtained via FOIA. The document includes four reasons the government shut the program down, just one of which was declassified (I’ll explain what is probably one of the still-classified reasons in a later post). It states that SPCMA and Section 702 can fulfill the requirements that the Internet dragnet was designed to meet. The government had made (and I had noted) a similar statement in a different FOIA for PRTT materials in 2014, though this passage makes it even more clear that SPCMA — DOD’s self-authorization to conduct analysis including US persons on data collected overseas — is what made the switch possible.
It’s actually clear there are several reasons why the current plan is better for the government than the previous dragnet, in ways that are instructive for the phone dragnet, both retrospectively for the USA F-ReDux debate and prospectively as hawks like Tom Cotton and Jeb Bush and Richard Burr try to resuscitate an expanded phone dragnet. Those are:
Both the domestic Internet and phone dragnets limited their use to counterterrorism. While I believe the Internet dragnet limits were not as stringent as the phone ones (at least in its pre-2009-shutdown incarnation), they both required that the information only be disseminated for a counterterrorism purpose. The phone dragnet, at least, required someone to sign off that that was why information from the dragnet was being disseminated.
Admittedly, when the FISC approved the use of the phone dragnet to target Iran, it was effectively authorizing its use for a counterproliferation purpose. But the government’s stated admissions — which are almost certainly not true — in the Shantia Hassanshahi case suggest the government would still pretend it was not using the phone dragnet for counterproliferation purposes. The government now claims it busted Iranian-American Hassanshahi for proliferating with Iran using a DEA database rather than the NSA one that technically would have permitted the search but not the dissemination, and yesterday Judge Rudolph Contreras ruled that was all kosher.
But as I noted in this SPCMA piece, the only requirement for accessing EO 12333 data to track Americans is a foreign intelligence purpose.
Additionally, in what would have been true from the start but was made clear in the roll-out, NSA could use this contact chaining for any foreign intelligence purpose. Unlike the PATRIOT-authorized dragnets, it wasn’t limited to al Qaeda and Iranian targets. NSA required only a valid foreign intelligence justification for using this data for analysis.
The primary new responsibility is the requirement:
- to enter a foreign intelligence (FI) justification for making a query or starting a chain,[emphasis original]
Now, I don’t know whether or not NSA rolled out this program because of problems with the phone and Internet dragnets. But one source of the phone dragnet problems, at least, is that NSA integrated the PATRIOT-collected data with the EO 12333 collected data and applied the protections for the latter authorities to both (particularly with regards to dissemination). NSA basically just dumped the PATRIOT-authorized data in with EO 12333 data and treated it as such. Rolling out SPCMA would allow NSA to use US person data in a dragnet that met the less-restrictive minimization procedures.
That means the government can do chaining under SPCMA for terrorism, counterproliferation, Chinese spying, cyber, or counter-narcotic purposes, among others. I would bet quite a lot of money that when the government “shut down” the DEA dragnet in 2013, they made access rules to SPCMA chaining still more liberal, which is great for the DEA because SPCMA did far more than the DEA dragnet anyway.
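Contact chaining of this sort is, at bottom, a breadth-first traversal of a communications graph out to some number of hops from a seed selector. A minimal sketch of that idea, using entirely hypothetical identifiers and a made-up call log (nothing here reflects actual NSA tooling):

```python
from collections import defaultdict, deque

def contact_chain(call_log, seed, max_hops=2):
    """Return every identifier reachable from `seed` within
    `max_hops` hops of the (undirected) contact graph,
    mapped to its hop distance."""
    # Build an adjacency map from (caller, callee) records.
    graph = defaultdict(set)
    for a, b in call_log:
        graph[a].add(b)
        graph[b].add(a)

    reached = {seed: 0}
    queue = deque([seed])
    while queue:
        node = queue.popleft()
        if reached[node] == max_hops:
            continue  # don't expand past the hop limit
        for neighbor in graph[node]:
            if neighbor not in reached:
                reached[neighbor] = reached[node] + 1
                queue.append(neighbor)
    return reached

# Hypothetical records: seed talks to A, A talks to B, B talks to C.
log = [("seed", "A"), ("A", "B"), ("B", "C")]
chain = contact_chain(log, "seed", max_hops=2)
# "C" sits three hops out, so it stays outside a two-hop chain.
```

The foreign intelligence justification described above attaches to the query itself — the seed and the decision to start a chain — not to anything in the traversal, which is why loosening the justification standard widens what this kind of analysis can touch.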
So one thing that happened with the Internet dragnet is that it had initial limits on purpose and who could access it. Along the way, NSA cheated those open, by arguing that people in different functional areas (like drug trafficking and hacking) might need to help out on counterterrorism. By the end, though, NSA surely realized it loved this dragnet approach and wanted to apply it to all NSA’s functional areas. A key part of the FISC’s decision that such dragnets were appropriate is the special need posed by counterterrorism; while I think they might well buy off on drug trafficking and counterproliferation and hacking and Chinese spying as other special needs, they had not done so before.
The other thing that happened is that, starting in 2008, the government started putting FBI in a more central role in this process, meaning FBI’s promiscuous sharing rules would apply to anything FBI touched first. That came with two benefits. First, the FBI can do back door searches on 702 data (NSA’s ability to do so is much more limited), and it does so even at the assessment level. This basically puts data collected under the guise of foreign intelligence at the fingertips of FBI Agents even when they’re just searching for informants or doing other pre-investigative things.
In addition, the minimization procedures permit the FBI (and CIA) to copy entire metadata databases.
FBI can “transfer some or all such metadata to other FBI electronic and data storage systems,” which seems to broaden access to it still further.
Users authorized to access FBI electronic and data storage systems that contain “metadata” may query such systems to find, extract, and analyze “metadata” pertaining to communications. The FBI may also use such metadata to analyze communications and may upload or transfer some or all such metadata to other FBI electronic and data storage systems for authorized foreign intelligence or law enforcement purposes.
In this same passage, the definition of metadata is curious.
For purposes of these procedures, “metadata” is dialing, routing, addressing, or signaling information associated with a communication, but does not include information concerning the substance, purport, or meaning of the communication.
I assume this uses the very broad definition John Bates rubber stamped in 2010, which included some kinds of content. Furthermore, the SMPs elsewhere tell us they’re pulling photographs (and, presumably, videos and the like). All those will also have metadata which, so long as it is not the meaning of a communication, presumably could be tracked as well (and I’m very curious whether FBI treats location data as metadata as well).
Whereas under the old Internet dragnet the data had to stay at NSA, this basically lets FBI copy entire swaths of metadata and integrate it into their existing databases. And, as noted, the definition of metadata may well be broader than even the broadened categories approved by John Bates in 2010 when he restarted the dragnet.
So one big improvement of SPCMA (and, to a lesser degree, 702) over the old domestic Internet dragnet — improvement, of course, from a dragnet-loving perspective — is that the government can use it for any foreign intelligence purpose.
At several times during the USA F-ReDux debate, surveillance hawks tried to use the “reform” to expand the acceptable uses of the dragnet. I believe controls on the new system will be looser (especially with regards to emergency searches), but it is, ostensibly at least, limited to counterterrorism.
One way USA F-ReDux will be far more liberal, however, is in dissemination. It’s quite clear that the data returned from queries will go (at least) to FBI, as well as NSA, which means FBI will serve as a means to disseminate it promiscuously from there.
Another thing replacing the Internet dragnet with 702 access does is provide another way to correlate multiple identities, which is critically important when you’re trying to map networks and track all the communication happening within one. Under 702, the government can obtain not just Internet “call records” and the content of those Internet communications from providers, but also the kinds of things it would obtain with a subpoena (and probably far more). As I’ve shown, here are the kinds of things you’d almost certainly get from Google (because that’s what you get with a few subpoenas) under 702 that you’d have to correlate using algorithms under the old Internet dragnet.
Every single one of these data points provides a potentially new identity that the government can track on, whereas the old dragnet might only provide an email and IP address associated with one communication. The NSA has a great deal of ability to correlate those individual identifiers, but — as I suspect the Paris attack probably shows — that process can be thwarted somewhat by very good operational security (and by using providers, like Telegram, that won’t be as accessible to NSA collection).
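The correlation problem described above is, in essence, entity resolution: deciding that an email address, an IP address, and a phone number all belong to one identity. A minimal sketch of one common approach — merging identifiers that co-occur in the same record with a union-find structure — using invented identifiers (this is an illustration of the general technique, not of any actual agency system):

```python
class IdentityCorrelator:
    """Union-find over identifiers: any two identifiers seen
    together in one record get merged into one identity."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            # Path halving keeps lookups fast as identities merge.
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def link(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

    def same_identity(self, a, b):
        return self.find(a) == self.find(b)

corr = IdentityCorrelator()
# Hypothetical records, each tying two identifiers together.
corr.link("alice@example.com", "192.0.2.7")  # e.g. one login record
corr.link("192.0.2.7", "+1-555-0100")        # e.g. one subpoena return
# The email and the phone number now resolve to one identity.
```

With provider-supplied subscriber records, each record hands the analyst a ready-made pair to `link`; under the old dragnet, those pairings had to be inferred algorithmically from traffic patterns, which is exactly the step good operational security can frustrate.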
This is an area where the new phone dragnet will be significantly better than the existing phone dragnet, which returns IMSI, IMEI, phone number, and a few other identifiers. But under the new system, providers will be asked to identify “connected” identities, which has some limits, but will nonetheless pull some of the same kind of data that would come back in a subpoena.
While replacing the domestic Internet dragnet with SPCMA provides additional data with which to do correlations, much of that might fall under the category of additional functionality. There are two obvious things that distinguish the old Internet dragnet from what NSA can do under SPCMA, though really the possibilities are endless.
The first of those is content scraping. As the Intercept recently described in a piece on the breathtaking extent of metadata collection, the NSA (and GCHQ) will scrape content for metadata, in addition to collecting metadata directly in transit. This will get you to different kinds of connection data. And particularly in the wake of John Bates’ October 3, 2011 opinion on upstream collection, doing so as part of a domestic dragnet would be prohibitive.
In addition, it’s clear that at least some of the experimental implementations on geolocation incorporated SPCMA data.
I’m particularly interested that one of NSA’s pilot co-traveler programs, CHALKFUN, works with SPCMA.
Chalkfun’s Co-Travel analytic computes the date, time, and network location of a mobile phone over a given time period, and then looks for other mobile phones that were seen in the same network locations around a one hour time window. When a selector was seen at the same location (e.g., VLR) during the time window, the algorithm will reduce processing time by choosing a few events to match over the time period. Chalkfun is SPCMA enabled1.
1 (S//SI//REL) SPCMA enables the analytic to chain “from,” “through,” or “to” communications metadata fields without regard to the nationality or location of the communicants, and users may view those same communications metadata fields in an unmasked form. [my emphasis]
Now, aside from what this says about the dragnet database generally (because this makes it clear there is location data in the EO 12333 data available under SPCMA, though that was already clear), it makes it clear there is a way to geolocate US persons — because the entire point of SPCMA is to be able to analyze data including US persons, without even any limits on their location (meaning they could be in the US).
That means, in addition to tracking who emails and talks with whom, SPCMA has permitted (and probably still does) permit NSA to track who is traveling with whom using location data.
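The co-travel logic the CHALKFUN description quotes reduces to a join on (location, time-window) pairs: take the sightings of a target phone and look for other phones seen at the same network location within an hour. A minimal sketch with invented sightings (the function and data are illustrative only; the actual analytic's optimizations, like sampling events to cut processing time, are omitted):

```python
from collections import defaultdict

WINDOW = 3600  # one-hour window, in seconds

def co_travelers(sightings, target):
    """sightings: list of (phone, location, epoch_seconds) records.
    Return the phones seen at the same network location as `target`
    within WINDOW seconds of one of its sightings."""
    by_location = defaultdict(list)
    for phone, loc, t in sightings:
        by_location[loc].append((phone, t))

    hits = set()
    for phone, loc, t in sightings:
        if phone != target:
            continue
        for other, t2 in by_location[loc]:
            if other != target and abs(t - t2) <= WINDOW:
                hits.add(other)
    return hits

# Hypothetical sightings at two network locations (e.g. VLRs).
data = [
    ("target", "VLR-1", 0),
    ("phoneA", "VLR-1", 1800),   # same place, 30 minutes apart
    ("phoneB", "VLR-1", 90000),  # same place, a day later
    ("phoneC", "VLR-2", 0),      # different place entirely
]
# Only phoneA co-travels with the target under these rules.
```

Note that nothing in this computation needs to know whose phones these are — which is the point of the SPCMA footnote: the analytic runs without regard to the nationality or location of the communicants.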
Finally, one thing we know SPCMA allows is tracking on cookies. I’m of mixed opinion on whether the domestic Internet dragnet ever permitted this, but tracking cookies is not only nice for understanding someone’s browsing history, it’s probably critical for tracking who is hanging out in Internet forums, which is obviously key (or at least used to be) to tracking aspiring terrorists.
Most of these things shouldn’t be available via the new phone dragnet — indeed, the House explicitly prohibited not just the return of location data, but the use of it by providers to do analysis to find new identifiers (though that is something AT&T does now under Hemisphere). But I would suspect NSA either already plans or will decide to use things like Supercookies in the years ahead, and that’s clearly something Verizon, at least, does keep in the course of doing business.
All of which is to say it’s not just that the domestic Internet dragnet wasn’t all that useful in its current form (which is also true of the phone dragnet in its current form now), it’s also that the alternatives provided far more than the domestic Internet dragnet did.
Jim Comey recently said he expects to get more information under the new dragnet — and the apparent addition of another provider already suggests that the government will get more kinds of data (including all cell calls) from more kinds of providers (including VOIP). But there are also probably some functionalities that will work far better under the new system. When the hawks say they want a return of the dragnet, they actually want both things: mandates on providers to obtain richer data, and the inclusion of all Americans.
Update: Thought I’d put a list of Senators people should thank for voting against CISA.
GOP: Crapo, Daines, Heller, Lee, Risch, and Sullivan. (Paul voted against cloture but did not vote today.)
Dems: Baldwin, Booker, Brown, Cardin, Coons, Franken, Leahy, Markey, Menendez, Merkley, Sanders, Tester, Udall, Warren, Wyden
Just now, the Senate voted to pass the Cyber Information Sharing Act by a vote of 74 to 21. While 7 more people voted against the bill than had voted against cloture last week (Update: the new votes were Cardin and Tester, Crapo, Daines, Heller, Lee, Risch, and Sullivan, with Paul not voting), this is still a resounding vote for a bill that will authorize domestic spying with no court review in this country.
The amendment voting process was interesting of its own accord. Most appallingly, just after Patrick Leahy cast his 15,000th vote on another amendment — which led to a break to talk about what a wonderful person he is, as well as a speech from him about how the Senate is the conscience of the country — Leahy’s colleagues voted 57 to 39 against his amendment that would have stopped the creation of a new FOIA exemption for CISA. So right after honoring Leahy, his colleagues kicked one of his key issues, FOIA, in the ass.
More telling, though, were the votes on the Wyden and Heller amendments, the first two that came up today.
Wyden’s amendment would have required more stringent scrubbing of personal data before sharing it with the federal government. The amendment failed by a vote of 55-41 — still a big margin, but enough to sustain a filibuster. Particularly given that Harry Reid switched votes at the last minute, I believe that vote was designed to show enough support for a better bill to strengthen the hand of those pushing for that in conference (the House bills are better on this point). The amendment had the support of a number of Republicans — Crapo, Daines, Gardner, Heller, Lee, Murkowski, and Sullivan — some of whom would vote against passage. Most of the Democrats who voted against Wyden’s amendment — Carper, Feinstein, Heitkamp, Kaine, King, Manchin, McCaskill, Mikulski, Nelson, Warner, Whitehouse — consistently voted against any amendment that would improve the bill (and Whitehouse even voted for Tom Cotton’s bad amendment).
The vote on Heller’s amendment looked almost nothing like Wyden’s. Sure, the amendment would have changed just two words in the bill, requiring the government to meet a higher standard for information it shared internally. But it got a very different crowd supporting it, with a range of authoritarian Republicans — Barrasso, Cassidy, Enzi, Ernst, and Hoeven — voting in favor. That made the vote much closer. So Reid, along with at least 7 other Democrats who had voted for Wyden’s amendment — including Brown, Klobuchar, Murphy, Schatz, Schumer, Shaheen, and Stabenow — voted against Heller’s weaker amendment. While some of these Democrats — Klobuchar, Schumer, and probably Shaheen and Stabenow — are affirmatively pro-unconstitutional spying anyway, the swing, especially from Sherrod Brown, who voted against the bill as a whole, makes it clear these are opportunistic votes to achieve an outcome. Heller’s amendment fell just short, 49-47, and would have passed had some of those Dems voted in favor (the GOP Presidential candidates were not present, but that probably would have been at best a wash and possibly a one-vote net against, since Cruz voted for cloture last week). Ultimately, I think Reid and these other Dems are moving to try to deliver something closer to what the White House wants, which is still unconstitutional domestic spying.
Richard Burr seemed certain that this will go to conference, which means people like he, DiFi, and Tom Carper will try to make this worse as people from the House point out that there are far more people who oppose this kind of unfettered spying in the House. We shall see.
For now, however, the Senate has embraced a truly awful bill.
Update, all amendment roll calls
Cotton amendment: 22-73-5
Final passage: 74-21-5
As I noted in my argument that CISA is designed to do what NSA and FBI wanted an upstream cybersecurity certificate to do, but couldn’t get FISA to approve, there’s almost no independent oversight of the new scheme. There are just IG reports — mostly assessing the efficacy of the information sharing and the protection of classified information shared with the private sector — and a PCLOB review. As I noted, history shows that even when both are well-intentioned and diligent, that doesn’t ensure they can demand fixes to abuses.
So I’m interested in what Richard Burr and Dianne Feinstein did with Jon Tester’s attempt to improve the oversight mandated in the bill.
The bill mandates three different kinds of biennial reports on the program: detailed IG Reports from all agencies to Congress, which will be unclassified with a classified appendix, a less detailed PCLOB report that will be unclassified with a classified appendix, and a less detailed unclassified IG summary of the first two. Note, this scheme already means that House members will have to go out of their way and ask nicely to get the classified appendices, because those are routinely shared only with the Intelligence Committee.
Tester had proposed adding a series of transparency measures to the first, more detailed IG Reports to obtain more information about the program. Last week, Burr and DiFi rolled some transparency procedures loosely resembling Tester’s into the Manager’s amendment — adding transparency to the base bill, but ensuring Tester’s stronger measures could not get a vote. I’ve placed the three versions of transparency provisions below, with italicized annotations, to show the original language, Tester’s proposed changes, and what Burr and DiFi adopted instead.
Comparing them reveals Burr and DiFi’s priorities — and what they want to hide about the implementation of the bill, even from Congress.
Tester proposed a measure that would require reporting on how often CISA data gets used for law enforcement. There were two important aspects to his proposal: it required reporting not just on how often CISA data was used to prosecute someone, but also how often it was used to investigate them. That would require FBI to track lead sourcing in a way it currently refuses to. It would also create a record of investigative source that — in the unlikely event that a defendant actually got a judge to support demands for discovery on such things — would make it very difficult to use parallel construction to hide CISA-sourced data.
In addition, Tester would have required some granularity to the reporting, splitting out fraud, espionage, and trade secrets from terrorism (see clauses VII and VIII). Effectively, this would have required FBI to report how often it uses data obtained pursuant to an anti-hacking law to prosecute crimes that involve the Internet that aren’t hacking; it would have required some measure of how much this is really about bypassing Title III warrant requirements.
Burr and DiFi replaced that with a count of how many prosecutions derived from CISA data. Not only does this not distinguish between hacking crimes (what this bill is supposed to be about) and crimes that use the Internet (what it is probably about), but it also would invite FBI to simply disappear this number, from both Congress and defendants, by using parallel construction to hide the CISA source of this data.
Tester also asked for reporting (see clause V) on how often personal information or information identifying a specific person was shared when it was not “necessary to describe or mitigate a cybersecurity threat or security vulnerability.” The “necessary to describe or mitigate” is quite close to the standard NSA currently has to meet before it can share US person identities (the NSA can share that data if it’s necessary to understand the intelligence; though Tester’s amendment would apply to all people, not just US persons).
But Tester’s standard is different than the standard of sharing adopted by CISA. CISA only requires agencies to strip personal data if it is “not directly related to a cybersecurity threat.” Of course, any data collected along with a cybersecurity threat — even victim data, including the data a hacker was trying to steal — is “related to” that threat.
Burr and DiFi changed Tester’s amendment by first adopting a form of a Wyden amendment requiring notice to people whose data got shared in ways not permitted by the bill (which implicitly adopts that “related to” standard), and then requiring reporting on how many people got such notices — which will only happen if the government affirmatively learns that data wasn’t related to a threat but got shared anyway. Those notices are almost never going to happen. So the number will be close to zero, instead of the tens of thousands, at least, that would probably have shown up under Tester’s measure.
So in adopting this change, Burr and DiFi are hiding the fact that under CISA, US person data will get shared far more promiscuously than it would under the current NSA regime.
Tester also would have required the government to report how much personal data got stripped by DHS (see clause IV). This would have measured how often private companies were handing over data containing personal information that probably should have been stripped. Combined with Tester’s proposed measure of how often data gets shared that’s not necessary to understanding the indicator, it would have shown at each stage of the data sharing how much personal data was getting shared.
Burr and DiFi stripped that entirely.
Tester would also have required reporting on how often defensive measures (the bill’s euphemism for countermeasures) cause known harm (see clause VI). This would have alerted Congress if one of the foreseeable harms from this bill — that “defensive measures” will cause damage to the Internet infrastructure or other companies — had taken place.
Burr and DiFi stripped that really critical measure.
Finally, Tester would have required reporting on how many indicators came in through DHS (clause I), how many came in through civilian agencies like FBI (clause II), and how many came in through military agencies, aka NSA (clause III). That would have provided a measure of how much data was getting shared in ways that might bypass what few privacy and oversight mechanisms this bill has.
Burr and DiFi replaced that with a measure solely of how many indicators get shared through DHS, which effectively sanctions alternative sharing.
That Burr and DiFi watered down Tester’s measures so much makes two things clear. First, they don’t want to count some of the things that will be most important to count to see whether corporations and agencies are abusing this bill. They don’t want to count measures that will reveal if this bill does harm.
Most importantly, though, they want to keep this information from Congress. This information would almost certainly not show up to us in unclassified form; it would just be shared with some members of Congress (and on the House side, shared only with the Intelligence Committee unless someone asks nicely for it).
But Richard Burr and Dianne Feinstein want to ensure that Congress doesn’t get that information. Which would suggest they know the information would reveal things Congress might not approve of.
I’ve been wracking my brain to understand why the Intel Community has been pushing CISA so aggressively.
I get why the Chamber of Commerce is pushing it: because it sets up a regime under which businesses will get broad regulatory immunity in exchange for voluntarily sharing their customers’ data, even if they’re utterly negligent from a security standpoint, while also making it less likely that information their customers could use to sue them would become public. For the companies, it’s about sharply curtailing the risk of (charitably) having imperfect network security or (more realistically, in some cases) being outright negligent. CISA will minimize some of the business costs of operating in an insecure environment.
But why — given that it makes it more likely businesses will wallow in negligence — is the IC so determined to have it, especially when generalized sharing of cyber threat signatures has proven ineffective in preventing attacks, and when there are far more urgent things the IC should be doing to protect themselves and the country?
Richard Burr and Dianne Feinstein’s move the other day — which, in the guise of ensuring DHS gets to continue to scrub data on intake, instead gives the rest of the IC veto power over that scrub (and which almost certainly means the bill is substantially a means of eliminating the privacy role DHS currently plays) — leads me to believe the IC plans to use this as it might have used (or might be using) a cyber certificate under upstream 702.
Since NYT and ProPublica caught up to my much earlier reporting on the use of upstream 702 for cyber, people have long assumed that CISA would work with upstream 702 authority to magnify the way upstream 702 works. Jonathan Mayer described how this might work.
This understanding of the NSA’s domestic cybersecurity authority leads to, in my view, a more persuasive set of privacy objections. Information sharing legislation would create a concerning surveillance dividend for the agency.
Because this flow of information is indirect, it prevents businesses from acting as privacy gatekeepers. Even if firms carefully screen personal information out of their threat reports, the NSA can nevertheless intercept that information on the Internet backbone.
Note that Mayer’s model assumes the Googles and Verizons of the world make an effort to strip private information, after which NSA would use the signature turned over to the government under CISA to go get the private information just stripped out. But Mayer’s model — and the ProPublica/NYT story — never considered how the 2011 John Bates ruling on upstream collection might hinder that model, particularly as it pertains to domestically collected data.
As I laid out back in June, NSA’s optimistic predictions they’d soon get an upstream 702 certificate for cyber came in the wake of John Bates’ October 3, 2011 ruling that the NSA had illegally collected US person data. Of crucial importance, Bates judged that data obtained in response to a particular selector was intentionally, not incidentally, collected (even though the IC and its overseers like to falsely claim otherwise), even data that just happened to be collected in the same transaction. Crucially, pointing back to his July 2010 opinion on the Internet dragnet, Bates said that disclosing such information, even just to the court or internally, would be a violation of 50 USC 1809(a), which he used as leverage to make the government identify and protect any US person data collected using upstream collection before otherwise using the data. I believe this decision established a precedent for upstream 702 that would make it very difficult for FISC to permit the use of cyber signatures that happened to be collected domestically (which would count as intentional domestic collection) without rigorous minimization procedures.
The government, at a time when it badly wanted a cyber certificate, considered appealing his decision, but ultimately did not. Instead, they destroyed the data they had illegally collected and — in what was almost certainly a related decision — destroyed all the PATRIOT-authorized Internet dragnet data at the same time, December 2011. Bates did permit the government to keep collecting upstream data, but only under more restrictive minimization procedures.
Neither ProPublica/NYT nor Mayer claimed NSA had obtained an upstream cyber certificate (though many other people have assumed it did). We actually don’t know, and the evidence is mixed.
Even as the government was scrambling to implement new upstream minimization procedures to satisfy Bates’ order, NSA had another upstream violation. That might reflect NSA informing Bates, for the first time (there’s no sign they did inform him during the 2011 discussion, though the 2011 minimization procedures may reflect that they already had), that they had been using upstream to collect on cyber signatures, or it might represent some other kind of illegal upstream collection. When the government got Congress to reauthorize FAA that year, it did not inform Congress that it was using, or intended to use, upstream collection to collect cyber signatures. Significantly, even as Congress began debating FAA, it considered but rejected the first of the predecessor bills to CISA.
My guess is that the FISC did approve cyber collection, but did so with some significant limitations on it — akin to, or perhaps even more restrictive than, the restrictions on multiple communication transactions (MCTs) required in 2011. I say that, in part, because of language in USA F-ReDux (section 301) permitting the government to use information improperly collected under Section 702 if the FISA Court imposed new minimization procedures. While that might have just referred back to the hypothetical 2011 example (in which the government had to destroy all the data), I think it as likely that Congress was trying to permit the government to retain data questioned later. I also say it because of this new language in the 2014 NSA minimization procedures:
Additionally, nothing in these procedures shall restrict NSA’s ability to conduct vulnerability or network assessments using information acquired pursuant to section 702 of the Act in order to ensure that NSA systems are not or have not been compromised. Notwithstanding any other section in these procedures, information used by NSA to conduct vulnerability or network assessments may be retained for one year solely for that limited purpose. Any information retained for this purpose may be disseminated only in accordance with the applicable provisions of these procedures.
That is, the FISC approved new procedures that permit the retention of vulnerability information for use domestically, but it placed even more restrictions on it (retention for just one year, retention solely for the defense of that agency’s network, which presumably prohibits its use for criminal prosecution, not to mention its dissemination to other agencies, other governments, and corporations) than it had on MCTs in 2011.
To be sure, there is language in both 2011 and 2014 NSA MPs that permits the agency to retain and disseminate domestic communications if it is necessary to understand a communications security vulnerability.
the communication is reasonably believed to contain technical data base information, as defined in Section 2(i), or information necessary to understand or assess a communications security vulnerability. Such communication may be provided to the FBI and/or disseminated to other elements of the United States Government. Such communications may be retained for a period sufficient to allow a thorough exploitation and to permit access to data that are, or are reasonably believed likely to become, relevant to a current or future foreign intelligence requirement. Sufficient duration may vary with the nature of the exploitation.
But at least on its face, that language is about retaining information to exploit (offensively) a communications vulnerability. Whereas the more recent language — which is far more restrictive — appears to address retention and use of data for defensive purposes.
The 2011 ruling strongly suggested that FISC would interpret Section 702 to prohibit much of what Mayer envisioned in his model. And the addition to the 2014 minimization procedures leads me to believe FISC did approve very limited use of Section 702 for cyber security, but with such significant limitations on it (again, presumably stemming from 50 USC 1809(a)’s prohibition on disclosing data intentionally collected domestically) that the IC wanted to find another way. In other words, I suspect NSA (and FBI, which was working closely with NSA to get such a certificate in 2012) got their cyber certificate, only to discover it didn’t legally permit them to do what they wanted to do.
And while I’m not certain, I believe that in ensuring that DHS’ scrubs get dismantled, CISA gives the IC a way to do what it would have liked to with a FISA 702 cyber certificate.
Let’s go back to Mayer’s model of what the IC would probably like to do: A private company finds a threat, removes private data, leaving just a selector, after which NSA deploys the selector on backbone traffic, which then reproduces the private data, presumably on whatever parts of the Internet backbone NSA has access to via its upstream selection (which is understood to be infrastructure owned by the telecoms).
But in fact, Step 4 of Mayer’s model — NSA deploys the signature as a selector on the Internet backbone — is not done by the NSA. It is done by the telecoms (that’s the Section 702 cooperation part). So his model would really be private business > DHS > NSA > private business > NSA > treatment under NSA’s minimization procedures if the data were handled under upstream 702. Ultimately, the backbone operator is still going to be the one scanning the Internet for more instances of that selector; the question is just how much data gets sucked in with it and what the government can do once it gets it.
And that’s important because CISA codifies private companies’ authority to do that scan.
For all the discussion of CISA and its definitions, there has been little discussion of what might happen at the private entities. But the bill affirmatively authorizes private entities to monitor their systems, broadly defined, for cybersecurity purposes.
(a) AUTHORIZATION FOR MONITORING.—
(1) IN GENERAL.—Notwithstanding any other provision of law, a private entity may, for cybersecurity purposes, monitor—
(A) an information system of such private entity;
(B) an information system of another entity, upon the authorization and written consent of such other entity;
(C) an information system of a Federal entity, upon the authorization and written consent of an authorized representative of the Federal entity; and
(D) information that is stored on, processed by, or transiting an information system monitored by the private entity under this paragraph.
(2) CONSTRUCTION.—Nothing in this subsection shall be construed—
(A) to authorize the monitoring of an information system, or the use of any information obtained through such monitoring, other than as provided in this title; or
(B) to limit otherwise lawful activity.
Defining monitor this way:
(14) MONITOR.—The term ‘‘monitor’’ means to acquire, identify, or scan, or to possess, information that is stored on, processed by, or transiting an information system.
That is, CISA affirmatively permits private companies to scan, identify, and possess cybersecurity threat information transiting or stored on their systems. It permits private companies to conduct precisely the same kinds of scans the government currently obligates telecoms to do under upstream 702, including data both transiting their systems (which for the telecoms would be transiting their backbone) and stored on their systems (so cloud storage). To be sure, big telecom and Internet companies do that anyway for their own protection, though this bill may extend the authority into cloud servers and competing tech company content that transits the telecom backbone. And it specifically does so in anticipation of sharing the results with the government, with very limited requirements to scrub the data beforehand.
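To see how little the "monitor" definition constrains, here is a toy sketch of what scanning transiting or stored data against shared threat indicators amounts to in practice. The function name, the byte-pattern indicator format, and the sample data are all my own illustration, not anything specified in the bill:

```python
def scan(payloads, indicators):
    """Flag any payload containing a known threat indicator.

    `payloads` are the raw contents of data transiting (or stored on)
    the monitored system; `indicators` are the byte patterns shared as
    cyber threat indicators. Returns the matching (payload, indicator)
    pairs.
    """
    hits = []
    for payload in payloads:
        for indicator in indicators:
            if indicator in payload:
                # Note: the entire payload, not just the matched
                # indicator, is what the scanner now "possesses" and
                # could share onward -- including any victim or
                # bystander content that happens to surround the match.
                hits.append((payload, indicator))
    return hits
```

The design point the post is making sits in that comment: a scan keyed on an indicator necessarily sweeps in whatever content surrounds the match, which is why the "directly related to a cybersecurity threat" standard does so little work.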
Thus, CISA permits the telecoms to do the kinds of scans they currently do for foreign intelligence purposes for cybersecurity purposes in ways that (unlike the upstream 702 usage we know about) would not be required to have a foreign nexus. CISA permits the people currently scanning the backbone to continue to do so, only it can be turned over to and used by the government without consideration of whether the signature has a foreign tie or not. Unlike FISA, CISA permits the government to collect entirely domestic data.
Of course, there’s no requirement that the telecoms scan for every signature the government shares with them and share the results with the government. But both Verizon and AT&T have a significant chunk of federal business — which just got put out for rebid on a contract that will amount to $50 billion — and they surely would be asked to scan the networks supporting federal traffic for those signatures (remember, this entire model of scanning domestic backbone traffic got implicated in Qwest losing a federal bid, which led to Joe Nacchio’s prosecution), so they’ll be scanning some part of the networks they operate with the signatures. CISA just makes it clear they can also scan their non-federal backbone if they want to. And the telecoms are outspoken supporters of CISA, so we should presume they plan to share promiscuously under this bill.
Assuming they do so, CISA offers several more improvements over FISA.
First — perhaps most important for the government — there are no pesky judges. The FISC gets a lot of shit for being a rubber stamp, but for years its judges have tried to keep the government operating in the vicinity of the Fourth Amendment through their review of minimization procedures. Even John Bates, who was largely a pushover for the IC, succeeded in getting the government to agree that it can’t disseminate domestic data that it intentionally collected. And if I’m right that the FISC gave the government a cyber certificate but sharply limited how it could use that data, then it did so on precisely this issue. Significantly, CISA continues a trend we already saw in USA F-ReDux, wherein the Attorney General gets to decide whether privacy procedures (no longer named minimization procedures!) are adequate, rather than a judge. Equally significant, while CISA permits the use of CISA-collected data for a range of prosecutions, unlike FISA, it requires no notice to defendants of where the government obtained that data.
In lieu of judges, CISA envisions PCLOB and Inspectors General conducting the oversight (as well as audits being possible though not mandated). As I’ll show in a follow-up post, there are some telling things left out of those reviews. Plus, the history of DOJ’s Inspector General’s efforts to exercise oversight over such activities offers little hope these entities, no matter how well-intentioned, will be able to restrain any problematic practices. After all, DOJ’s IG called out the FBI in 2008 for not complying with a 2006 PATRIOT Act Reauthorization requirement to have minimization procedures specific to Section 215, but it took until 2013, with three years of intercession from FISC and leaks from Edward Snowden, before FBI finally complied with that 2006 mandate. And that came before FBI’s current practice of withholding data from its IG and even some information in IG reports from Congress.
In short, given what we know of the IC’s behavior when there was a judge with some leverage over its actions, there is absolutely zero reason to believe that any abuses would be stopped under a system without any judicial oversight. The Executive Branch cannot police itself.
Finally, there’s the question of what happens at DHS. No matter what you think about NSA’s minimization procedures (and they do have flaws), they do ensure that data that comes in through NSA doesn’t get broadly circulated in a way that identifies US persons. The IC has increasingly bypassed this control since 2007 by putting FBI at the front of data collection, which means data can be shared broadly even outside of the government. But FISC never permitted the IC to do this with upstream collection. So any content (metadata was different) on US persons collected under upstream collection would be subjected to minimization procedures.
This CISA model eliminates that control too. After all, CISA, as written, would let FBI and NSA veto any scrub (including of content) at DHS. And incoming data (again, probably including content) would be shared immediately not only with FBI (which has been the vehicle for sharing NSA data broadly) but also Treasury and ODNI, which are both veritable black holes from a due process perspective. And what few protections for US persons are tied to a relevance standard that would be met by virtue of a tie to that selector. Thus, CISA would permit the immediate sharing, with virtually no minimization, of US person content across the government (and from there to private sector and local governments).
I welcome corrections to this model — I presume I’ve overstated how much of an improvement over FISA this program would be. But if this analysis is correct, then CISA would give the IC everything it would have wanted from a cybersecurity certificate under Section 702, with none of the inadequate limits such a certificate would have had (and may in fact have). CISA would provide an administrative way to spy on US person (domestic) content, all without any judicial review.
All of which brings me back to why the IC wants this so badly. In at least one case, the IC did manage to use a combination of upstream and PRISM collection to stop an attempt to steal large amounts of data from a defense contractor. That doesn’t mean it’ll be able to do it at scale, but if by offering various kinds of immunity it can get all backbone providers to play along, it might be able to improve on that performance.
But CISA isn’t so much a cybersecurity bill as it is an Internet domestic spying bill, with permission to spy on a range of nefarious activities in cyberspace, including kiddie porn and IP theft. This bill, because it permits the spying on US person content, may be far more useful for that purpose than preventing actual hacks. That is, it won’t fix the hacking problem (it may make it worse by gutting Federal authority to regulate corporate cyber hygiene). But it will help police other kinds of activity.
If I’m right, the IC’s insistence it needs CISA — in the name of, but not necessarily intending to accomplish — cybersecurity makes more sense.
Update: This post has been tweaked for clarity.
Update, November 5: I should have written this post before I wrote this one. In it, I point to language in the August 26, 2014 Thomas Hogan opinion reflecting earlier approval, at least in the FBI minimization procedures, to share cyber signatures with private entities. The first approval was on September 20, 2012. The FISC approved the version still active in 2014 on August 30, 2013. (See footnote 19.) That certainly suggests FISC approved cyber sharing more broadly than the 2011 opinion might have suggested, though I suspect it still included more restrictions than CISA would. Moreover, if the language only got approved for the FBI minimization procedures, it would apply just to PRISM production, given that the FBI does not (or at least didn’t used to) get unminimized upstream production.