
How Keith Gartenlaub Turned Child Porn into Foreign Intelligence

As I mentioned in this post on FISA and the space-time continuum, I’m going to be focusing closely on the FISA implications of Keith Gartenlaub’s child porn prosecution.

Gartenlaub was a Boeing engineer in 2013 when the FBI started investigating him for sharing information with China (see this and this story for background). He was suspected, in significant part, because of relationships and communications tied to his wife, who is a naturalized Chinese-American and whose family appears well-connected in China. The case is interesting for the way the government used both FISA and criminal searches to prosecute him for a crime unrelated to national security.

The case is currently being appealed to the 9th Circuit; it will be heard on December 4. His defense is challenging several things about his conviction, including that there was insufficient evidence to deem him an Agent of a Foreign Power (and therefore to obtain the ability to conduct a broader search than might be permitted under a criminal warrant), as well as that there was insufficient evidence offered at trial that he knowingly possessed the child porn files, by then nine years old, on which his conviction rests. I think there’s some merit to the latter claim, but I’m going to bracket it for my discussion, both because I think the FISA issues would remain important even if the government’s case on the child porn charge were far stronger than it is, and because I think the government may be sitting on potentially inculpatory evidence.

In this post, I’m going to show that it is almost certain that the government changed FISA minimization procedures to facilitate using FISA to prosecute him for child porn.

Timeline

The public timeline around the case looks like this (and as I said, I believe the government is hiding some bits):

Around January 28, 2013: Agent Wesley Harris reads article that leads him to start searching for Chinese spies at Boeing

February 7, 8, and 22, 2013: Harris interviews Gartenlaub

June 18, 2013: Agent Harris obtains search warrant for the Google and Yahoo accounts of Gartenlaub and his wife, Tess Yi

Unknown date: Harris obtains a FISA order

January 29, 2014: Using FISA physical search order, FBI searches Gartenlaub’s home, images three hard drives

June 3, 2014: Harris sends files to National Center for Missing and Exploited Children, which confirms some files display known victims

August 22, 2014: Criminal search warrant obtained for Gartenlaub’s premises

August 27, 2014: FBI searches Gartenlaub’s properties, seizing computers used as evidence in trial, arrests him

August 29, 2014: Government reportedly says it will dismiss charges if Gartenlaub will cooperate on spying

October 23, 2014: Grand jury indicts

December 10, 2015: Guilty verdict

FBI used a criminal search warrant to obtain evidence, then obtained a FISA order

As you can see from the timeline, the government first obtained a criminal search warrant for access to Gartenlaub and his wife’s email accounts (Gartenlaub also got an 1806 notice, meaning they used a FISA wiretap on him at some point). Only after that did they execute a FISA physical search order to search his house and image his computers. Which means — unless they had a FISA order and a criminal warrant simultaneously — they had already convinced a judge it was likely Gartenlaub’s emails would provide evidence he was “remov[ing] information, including export controlled technical data, from Boeing’s computer networks to China.” In his affidavit, Agent Harris cited violations of the Arms Export Control Act and Computer Fraud and Abuse Act.

Then, after probably months of reviewing emails, having already shown probable cause that could have enabled them to get a search warrant to search Gartenlaub’s computer for those specific crimes — that is, proof that he had exploited his network access at Boeing in order to obtain data he could share with his wife’s Chinese associates — the government went to FISA and convinced a judge they had probable cause Gartenlaub (or perhaps his wife) was acting as an agent of a foreign power for what are assumed to be the same underlying activities.

The government insists it still had adequate evidence Gartenlaub or his wife was an agent of a foreign power under FISA

The government’s response to Gartenlaub’s appeal predictably redacts much of the discussion to support its claim that it had sufficient probable cause, after months of reading his emails, to claim he or his wife was an agent of China. But the structure of it — with an unredacted paragraph addressing weaknesses with the criminal affidavit, followed by a redacted passage of unknown length, as well as a redacted footnote modifying the idea that the criminal affidavit “merely ‘recycled’ details that were found in the Harris affidavit” (see page 38-39) — suggests they raised evidence beyond what got included in the criminal affidavit. That’s surely true; it presumably explains what was so interesting about Yi’s family and associates in China as to sustain suspicion that they would be soliciting Boeing technology.

In any case, in a filing in which the government admits that “the [District] court expressed ‘some personal questions regarding the propriety of the FISA court proceeding even though that certainly seems to be legally authorized’,” the government pushed the Ninth Circuit to adopt a deferential standard of review for probable cause in FISA orders, under which a probable cause finding could be overturned only for clear error.

The Court has not previously articulated the standard of review applicable to an underlying finding of probable cause in a FISA case. In the analogous context of search warrants, this Court gives “great deference” to an issuing magistrate judge’s findings of probable cause, reviewing such findings only for “clear error.” Krupa, 658 F.3d at 1177; United States v. Hill, 459 F.3d 966, 970 (9th Cir. 2006) (same); United States v. Clark, 31 F.3d 831, 834 (9th Cir. 1994) (same). “In borderline cases, preference will be accorded to warrants and to the decision of the magistrate issuing it.” United States v. Terry, 911 F.2d 272, 275 (9th Cir. 1990). The same standard applies to this Court’s review of the findings in Title III wiretap applications. United States v. Brown, 761 F.2d 1272, 1275 (9th Cir. 2002).

Consistent with these standards and with FISA itself, the Second and Fifth Circuits have held that the “established standard of judicial review applicable to FISA warrants is deferential,” particularly given that “FISA warrant applications are subject to ‘minimal scrutiny by the courts,’ both upon initial presentation and subsequent challenge.” United States v. Abu-Jihaad, 630 F.3d 102, 130 (2d Cir. 2010); accord United States v. El-Mezain, 664 F.3d 467, 567 (5th Cir. 2011) (noting that representations and certifications in FISA application should be “presumed valid”). Other courts, reviewing district court orders de novo, have not discussed what deference applies to the FISC. See, e.g., Demeisi, 424 F.3d at 578; Squillacote, 221 F.3d at 553-54.

The government submits that the appropriate standard should be deferential. Consistent with findings of probable cause in other cases, the Court should review only for “clear error,” giving “great deference” to the initial conclusion that a FISA application established probable cause.

And, of course, the government argues that even if it didn’t meet the standards required under FISA, it still operated in good faith.

By using a FISA rather than a criminal search warrant, the FBI had more leeway to search for unrelated items

Nevertheless, having read Gartenlaub’s email for months and presumably having had the opportunity to obtain a warrant to search his computers for those specific crimes, the government instead obtained a FISA order that allowed the FBI to search his devices far more broadly, opening decades-old files with sexually explicit names under the guise of finding intelligence on stealing Boeing’s secrets. Here’s how Gartenlaub’s lawyers describe the search in his appeal, a description the government largely endorses in its response:

The FISC can only authorize the government to search for and seize “foreign intelligence information.” 50 U.S.C. §§ 1822(b), 1823(a)(6)(A), 1824(a)(4). The order authorizing the January 2014 search of Gartenlaub’s home and computers presumably complied with this restriction. “Foreign intelligence information” (defined at 50 U.S.C. §§ 1801(e) and 1821(1)) does not include child pornography. Nonetheless, as detailed in the government’s application for the August 2014 search warrant, the agents imaged Gartenlaub’s computers in their entirety, reviewed every file, and–upon discovering that some of the files contained possible child pornography–subjected those and related files to detailed scrutiny, including sending them to the National Center for Exploited Children for analysis. ER248-56, 262-68. In an effort to establish that Gartenlaub had downloaded the child pornography, the agents also examined and analyzed a number of other files on the computers, none of which had anything to do with “foreign intelligence information.” ER255-62, 268-70.

As far as the record shows, the agents conducted this detailed, far-ranging analysis without obtaining any court authorization beyond the initial FISC order. In other words, after encountering suspected child pornography files, the agents did not stop their search and seek a warrant authorizing them to open and review those files and other potentially related files. Instead, they opened, examined, and analyzed the suspected child pornography files and a number of other files having nothing to do with foreign intelligence information. They then incorporated the results of that analysis into the August 2014 search warrant application. ER248- 49. That application, in turn, produced the warrant that gave the agents authority to search for and seize the very materials that they had already seized and searched under the purported authority of the January 2014 FISC order.

How did agents authorized to search for “foreign intelligence information” end up opening, examining, and analyzing suspected child pornography files and a number of other files that had nothing to do with the only authorized object of the search? The agents apparently relied on the following argument: To determine whether Gartenlaub’s computers contained foreign intelligence information, it was necessary to open and review every file; after all, a foreign spy might cleverly conceal such information in .jpg files with sex-themed names or in other non-obvious locations. And after opening the files, the child pornography and other information was in “plain view” and thus could be lawfully seized under the Fourth Amendment.

As a result of these broad standards, and of Gartenlaub’s habit of retaining disk drives from old computers, the FBI found files dating back to 2005, from a computer he no longer owned.

Upon finding that those files included apparent child porn, the FBI sent them off to the National Center for Missing and Exploited Children, which confirmed some of the images included known victims. Almost two months later, FBI conducted further (criminal) searches, and arrested Gartenlaub for child porn.

In December 2015, Gartenlaub was found guilty on two counts of child porn, though one count was vacated by the judge after the verdict.

FBI changed standard minimization procedures to permit sharing with NCMEC

The timeline above is what would have been available to Gartenlaub’s defense team.

But in 2015 and 2017, two new details were added to the timeline.

First, on April 11, 2017, two months after Gartenlaub submitted his opening brief in the appeal on February 8, the government released an August 11, 2014 opinion approving the sharing of FISA-obtained data with NCMEC.

Congress established NCMEC in 1984 as a non-governmental organization and it is funded through grants administered by the Department of Justice. One of its purposes is to assist law enforcement in identifying victims of child pornography and other sexual crimes. Indeed, Congress has mandated Department of Justice coordination with NCMEC on these and related issues. See Mot. at 5-8. Furthermore, this Court has approved modifications to these SMPs in individual cases to permit the Government to disseminate information to NCMEC. See Docket Nos. [redacted]. Because of its unique role as a non-governmental organization with a law enforcement function, and because it will be receiving what reasonably appears to be evidence of specific types of crimes for law enforcement purposes, the Government’s amendment to the SMPs comply with FISA under Section 1801(h)(3).1

As noted, in the past the FISC had approved sharing FISA-collected data with NCMEC on a case-by-case basis. But in 2014, in the very weeks it was preparing to arrest Gartenlaub on child porn charges tied to a search that only found the child porn because it used the broader FISA search standard, the government finally made NCMEC sharing part of the standard minimization procedures.

Even on top of this coincidental timing, there are reasons to suspect DOJ codified the NCMEC sharing because of Gartenlaub’s case. For example, the government’s response includes a passage that clearly addresses how NCMEC got involved in the case, bridging the discussion of child porn evidence discovered in plain view in the criminal context and the discussion of its use here.

Non-FISA precedents also foreclose defendant’s claims. Analyzing a Rule 41 search warrant, this Court has held that using child pornography inadvertently discovered during a lawful search is consistent with the Fourth Amendment. Giberson, 527 F.3d at 889-90 (ruling that “the pornographic material [the agent] inadvertently discovered while searching for the documents enumerated in the warrant [related to document identification fraud] was properly used as a basis for the third warrant authorizing the search for child pornography”);

[additional precedents excluded]

[CLASSIFIED INFORMATION REMOVED] With the benefit of NCMEC’s assistance, the government then sought and obtained the August 2014 search warrants, authorizing the search of defendant’s residence and storage units for child pornography. (CR 73; GER 901-53). The fruits of this warrant were then used in defendant’s prosecution. The use of information discovered during the prior lawful January 2014 search in the subsequent search warrant application was proper. Giberson, 527 F.3d at 890.

The redacted discussion must include not only a description of how NCMEC was permitted to get involved, but also a reference to the approval of such sharing as part of the minimization procedures, which (after all) are designed to protect Americans under the Fourth Amendment.

Of particular interest, the government argued that one of the precedents Gartenlaub cited was not binding generally, and especially not binding on the FISC.

The concurring opinion in CDT, upon which defendant relies, does not aid him. That concurrence is not “binding circuit precedent” or a “constitutional requirement,” much less one binding on the FISC. Schesso, 730 F.3d at 1049 (the “search protocol” set forth in the CDT concurrence is not “binding circuit precedent,” not “a[] constitutional requirement[],” and provides “no clear-cut rule”); see CDT, 621 F.3d at 1178 (observing that “[d]istrict and magistrate judges must exercise their independent judgment in every case”); Nessland, 601 Fed. Appx. at 576 (holding that “no special protocol was required” for a computer search). Defendant thus cannot demonstrate any error relating to any FISC-authorized search.

By the time of the search that relied on the FISA-obtained child porn as evidence, the FISC had already approved the use of child porn obtained in a FISA search. So the government could say the CDT case was not binding precedent, because it already had a precedent in hand from the FISC. Of course, it didn’t tell Gartenlaub that.

Of course, that’s not proof that the government codified the NCMEC sharing just for the Gartenlaub case. But there’s a lot of circumstantial evidence that that’s what happened.

The government still has not formally noticed this change to Gartenlaub

As I noted above, the government released the FISC order approving the change in the standard minimization procedures too late to be of use for Gartenlaub’s opening brief. That’s a point EFF and ACLU made in their worthwhile amicus submitted in the appeal.

For example, in this case, the government apparently refused to disclose the relevant FBI minimization procedures to Gartenlaub’s counsel even though other versions of those minimization procedures are publicly available. See Standard Minimization Procedures for FBI Electronic Surveillance and Physical Search Conducted Under FISA (2008). 8

We can debate whether the standard approval for NCMEC sharing is a good thing or whether it invites abuse, offering the FBI an opportunity to use more expansive searches to “find” evidence of child porn that it can then use as leverage in a foreign intelligence context (which I’ll return to). I suspect it is wiser to approve such sharing on a case-by-case basis, as had been the case before Gartenlaub.

But from this point forward, I would assume the FBI will routinely use this provision as an excuse to conduct particularly thorough searches for child porn, on the logic that obtaining any would provide great leverage against an intelligence target.

The timing of the approval of NCMEC sharing under Section 702

As I have said repeatedly, I think the government is withholding some details.

One reason I think that is because of another remarkable coincidence of timing.

As I first reported here, the first notice that the government had approved the sharing with NCMEC in standard minimization procedures came in September 2015, when the government released the 2014 Thomas Hogan Section 702 opinion that approved such sharing under Section 702. The opinion relied on the earlier approval (by Rosemary Collyer), but redacted all reference to the timing and context of it, as well as a footnote relating to it.

I find the timing of both the release and the opinion itself to be of immense interest.

First, the government had no problem releasing this opinion back in 2015, while Gartenlaub was still awaiting trial (though it waited until almost two months after the District judge in his case, Christina Snyder, rejected his FISA challenge on August 6, 2015). So it was fine revealing to potential intelligence targets that it had standardized the approval of using FISA information to pursue child porn cases, just not revealing the dates that might have made it useful for Gartenlaub.

I’m even more interested in the timing of the order: August 26. The day before the FBI got its complaint approved and arrested Gartenlaub.

The FBI had long since submitted FISA information to NCMEC. But it waited until NCMEC sharing had been approved in the standard minimization procedures for both traditional FISA and Section 702 before arresting Gartenlaub.

That’s one of several pieces of data suggesting they may have used Section 702 against Gartenlaub, on top of the other mix of criminal and FISA authorizations.

To be continued.

Updated timeline

Around January 28, 2013: Agent Wesley Harris reads article that leads him to start searching for Chinese spies at Boeing

February 7, 8, and 22, 2013: Harris interviews Gartenlaub

June 18, 2013: Agent Harris obtains search warrant for the Google and Yahoo accounts of Gartenlaub and his wife, Tess Yi

Unknown date: Harris obtains a FISA order

January 29, 2014: FBI searches Gartenlaub’s home, images three hard drives

June 3, 2014: Harris sends files to National Center for Missing and Exploited Children, which confirms some files display known victims

August 11, 2014: Rosemary Collyer approves NCMEC sharing for traditional FISA standard minimization procedures

August 22, 2014: Search warrant obtained for Gartenlaub’s premises

August 26, 2014: Thomas Hogan approves NCMEC sharing for FISA 702

August 27, 2014: FBI searches Gartenlaub’s properties, seizing computers used as evidence in trial, arrests him

August 29, 2014: Government reportedly says it will dismiss charges if Gartenlaub will cooperate on spying

October 23, 2014: Grand jury indicts

August 6, 2015: Christina Snyder rejects Gartenlaub FISA challenge

September 29, 2015: ODNI releases 702 NCMEC sharing opinion

December 10, 2015: Guilty verdict

February 8, 2017: Gartenlaub submits opening brief

April 11, 2017: Government releases traditional FISA NCMEC sharing opinion

Marcy Wheeler is an independent journalist writing about national security and civil liberties. She writes as emptywheel at her eponymous blog, publishes at outlets including Vice, Motherboard, the Nation, the Atlantic, Al Jazeera, and appears frequently on television and radio. She is the author of Anatomy of Deceit, a primer on the CIA leak investigation, and liveblogged the Scooter Libby trial.

Marcy has a PhD from the University of Michigan, where she researched the “feuilleton,” a short conversational newspaper form that has proven important in times of heightened censorship. Before and after her time in academics, Marcy provided documentation consulting for corporations in the auto, tech, and energy industries. She lives with her spouse in Grand Rapids, MI.

[Photo: National Security Agency, Ft. Meade, MD via Wikimedia]

The Domestic Communications NSA Won’t Reveal Are Almost Certainly Obscured Location Communications

The other day, I laid out the continuing fight between Director of National Intelligence Dan Coats and Senator Ron Wyden over the former’s unwillingness to explain why he can’t answer the question, “Can the government use FISA Act Section 702 to collect communications it knows are entirely domestic?” in unclassified form. As I noted, Coats is parsing the difference between “intentionally acquir[ing] any communication as to which the sender and all intended recipients are known at the time of acquisition to be located in the United States,” which Section 702 prohibits, and “collect[ing] communications [the government] knows are entirely domestic,” which this exchange and Wyden’s long history of calling out such things clearly indicate the government does.

As I noted, the earlier iteration of this debate took place in early June. Since then, we’ve gotten two sets of documents that all but prove that the entirely domestic communications the NSA refuses to tell us about are communications that obscure their location, probably via Tor or VPNs.

Most Entirely Domestic Communications Collected Via Upstream Surveillance in 2011 Obscured Their Location

The first set of documents comprises those on the 2011 discussion about upstream collection recently liberated by Charlie Savage. They show that in the September 7, 2011 hearing, John Bates told the government that he believed the collection of discrete communications the government had not examined in its sampling might also contain “about” communications that were entirely domestic. (PDF 113)

We also have this other category, in your random sampling, again, that is 9/10ths of the random sampling that was set aside as being discrete communications — 45,000 out of the 50,000 — as to which our questioning has indicated we have a concern that some of the about communications may actually have wholly domestic communications.

And I don’t think that you’ve really assessed that, either theoretically or by any actual examination of those particular transactions or communications. And I’m not indicating to you what I expect you to do, but I do have this concern that there are a fair number of wholly domestic communications in that category, and there’s nothing–you really haven’t had an opportunity to address that, but there’s nothing that has been said to date that would dissuade me from that conclusion. So I’m looking there for some convincing, if you will, assessment of why there are not wholly domestic communications with that body which is 9/10s of the random sample.

In a filing submitted two days later, the government tried to explain away the possibility this would include (many) domestic communications. (The discussion responding to this question starts at PDF 120.) First, the NSA used technical means to determine that 41,272 of the 45,359 communications in the sample were not entirely domestic. That left 4,087 communications, which the NSA was able to analyze in just 48 hours. Of those, the NSA found just 25 that were not to or from a tasked selector (meaning they were “abouts” or correlated identities, described as “potentially alternate accounts/addresses/identifiers for current NSA targets” in footnote 7, which may be the first public confirmation that NSA collects on correlated identifiers). NSA then did the same kind of analysis on these 25 communications that it does as part of its pre-tasking determination that a target is located outside the US. This focused entirely on location data.

Notably, none of the reviewed transactions featured an account/address/identifier that resolved to the United States. Further, each of the 25 communications contained location information for at least one account/address/identifier such that NSA’s analysts were able assess [sic] that at least one communicant for each of these 25 communications was located outside of the United States. (PDF 121)

Note that the government here (finally) drops the charade that these are simply emails, discussing three kinds of collection: accounts (which could be both email and messenger accounts), addresses (which, accounts aside, would significantly include IP addresses), and identifiers. And they say that, having identified an overseas location for at least one communicant, NSA treats the communication as an overseas communication.

The next paragraph is even more remarkable. Rather than doing more analysis on just those 25 communications, it effectively argues that because latency is bad, it’s safe to assume that any service that is available entirely within the US will be delivered to an American entirely within the US, and so those 25 communications must not be American.

Given the United States’ status as the “world’s premier electronic communications hub,” and further based on NSA’s knowledge of Internet routing patterns, the Government has already asserted that “the vast majority of communications between persons located in the United States are not routed through servers outside the United States.” See the Government’s June 1, 2011 Submission at 11. As a practical matter, it is a common business practice for Internet and web service providers alike to attempt to deliver their customers the best user experience possible by reducing latency and increasing capacity. Latency is determined in part by the geographical distance between the user and the server, thus, providers frequently host their services on servers close to their users, and users are frequently directed to the servers closest to them. While such practices are not absolute in any respect and are wholly contingent on potentially dynamic practices of particular service providers and users,9 if all parties to a communication are located in the United States and the required services are available in the United States, in most instances those communications will be routed by service providers through infrastructure wholly within the United States.

Amid a bunch of redactions (including footnote 9, which is around 16 lines long and entirely redacted), the government then claims that its IP filters would ensure that it wouldn’t pick up any of the entirely domestic exceptions to what I’ll call its “avoidance of latency” assumption and so these 25 communications are no biggie, from a Fourth Amendment perspective.

Of course, the entirety of this unredacted discussion presumes that all consumers will be working with providers whose goal is to avoid latency. None of the unredacted discussion admits that some consumers choose to accept some latency in order to obscure their location by routing their traffic through one (VPN) or multiple (Tor) servers distant from their location, including servers located overseas.
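To make that blind spot concrete, here is a minimal sketch, in Python, of a location check keyed to the observed IP address. Everything in it (the toy geolocation table, the documentation-range IP addresses, the function names) is invented for illustration; it is not how NSA’s filters actually work, but it shows why such a filter tells you nothing about a user who routes traffic through an overseas VPN or Tor exit.

```python
# Toy model of a location check keyed to the observed IP address. The geolocation
# table and the (documentation-range) IPs are invented; real filters are far more
# elaborate, but they share the same blind spot.

TOY_GEO_DB = {
    "198.51.100.7": "US",   # pretend this address is in the US
    "203.0.113.42": "DE",   # pretend this is an overseas VPN or Tor exit node
}

def apparent_location(observed_ip):
    """Country the observed IP geolocates to, or UNKNOWN."""
    return TOY_GEO_DB.get(observed_ip, "UNKNOWN")

def treated_as_foreign(observed_ip):
    """Mimics an IP filter that discards only traffic resolving to the US."""
    return apparent_location(observed_ip) != "US"

# A user physically in the US whose traffic exits through the overseas node:
print(treated_as_foreign("203.0.113.42"))  # True: looks foreign to the filter
# The same user connecting directly from a US address:
print(treated_as_foreign("198.51.100.7"))  # False: correctly filtered as domestic
```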

For what it’s worth, I think the estimate Bates did on his own of the number of these SCTs was high, even for 2011. He guessed there would be 46,000 entirely domestic communications collected each year; by my admittedly rusty math, it appears it would be closer to 12,000 (25 / 50,000 comms in the sample = .05% of the total; .05% of the 11,925,000 upstream transactions in that 6 month period = 5,962, times 2 = roughly 12,000 a year). Still, it was a bigger part of the entirely domestic upstream collection than those collected as MCTs, and all those entirely domestic communications have been improperly back door searched in the interim.
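For anyone who wants to check that rusty math, here is the same back-of-the-envelope calculation spelled out. The inputs are only the figures quoted above from the 2011 filings, and the estimate inherits all of their assumptions.

```python
# Reproduces the back-of-the-envelope estimate above; all inputs come from the
# 2011 filings as described in this post.
sample_size = 50_000                       # discrete communications in the random sample
wholly_domestic_in_sample = 25             # comms not to or from a tasked selector
upstream_transactions_six_months = 11_925_000

rate = wholly_domestic_in_sample / sample_size            # 0.0005, i.e. 0.05%
per_six_months = rate * upstream_transactions_six_months  # ~5,962
per_year = 2 * per_six_months                             # ~11,925, call it 12,000

print(round(per_six_months), round(per_year))  # 5962 11925 (versus Bates's guess of 46,000)
```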

Collyer claims to have ended “about” collection but admits upstream will still collect entirely domestic communications

Now, if that analysis done in 2011 were applicable to today’s collection, there shouldn’t be a way for the NSA to collect entirely domestic communications today. That’s because all of those 25 potentially domestic comms were described as “about” collection. Rosemary Collyer has, based on what appears to me to be an imperfect understanding of upstream collection, shut down “about” collection. So that should have eliminated the possibility for entirely domestic collection via upstream, right?

Nope.

As she admits in her opinion, it will still be possible for the NSA to “acquire an MCT” (that is, bundled collection) “that contains a domestic communication.”
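To see how that can happen, here is a toy model of a multi-communication transaction (MCT). It is my own simplification, not anything drawn from the opinions: a “transaction” bundles several discrete communications, and the whole bundle is acquired if any one of them is to or from a tasked selector, so wholly domestic messages can ride along even with no “about” matching at all.

```python
# Toy model of a multi-communication transaction (MCT). My own simplification
# for illustration; not how NSA systems actually work.

TASKED_SELECTORS = {"target@example.org"}   # hypothetical tasked selector

def acquire_transaction(bundle):
    """Acquire the whole bundle if ANY message in it is to or from a tasked selector."""
    return any(sender in TASKED_SELECTORS or recipient in TASKED_SELECTORS
               for sender, recipient in bundle)

# One transaction: the target's message plus two wholly domestic messages.
mct = [
    ("target@example.org", "associate@example.net"),
    ("alice@example.com", "bob@example.com"),    # wholly domestic
    ("carol@example.com", "dave@example.com"),   # wholly domestic
]

print(acquire_transaction(mct))  # True: the domestic messages ride along with the bundle
```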

So there must be something that has changed since 2011 that would lead NSA to collect entirely domestic communications even when those communications don’t include an “about” selector.

In 2014 Collyer enforced a practice that would expose Americans to 702 collection

Which brings me back to the practice approved in 2014 under which, according to providers newly served with directives under the practice, “the communications of U.S. person will be collected as part of such surveillance.”

As I laid out in this post, in 2014 Thomas Hogan approved a change in the targeting procedures. Previously, all users of a targeted facility had to be foreign for it to qualify as a foreign target. But, under some “limited” exception, Hogan for the first time permitted the NSA to collect on a facility even if Americans used that facility as well, along with the foreign targets.

The first revision to the NSA Targeting Procedures concerns who will be regarded as a “target” of acquisition or a “user” of a tasked facility for purposes of those procedures. As a general rule, and without exception under the NSA targeting procedures now in effect, any user of a tasked facility is regarded as a person targeted for acquisition. This approach has sometimes resulted in NSA’s becoming obligated to detask a selector when it learns that [redacted]

The relevant revision would permit continued acquisition for such a facility.

It appears that Hogan agreed it would be adequate to weed out American communications after collection in post-task analysis.

Some months after this change, some providers got some directives (apparently spanning all three known certificates), and challenged them, though of course Collyer didn’t permit them to read the Hogan opinion approving the change.

Here’s some of what Collyer’s opinion enforcing the directives revealed about the practice.

Collyer’s opinion includes more of the provider’s arguments than the Reply did. It describes the Directives as involving “surveillance conducted on the servers of a U.S.-based provider” in which “the communications of U.S. person will be collected as part of such surveillance.” (29) It says [in Collyer’s words] that the provider “believes that the government will unreasonably intrude on the privacy interests of United States persons and persons in the United States [redacted] because the government will regularly acquire, store, and use their private communications and related information without a foreign intelligence or law enforcement justification.” (32-3) It notes that the provider argued there would be “a heightened risk of error” in tasking its customers. (12) The provider argued something about the targeting and minimization procedures “render[ed] the directives invalid as applied to its service.” (16) The provider also raised concerns that because the NSA “minimization procedures [] do not require the government to immediately delete such information[, they] do not adequately protect United States person.” (26)

[snip]

Collyer, too, says a few interesting things about the proposed surveillance. For example, she refers to a selector as an “electronic communications account” as distinct from an email — a rare public admission from the FISC that 702 targets things beyond just emails. And she treats these Directives as an “expansion of 702 acquisitions” to some new provider or technology.

Now, there’s no reason to believe this provider was involved in upstream collection. Clearly, they’re being asked to provide data from their own servers, not from the telecom backbone (in fact, I wonder whether this new practice is why NSA has renamed “PRISM” “downstream” collection).

But we know two things. First: the discrete domestic communications that got sucked up in upstream collection in 2011 appear to have obscured their location. Second: there is now a means of collecting bundles of communications via upstream collection (assuming Collyer’s use of MCT here is correct, which it might not be) such that even communications involving no “about” collection would be swept up.

Again, the evidence is still circumstantial, but there is increasing evidence that in 2014 the NSA got approval to collect on servers that obscure location, and that this is the remaining kind of collection (which might exist under both upstream and downstream collection) that will knowingly sweep up entirely domestic communications under Section 702. That’s the collection, it seems likely, that Coats doesn’t want to admit.

The problems with permitting collection on location-obscured Americans

If I’m right about this, then there are three really big problems with this practice.

First, in 2011, location-obscuring servers would not themselves be targeted. Communications using such servers would only be collected (if the NSA’s response to Bates is to be believed) if they included an “about” selector.

But it appears there is now some collection that specifically targets those location-obscuring servers, and knowingly collects US person communications along with whatever else the government is after. If that’s right, then it will affect far more than just 12,000 people a year.

That’s especially true given that a lot more people are using location-obscuring servers now than on October 3, 2011, when Bates issued his opinion. Tor usage in the US has gone from around 150,000 mean users a day to around 430,000 users.

And that’s just Tor. While fewer VPN users will consistently use overseas servers, sometimes it will happen for efficacy reasons and sometimes it will happen to access content that is unavailable in the US (like decent Olympics coverage).

In neither of Collyer’s opinions did she ask for the kind of numerical counts of people affected that Bates asked for in 2011. If 430,000 Americans a day are being exposed to this collection under the 2014 change, it represents a far bigger problem than the one Bates called a Fourth Amendment violation in 2011.

Finally, and perhaps most importantly, Collyer newly permitted back door searches on upstream collection, even though she knew that (for some reason) it would still collect US person communications. So not only could the NSA collect and hold location-obscured US person communications, but those communications might be accessed (if they’re not encrypted) via back door searches that (with Attorney General approval) don’t require a FISA order (though Americans back door searched by NSA are often covered by FISA orders).

In other words, if I’m right about this, the NSA can use 702 to collect on Americans. And the NSA will be permitted to keep what it finds (on a communication-by-communication basis) if it falls under one of four exceptions to the destruction requirement.

The government is, once again, fighting Congressional efforts to provide a count of how many Americans are getting sucked up in 702 (even though the documents liberated by Savage reveal that such a count wouldn’t take as long as the government keeps claiming). If any of this speculation is correct, it would explain the reluctance. Because once the NSA admits how much US person data it is collecting, that collection becomes illegal under John Bates’ 2010 PRTT order.


[Photo: National Security Agency, Ft. Meade, MD via Wikimedia]

Did NSA Start Using Section 702 to Collect from VPNs in 2014?

I’ve finally finished reading the set of 702 documents I Con the Record dumped a few weeks back. I did two posts on the dump and a related document Charlie Savage liberated. Both pertain, generally, to whether a 702 “selector” gets defined in a way that permits US person data to be sucked up as well. The first post reveals that, in 2010, the government tried to define a specific target under 702 (both AQAP and WikiLeaks might make sense given the timing) as including US persons. John Bates asked for legal justification for that, and the government withdrew its request.

The second reveals that, in 2011, as Bates was working through the mess of upstream surveillance, he asked whether the definition of “active user,” as it applies to a multiple communication transaction, referred to the individual user. The question is important because if a facility is defined as being used by a group — say, Al Qaeda or Wikileaks — it’s possible a user of that facility might be an unknown US person, whose communications would only be segregated under the new minimization procedures if that individual user’s communication were reviewed (not that it mattered in the end; NSA doesn’t appear to have implemented the segregation regime in meaningful fashion). Bates never got a public answer to that question, which is one of a number of reasons why Rosemary Collyer’s April 26 702 opinion may not solve the problem of upstream collection, especially not with back door searches permitted.

As it happens, some of the most important documents released in the dump may pertain to a closely related issue: whether the government can collect on selectors it knows may be used by US persons, only to weed out the US persons after the fact.

In 2014, a provider challenged orders (individual “Directives” listing account identifiers NSA wanted to collect) that it said would amount to conducting surveillance “on the servers of a U.S.-based provider” in which “the communications of U.S. persons will be collected as part of such surveillance.” The provider was prohibited from reading the opinions that set the precedent permitting this kind of collection. Unsurprisingly, the provider lost its challenge, so we should assume that some 702 collection collects US person communications, using the post-tasking process rather than pre-targeting intelligence to protect American privacy.

The documents

The documents that lay out the failed challenge are:

2014, redacted date: ACLU Document 420: The government response to the provider’s filing supporting its demand that FISC mandate compliance.

2014, redacted date: EFF Document 13: The provider(s) challenging the Directives asked for access to two opinions the government relied on in their argument. Rosemary Collyer refused to provide them, though they have since been released.

2014, redacted date: EFF Document 6 (ACLU 510): Unsurprisingly, Collyer also rejected the challenge to the individual Directives, finding that post-tasking analysis could adequately protect Americans.

The two opinions the providers requested, but were refused, are:

September 4, 2008 opinion: This opinion, by Mary McLaughlin, was the first approval of FAA certifications after passage of the law. It lays out many of the initial standards that would be used with FAA (which changed slightly from PAA). As part of that, McLaughlin adopted standards regarding what kinds of US person collection would be subject to the minimization procedures.

August 26, 2014 opinion: This opinion, by Thomas Hogan, approved the certificates under which the providers had received Directives (which means the challenge took place between August and the end of 2014). But the government also probably relied on this opinion for a change Hogan had just approved, permitting NSA to remain tasked on a selector even if US persons also used the selector.

The argument also relies on the October 3, 2011 John Bates FAA opinion and the August 22, 2008 FISCR opinion denying Yahoo’s challenge to Protect America Act. The latter was released in a second, less redacted form on September 11, 2014, which means the challenge likely post-dated that release.

The government’s response

The government’s response consists of a filing by Stuart Evans (who has become DOJ’s go-to 702 hawk) as well as a declaration submitted by someone in NSA who had already reviewed some of the taskings done under the 2014 certificates (which again suggests this challenge must date to September at the earliest). There appear to be four sections to Evans’ response. Of those sections, the only one left substantially unredacted — as well as the bulk of the SIGINT declaration — pertains to the Targeting Procedures. So while targeting isn’t the only thing the provider challenged (another appears to be certification of foreign intelligence value), it appears to be the primary thing.

Much of what is unredacted reviews the public details of NSA’s targeting procedure. Analysts have to use the totality of circumstances to figure out whether someone is a non-US person located overseas likely to have foreign intelligence value, relying on things like other SIGINT, HUMINT, and (though the opinion redacts this) geolocation information and/or filters to weed out known US IPs. After a facility has been targeted, the analyst is required to do post-task analysis, both to make sure that the selector is the one intended and to make sure that no new information identifies the selector as being used by a US person, as well as making sure that the target hasn’t “roamed” into the US. Post-task analysis also ensures that the selector really is providing foreign intelligence information (though in practice, per PCLOB and other sources, this is not closely reviewed).

Of particular importance, Evans dismisses concerns about what happens when a selector gets incorrectly tasked as a foreigner. “That such a determination may later prove to be incorrect because of changes in circumstances or information of which the government was unaware does not render unreasonable either the initial targeting determination or the procedures used to reach it.”

Evans also dismisses the concern that minimization procedures don’t protect the providers’ customers (presumably because those procedures provide four ways US person content may be retained with DIRNSA approval). He relies on the 2008 opinion, which states in part…

The government argues that, by its terms, Section 1806(i) applies only to a communication that is “unintentionally acquired,” not to a communication that is intentionally acquired under a mistaken belief about the location or non-U.S. person status of the target or the location of the parties to the communication. See Government’s filing of August 28, 2008. The Court finds this analysis of Section 1806(i) persuasive, and on this basis concludes that Section 1806(i) does not require the destruction of the types of communications that are addressed by the special retention provisions.

Evans then quotes McLaughlin judging that minimization procedures “constitute a safeguard against improper use of information about U.S. persons that is inadvertently or incidentally acquired.” In other words, as proof that the minimization procedures are adequate, he cites an opinion that permits the government to treat material that was intentionally targeted, even if it is later discovered to be an American’s communication, differently than it treats other US person information.

The missing 2014 opinion references

As noted above, the provider challenging these Directives asked for both the 2008 opinion (cited liberally throughout the unredacted discussion in the government’s reply) and the 2014 one, which barely appears at all beyond the initial citation.  Given that Collyer reviewed substantial language from both opinions in denying the provider’s request to obtain them, the discussion must go beyond simply noting that the 2014 opinion governs the Directives in question. There must be something in the 2014 opinion, probably the targeting procedures, that gets cited in the vast swaths of redactions.

That’s especially true given that the first page of Evans’ response claims the Directives address “a critical, ongoing foreign intelligence gap.” So it makes sense that the government would get some new practice approved in that year’s certification process, then serve Directives ostensibly authorized by the new certificate, only to have a provider challenge a new type of request and/or a new kind of provider challenge its first Directives.

One thing stands out in the 2014 opinion that might indicate the closing of a foreign intelligence gap.

Prior to 2014, the NSA could say an entity — say, Al Qaeda — used a facility, meaning it would suck up anyone who used that facility (think how useful it would be to declare a chat room a facility, for example). But (again, prior to 2014) as soon as a US person started “using” that facility — the word use here is squishy, since someone talking to the target would not count as “using” it, but as incidental collection — then NSA would have to detask.

The 2014 certifications for the first time changed that.

The first revision to the NSA Targeting Procedures concerns who will be regarded as a “target” of acquisition or a “user” of a tasked facility for purposes of those procedures. As a general rule, and without exception under the NSA targeting procedures now in effect, any user of a tasked facility is regarded as a person targeted for acquisition. This approach has sometimes resulted in NSA’s becoming obligated to detask a selector when it learns that [redacted]

The relevant revision would permit continued acquisition for such a facility.

[snip]

For purposes of electronic surveillance conducted under 50 U.S.C. §§ 1804-1805, the “target” of the surveillance ‘”is the individual or entity … about whom or from whom information is sought.”‘ In re Sealed Case, 310 F.3d 717, 740 (FISA Ct. Rev. 2002) (quoting H.R. Rep. 95-1283, at 73 (1978)). As the FISC has previously observed, “[t]here is no reason to think that a different meaning should apply” under Section 702. September 4, 2008 Memorandum Opinion at 18 n.16. It is evident that the Section 702 collection on a particular facility does not seek information from or about [redacted].

In other words, for the first time in 2014, the FISC bought off on letting the NSA target “facilities” that were used by a target as well as possibly innocent Americans, based on the assumption that the NSA would weed out the Americans in the post-tasking process; and anyway, Hogan figured, the NSA was unlikely to read that US person data because that’s not what it was interested in.
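Here is a hedged sketch of that rule change as I understand it. The logic is my own reconstruction from the unredacted language; the function names and data structure are invented, and the actual “limited” circumstances Hogan approved remain redacted.

```python
# My own reconstruction of the pre- and post-2014 rules, for illustration only;
# the actual "limited" circumstances approved in 2014 remain redacted.

def pre_2014_rule(facility_users):
    """Before 2014: a known US person using the tasked facility forced detasking."""
    if any(us_person for _, us_person in facility_users):
        return "DETASK"                      # stop collecting on the facility
    return "CONTINUE"

def post_2014_rule(facility_users):
    """After the 2014 change: keep collecting on the facility and rely on
    post-tasking review to weed out the US person communications."""
    return "CONTINUE, weed out US persons post-tasking"

users = [("foreign_target", False), ("unknown_american", True)]

print(pre_2014_rule(users))   # DETASK
print(post_2014_rule(users))  # CONTINUE, weed out US persons post-tasking
```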

Mind you, in his opinion approving the practice, Hogan included a bunch of mostly redacted language purporting to narrow the application of this provision.

This amended provision might be read literally to apply where [redacted]

But those circumstances fall outside the accepted rationale for this amendment. The provision should be understood to apply only where [redacted]

But Hogan appears to be policing this limiting language by relying on the “rationale” of the approval, not any legal distinction.

The description of this change also appears, in a 3.5-page discussion, as the first item in the tasking section of the government’s 2014 application, which Collyer would attach to her opinion.

Collyer’s opinion

Collyer’s opinion includes more of the provider’s arguments than the Reply did. It describes the Directives as involving “surveillance conducted on the servers of a U.S.-based provider” in which “the communications of U.S. person will be collected as part of such surveillance.” (29) It says [in Collyer’s words] that the provider “believes that the government will unreasonably intrude on the privacy interests of United States persons and persons in the United States [redacted] because the government will regularly acquire, store, and use their private communications and related information without a foreign intelligence or law enforcement justification.” (32-3) It notes that the provider argued there would be “a heightened risk of error” in tasking its customers. (12) The provider argued something about the targeting and minimization procedures “render[ed] the directives invalid as applied to its service.” (16) The provider also raised concerns that because the NSA “minimization procedures [] do not require the government to immediately delete such information[, they] do not adequately protect United States person.” (26)

All of which suggests the provider believed that significant US person data would be collected off its servers without any requirement that the US person data be deleted right away. And something about this provider’s customers put them at heightened risk of such collection, beyond (for example) regular upstream surveillance, which was already public by the time of this challenge.

Collyer, too, says a few interesting things about the proposed surveillance. For example, she refers to a selector as an “electronic communications account” as distinct from an email — a rare public admission from the FISC that 702 targets things beyond just emails. And she treats these Directives as an “expansion of 702 acquisitions” to some new provider or technology. Finally, Collyer explains that “the 2014 Directives are identical, except for each directive referencing the particular certification under which the directive is issued.” This means that the provider received more than one Directive, and they fall under more than one certificate, which means that the collection is being used for more than one kind of use (counterterrorism, counterproliferation, and foreign government plus cyber). So the provider is used by some combination of terrorists, proliferators, spies, or hackers.

Ultimately, though, Collyer rejected the challenge, finding the targeting and minimization procedures to be adequate protection of the US person data collected via this new approach.

Now, it is not certain that all this relied on the new targeting procedure. Little in Collyer’s language reflects even passing familiarity with that new provision. Indeed, at one point she described the risk to US persons as being that “the government may mistakenly task the wrong account,” which suggests a more individualized impact.

Except that, after almost five entirely redacted pages discussing the provider’s claim that the targeting procedures are insufficient, Collyer argues that such issues don’t arise that frequently, and even if they do, they’d be dealt with in post-targeting analysis.

The Court is not convinced that [redacted] under any of the above-described circumstances occurs frequently, or even on a regular basis. Assuming arguendo that such scenarios will nonetheless occur with regard to selectors tasked under the 2014 Directives, the targeting procedures address each of the scenarios by requiring NSA to conduct post-targeting analysis [redacted]

Similarly, Collyer dismissed the likelihood that Americans’ data would be tasked that often.

[O]ne would not expect a large number of communications acquired under such circumstances to involve United States person [citation to a redacted footnote omitted]. Moreover, a substantial proportion of the United States person communications acquired under such circumstances are likely to be of foreign intelligence value.

As she did in her recent shitty opinion, Collyer appears to have made these determinations without requiring NSA to provide real numbers on past frequency or likely future frequency.

However often such collection had happened in the past (which she didn’t ask the NSA to explain) or would happen as this new provider started responding to Directives, this language does sound like it might implicate the new case of a selector that might be used both by legitimate foreign intelligence targets and by innocent Americans.

Does the government use 702 collection to obtain VPN traffic?

As I noted, it seems likely, though not certain, that the new collection exploited the new permission to keep tasking a selector even if US persons were using it, in addition to the actual foreigners targeted. I’m still trying to puzzle this through, but I’m wondering if the provider was a VPN provider, being asked to hand over data as it passed through the VPN server. (I think the application approved in 2014 would implicate Tor traffic as well, but I can’t see how a Tor provider would challenge the Directives, unless it was Nick Merrill again; in any case, there’d be no discussion of an “account” with Tor in the way Collyer uses it).

What does this mean for upstream surveillance?

In any case, whether or not my guesstimates about what this is are correct, the description of the 2014 change and the discussion about the challenge would seem to raise very important questions given Collyer’s recent decision to expand the searching of upstream collection. While the described collection from a provider’s server is not upstream, it would seem to raise the same problems: the collection of a great deal of associated US person communications that could later be brought up in a search. There’s no hint in any of the public opinions that such problems were considered.


[Photo: National Security Agency, Ft. Meade, MD via Wikimedia]

The Problems with Rosemary Collyer’s Shitty Upstream 702 Opinion

This post took a great deal of time, both in this go-around, and over the years to read all of these opinions carefully. Please consider donating to support this work. 

It often surprises people when I tell them this, but in general, I’ve got a much better opinion of the FISA Court than most other civil libertarians. I do so because I’ve actually read the opinions. And while there are some real stinkers in the bunch, I recognize that the court has long been a source of some control over the executive branch, at times even applying more stringent standards than criminal courts.

But Rosemary Collyer’s April 26, 2017 opinion approving new Section 702 certificates undermines all the trust and regard I have for the FISA Court. It embodies everything that can go wrong with the court — which is all the more inexcusable given efforts to improve the court’s transparency and process since the Snowden leaks. I don’t think she understood what she was ruling on. And when faced with evidence of years of abuse (and the government’s attempt to hide it), she did little to rein in or even ensure accountability for those abuses.

This post is divided into three sections:

  • My analysis of the aspects of the opinion that deal with the upstream surveillance
    • Describing upstream searches
    • Refusing to count the impact
    • Treating the problem as exclusively about MCTs, not SCTs
    • Defining key terms
    • Failing to appoint (or even consider appointing) an amicus
    • Approving back door upstream searches
    • Imposing no consequences
  • A description of all the documents I Con the Record released — and, more importantly, the ones it did not release (if you're in the mood for weeds, start there)
  • A timeline showing how NSA tried to hide these violations from FISC

Opinion

The Collyer opinion deals with a range of issues: an expansion of data sharing with the National Counterterrorism Center, the resolution of past abuses, and the rote approval of 702 certificates for form and content.

But the big news from the opinion is that the NSA discovered it had been violating the terms of upstream FISA collection set in 2011 (after having violated the terms of upstream FISA set in 2007-2008, terms which were themselves set after Stellar Wind had violated FISA from 2002 on). After five months of trying and failing to find an adequate solution to the problem, NSA proposed and Collyer approved new rules for upstream collection. The collection conducted under FISA Section 702 is narrower than it had been, because NSA can no longer do "about" collection (which basically means searching for some signature in the "content" of a communication). But it is broader — and still potentially problematic — because NSA now has permission to do the back door searches of upstream-collected data that it had, in reality, been doing all along.

My analysis here will focus on the issue of upstream collection, because that is what matters going forward, though I will note problems with the opinion addressing other topics to the extent they support my larger point.

Describing upstream searches

Upstream collection under Section 702 is the collection of communications identified by packet sniffing for a selector at telecommunication switches. As an example, if the NSA wants to collect the communications of someone who doesn’t use Google or Yahoo, they will search for the email address as it passes across circuits the government has access to (overseas, under EO 12333) or that a US telecommunications company runs (domestically, under 702; note many of the data centers at which this occurs have recently changed hands). Stellar Wind — the illegal warrantless wiretap program done under Bush — was upstream surveillance. The period in 2007 when the government tried to replace Stellar Wind under traditional FISA was upstream surveillance. And the Protect America Act and FISA Amendments Act have always included upstream surveillance as part of the mix, even as they moved more (roughly 90% according to a 2011 estimate) of the collection to US-based providers.
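
To make those mechanics concrete, here is a toy sketch of what selector-based packet sniffing amounts to. Everything in it (the scapy library, the interface name, the selector) is my own illustrative assumption, not a description of any actual collection system, which operates at vastly larger scale on carrier backbones.

```python
# Illustrative only: a toy packet-capture loop that flags packets whose payload
# contains a tasked selector (for example, an email address). Requires root
# privileges to run; scapy is used purely for illustration.
from scapy.all import sniff, Raw

TASKED_SELECTORS = [b"target@example.com"]  # hypothetical selector

def check_packet(pkt):
    if Raw in pkt:
        payload = bytes(pkt[Raw].load)
        for selector in TASKED_SELECTORS:
            if selector in payload:
                print("selector hit:", selector, pkt.summary())

sniff(iface="eth0", prn=check_packet, store=False)  # interface name is an assumption
```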

The thing is, there’s no reason to believe NSA has ever fully explained how upstream surveillance works to the FISC, not even in this most recent go-around (and it’s now clear that they always lied about how they were using and processing a form of upstream collection to get Internet metadata from 2004 to 2011). Perhaps ironically, the most detailed discussions of the technology behind it likely occurred in 2004 and 2010 in advance of opinions authorizing collection of metadata, not content, but NSA was definitely not fully forthcoming in those discussions about how it processed upstream data.

In 2011, the NSA explained (for the first time) that it was not just collecting communications by searching for a selector in metadata; it was also collecting communications that included a selector as content. One reason it might do this is to obtain forwarded emails involving a target, but there are clearly other reasons. As a result of looking for selectors as content, NSA got a lot of entirely domestic communications, both in what NSA called multiple communication transactions ("MCTs," basically emails and other things sent in bundles) and in single communication transactions (SCTs) that NSA didn't identify as domestic, perhaps because the participants used Tor or a VPN or the communications were routed overseas for some other reason. The presiding judge in 2011, John Bates, ruled that the bundled stuff violated the Fourth Amendment and imposed new protections — including the requirement that NSA segregate that data — for some of the MCTs. Bizarrely, he did not rule the domestic SCTs problematic, on the logic that those entirely domestic communications might have foreign intelligence value.
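
To picture the MCT/SCT distinction concretely, here is a minimal, purely illustrative sketch (my own framing, not NSA's or the court's): an upstream "transaction" is whatever unit comes off the wire, and it may bundle one or many discrete communications.

```python
# Illustrative only: modeling the MCT/SCT distinction described above.
from dataclasses import dataclass
from typing import List

@dataclass
class Communication:
    sender_domestic: bool
    recipient_domestic: bool
    contains_selector: bool  # the selector appears in the content (an "about" hit)

@dataclass
class Transaction:
    communications: List[Communication]

    @property
    def is_mct(self) -> bool:
        # A multiple communication transaction bundles more than one discrete communication
        return len(self.communications) > 1

    @property
    def wholly_domestic(self) -> bool:
        # Any communication with both ends in the US is the problem Bates worried about
        return any(c.sender_domestic and c.recipient_domestic for c in self.communications)

# A single domestic email that merely mentions the selector: an "about" SCT
sct = Transaction([Communication(True, True, True)])
print(sct.is_mct, sct.wholly_domestic)  # False True (domestic, but never segregated)
```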

In the same order, John Bates for the first time let CIA and NSA do something FBI had already been doing: taking US person selectors (like an email address) and searching through already collected content to see what communications they were involved in (this was partly a response to the 2009 Nidal Hasan attack, which FBI didn’t prevent in part because they were never able to pull up all of Hasan’s communications with Anwar al-Awlaki at once). Following Ron Wyden’s lead, these searches on US person content are often called “back door searches” for the way they let the government read Americans’ communications without a warrant. Because of the newly disclosed risk that upstream collection could pick up domestic communications, however, when Bates approved back door searches in 2011, he explicitly prohibited the back door searching of data collected via upstream searches. He prohibited this for all of it — MCTs (many of which were segregated from general repositories) and SCTs (none of which were segregated).

As I've noted, as early as 2013, NSA knew it was conducting "many" back door searches of upstream data. The reasons this kept happening were stupid: in part, to avoid searching upstream data analysts had to affirmatively exclude upstream repositories from the search query (basically by writing "NOT upstream" into a Boolean query), which in turn required them to realize they were searching on a US person selector in the first place. For whatever reason, though, no one got alarmed by reports this was going on — not NSA's overseers, not FISC (which reportedly got notices of these searches), and not Congress (which got notices of them in Semiannual reports, which is how I knew they were going on). So the problem continued; I noted that this was a persistent problem back in August, when NSA and DOJ were still hiding the extent of the problems from FISC.
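
To illustrate the exclusion described above (the query syntax is hypothetical, since NSA's actual query language is not public), the burden sat entirely on the analyst at query time.

```python
# Hypothetical illustration of the "NOT upstream" exclusion described above;
# this is just the shape of the problem, not NSA's actual tooling.
def build_query(selector: str, analyst_remembered_exclusion: bool) -> str:
    query = f'selector:"{selector}"'
    if analyst_remembered_exclusion:
        # The analyst had to carve upstream repositories out of the query by hand
        query += " NOT repository:upstream"
    return query

# Forgetting the exclusion (or not realizing the selector belongs to a US person)
# produced exactly the kind of prohibited back door search of upstream data at issue.
print(build_query("us.person@example.com", analyst_remembered_exclusion=False))
```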

It became clear the problem was far worse than known, however, when NSA started looking into how it dealt with 704 surveillance. Section 704 is the authority the NSA uses to spy on Americans who are overseas. It basically amounts to getting a FISC order to use EO 12333 spying on an American. An IG Report completed in January 2016 generally found 704 surveillance to be a clusterfuck; as part of that, though, the NSA discovered that there were a whole bunch of 704 backdoor searches that weren't following the rules, in part because they were collecting US person communications for periods outside of the period when the FISC had authorized surveillance (for 705(b) surveillance, which is the spying on Americans who are simply traveling overseas, this might mean NSA used EO 12333 to collect on an American when they were in the US). Then NSA's Compliance people (OCO) did some more checking and found still worse problems.

And then the government — the same government that boasted about properly disclosing this to FISC — tried to bury it, basically not even telling FISC about how bad the problem was until days before Collyer was set to approve new certificates in October 2016. Once they did disclose it, Judge Collyer gave NSA first one and then another extension for them to figure out what went wrong. After 5 months of figuring, they were still having problems nailing it down or even finding where the data and searches had occurred. So, finally, facing a choice of ending “about” collection (only under 702 — they can still accomplish the very same thing under EO 12333) or ending searches of upstream data, they chose the former option, which Collyer approved with almost no accountability for all the problems she saw in the process.

Refusing to count the impact

I believe that (at least given what has been made public) Collyer didn’t really understand the issue placed before her. One thing she does is just operate on assumptions about the impact of certain practices. For example, she uses the 2011 number for the volume of total 702 collection accomplished using upstream collection to claim that it is “a small percentage of NSA’s overall collection of Internet communications under Section 702.” That’s likely still true, but she provides no basis for the claim, and it’s possible changes in communication — such as the increased popularity of Twitter — would change the mix significantly.

Similarly, she assumes that MCTs that involve "a non-U.S. person outside the United States" will be "for that reason [] less likely to contain a large volume of information about U.S. person or domestic communications." She makes a similar assumption (this time in her treatment of the new NCTC raw take) about 702 data being less intrusive than individual orders targeted at someone in the US, "which often involve targets who are United States persons and typically are directed at persons in the United States." In both of these, she repeats an assumption John Bates made in 2011 when he first approved back door searches using the same logic — that it was okay to provide raw access to this data, collected without a warrant, because it wouldn't be as impactful as the data collected with an individual order. And the assumption may be true in both cases. But in an age of increasingly global data flows, that remains unproven. Certainly, in the case of ISIS recruiters located in Syria attempting to recruit Americans, it would not be true at all.

Collyer makes the same move at a critical point in the opinion, when she asserts that "NSA's elimination of 'abouts' collection should reduce the number of communications acquired under Section 702 to which a U.S. person or a person in the United States is a party." Again, that's probably true, but it is not clear she has investigated all the possible ways Americans will still be sucked up (which she acknowledges will happen).

And she does this even as NSA was providing her unreliable numbers.

The government later reported that it had inadvertently misstated the percentage of NSA's overall upstream Internet collection during the relevant period that could have been affected by this [misidentification of MCTs] error (the government first reported the percentage as roughly 1.3% when it was roughly 3.7%).

Collyer's reliance on assumptions rather than real numbers is all the more unforgivable given one of the changes she approved with this order: basically, permitting the agencies to conduct otherwise impermissible searches to be able to count how many Americans get sucked up under 702. In other words, she was told, at length, that Congress wants this number (the government's application even cites the April 22, 2016 letter from members of the House Judiciary Committee asking for such a number). Moreover, she was told that NSA had already started trying to do such counts.

The government has since [that is, sometime between September 26 and April 26] orally notified the Court that, in order to respond to these requests and in reliance on this provision of its minimization procedures, NSA has made some otherwise-noncompliant queries of data acquired under Section 702 by means other than upstream Internet collection.

And yet she doesn’t then demand real numbers herself (again, in 2011, Bates got NSA to do at least a limited count of the impact of the upstream problems).

Treating the problem as exclusively about MCTs, not SCTs

But the bigger problem with Collyer's discussion is that she treats the entire problem of upstream collection as being about MCTs, not SCTs. This is true in general — the term single communication transaction or SCT doesn't appear at all in the opinion. But she also, at times, makes claims about MCTs that in fact apply to upstream collection generally, including SCTs. For example, she cites one aspect of NSA's minimization procedures that applies to all upstream collection, but describes it as applying only to MCTs.

A shorter retention period was also put into place, whereby an MCT of any type could not be retained longer than two years after the expiration of the certificate pursuant to which it was acquired, unless applicable criteria were met. And, of greatest relevance to the present discussion, those procedures categorically prohibited NSA analysts from using known U.S.-person identifiers to query the results of upstream Internet collection. (17-18)

Here's the section of the minimization procedures that imposed the two-year retention deadline; it is an entirely different section from the one describing the special handling for MCTs.

Similarly, Collyer cites a passage from the 2015 Hogan opinion stating that upstream “is more likely than other forms of section 702 collection to contain information of or concerning United States person with no foreign intelligence value” (see page 17). But that passage cites to a passage of the 2011 Bates opinion that includes SCTs in its discussion, as in this sentence.

In addition to these MCTs, NSA likely acquires tens of thousands more wholly domestic communications every year, given that NSA’s upstream collection devices will acquire a wholly domestic “about” SCT if it is routed internationally. (33)

Collyer’s failure to address SCTs is problematic because — as I explain here — the bulk of the searches implicating US persons almost certainly searched SCTs, not MCTs. That’s true for two reasons. First, because (at least according to Bates’ 2011 guesstimate) NSA collects (or collected) far more entirely domestic communications via SCTs than via MCTs. Here’s how Bates made that calculation in 2011 (see footnote 32).

NSA ultimately did not provide the Court with an estimate of the number of wholly domestic “about” SCTs that may be acquired through its upstream collection. Instead, NSA has concluded that “the probability of encountering wholly domestic communications in transactions that feature only a single, discrete communication should be smaller — and certainly no greater — than potentially encountering wholly domestic communications within MCTs.” Sept. 13 Submission at 2.

The Court understands this to mean that the percentage of wholly domestic communications within the universe of SCTs acquired through NSA’s upstream collection should not exceed the percentage of MCTs within its statistical sample. Since NSA found 10 MCTs with wholly domestic communications within the 5,081 MCTs reviewed, the relevant percentage is .197% (10/5,081). Aug. 16 Submission at 5.

NSA's manual review found that approximately 90% of the 50,440 transactions in the sample were SCTs. Id. at 3. Ninety percent of the approximately 13.25 million total Internet transactions acquired by NSA through its upstream collection during the six-month period, works out to be approximately 11,925,000 transactions. Those 11,925,000 transactions would constitute the universe of SCTs acquired during the six-month period, and .197% of that universe would be approximately 23,000 wholly domestic SCTs. Thus, NSA may be acquiring as many as 46,000 wholly domestic "about" SCTs each year, in addition to the 2,000-10,000 MCTs referenced above.
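
For anyone who wants to check the math, Bates' arithmetic in that footnote can be reproduced directly from the figures quoted above; here is a minimal sketch (Bates rounds the results down to roughly 23,000 and 46,000).

```python
# Reproducing the arithmetic in footnote 32 of Bates' 2011 opinion, using the
# figures quoted above.
mcts_reviewed = 5_081                   # MCTs in NSA's statistical sample
wholly_domestic_mcts = 10               # wholly domestic communications found among them
rate = wholly_domestic_mcts / mcts_reviewed          # ~0.197%

total_transactions_6mo = 13_250_000     # ~13.25 million upstream transactions in six months
sct_share = 0.90                        # ~90% of the sampled transactions were SCTs
scts_6mo = total_transactions_6mo * sct_share        # ~11,925,000 SCTs

domestic_scts_6mo = scts_6mo * rate                  # ~23,000 wholly domestic SCTs
print(round(domestic_scts_6mo), round(domestic_scts_6mo) * 2)  # per six months, per year
```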

Assuming some of this happens because people use VPNs or Tor, the amount of entirely domestic communications collected via upstream would presumably have increased significantly in the interim period. Indeed, the redaction in this passage likely hides a reference to technologies that obscure location.

If so, it would seem to be an acknowledgment that NSA collects, via upstream, entirely domestic communications that obscure their location.

The other reason the problem is likely worse with SCTs is that — as I noted above — no SCTs were segregated from NSA's general repositories, whereas some MCTs were supposed to be (and in any case, in 2011 the SCTs constituted by far the bulk of upstream collection).

Now, Collyer’s failure to deal with SCTs may or may not matter for her ultimate analysis that upstream collection without “about” collection solves the problem. Collyer limits the collection of abouts by limiting upstream collection to communications where “the active user is the target of acquisition.” She describes “active user” as “the user of a communication service to or from whom the MCT is in transit when it is acquired (e.g., the user of an e-mail account [half line redacted].” If upstream signatures are limited to emails and texts, that would seem to fix the problem. But upstream wouldn’t necessarily be limited to emails and texts — upstream collection would be particularly valuable for searching on other kinds of selectors, such as an encryption key, and there may be more than one person who would use those other kinds of selectors. And when Collyer says, “NSA may target for acquisition a particular ‘selector,’ which is typically a facility such as a telephone number or e-mail address,” I worry she’s unaware or simply not ensuring that NSA won’t use upstream to search for non-typical signatures that might function as abouts even if they’re not “content.” The problem is treating this as a content/metadata distinction, when “metadata” (however far down in the packet you go) could include stuff that functions like an about selector.
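
A toy example of why the content/metadata line doesn't map cleanly onto packets (the protocol fields and the selector here are my own illustrative choices): a signature like a certificate hash or an encryption key can sit in what is nominally "metadata" and still function exactly like an "about" term.

```python
# Illustrative only: a tasked selector can match fields at several layers of a
# packet, not just the message body, so "not content" does not mean "not an about hit".
packet = {
    "ip":  {"src": "203.0.113.5", "dst": "198.51.100.7"},
    "tls": {"sni": "mail.example.org",          # hostname offered in the handshake
            "cert_sha256": "ab12cd34ef56"},     # made-up certificate fingerprint
    "body": "encrypted application data",
}

TASKED_SELECTOR = "ab12cd34ef56"  # hypothetical non-email selector (a key or cert hash)

def selector_hits(pkt: dict, selector: str) -> list:
    hits = []
    for layer, fields in pkt.items():
        values = fields.values() if isinstance(fields, dict) else [fields]
        hits += [layer for v in values if selector in str(v)]
    return hits

print(selector_hits(packet, TASKED_SELECTOR))  # ['tls']: a hit without touching the body
```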

Defining key terms

Collyer did define "active user," however inadequately. But there are a number of other terms that go undefined in this opinion. By far the funniest is when Collyer notes that the government's March 30 submission promises to sequester upstream data that is stored in "institutionally managed repositories." In a footnote, she notes they don't define the term. Then she pretty much drops the issue. This comes in an opinion that shows FBI data has been wandering around in repositories where it didn't belong and that indicates NSA can't identify where all its 704 data is. Yet she's told there is some other kind of repository and she doesn't make a point of figuring out what the hell that means.

Later, in a discussion of other violations, Collyer introduces the term “data object,” which she always uses in quotation marks, without explaining what that is.

Failing to appoint (or even consider) an amicus

In any case, this opinion makes clear that what should have happened, years ago, is a careful discussion of how packet sniffing works, and where a packet collected by a backbone provider stops being metadata and starts being content, and all the kinds of data NSA might want to and does collect via domestic packet sniffing. (They collect far more under EO 12333.) As mentioned, some of that discussion may have taken place in advance of the 2004 and 2010 opinions approving upstream collection of Internet metadata (though, again, I’m now convinced NSA was always lying about what it would take to process that data). But there’s no evidence the discussion has ever happened when discussing the collection of upstream content. As a result, judges are still using made up terms like MCTs, rather than adopting terms that have real technical meaning.

For that reason, it’s particularly troubling Collyer didn’t use — didn’t even consider using, according to the available documentation — an amicus. As Collyer herself notes, upstream surveillance “has represented more than its share of the challenges in implementing Section 702” (and, I’d add, Internet metadata collection).

At a minimum, when NSA was pitching fixes to this, she should have stopped and said, "this sounds like a significant decision" and brought in amicus Amy Jeffress or Marc Zwillinger to help her think through whether this solution really fixes the problem. Even better, she should have brought in a technical expert who, at a minimum, could have explained to her that SCTs pose as big a problem as MCTs; Steve Bellovin — one of the authors of this paper that explores the content versus metadata issue in depth — was already cleared to serve as the Privacy and Civil Liberties Oversight Board's technical expert, so presumably could easily have been brought in to consult here.

That didn’t happen. And while the decision whether or not to appoint an amicus is at the court’s discretion, Collyer is obligated to explain why she didn’t choose to appoint one for anything that presents a significant interpretation of the law.

A court established under subsection (a) or (b), consistent with the requirement of subsection (c) and any other statutory requirement that the court act expeditiously or within a stated time–

(A) shall appoint an individual who has been designated under paragraph (1) to serve as amicus curiae to assist such court in the consideration of any application for an order or review that, in the opinion of the court, presents a novel or significant interpretation of the law, unless the court issues a finding that such appointment is not appropriate;

For what it’s worth, my guess is that Collyer didn’t want to extend the 2015 certificates (as it was, she didn’t extend them as long as NSA had asked in January), so figured there wasn’t time. There are other aspects of this opinion that make it seem like she just gave up at the end. But that still doesn’t excuse her from explaining why she didn’t appoint one.

Instead, she wrote a shitty opinion that doesn’t appear to fully understand the issue and that defers, once again, the issue of what counts as content in a packet.

Approving back door upstream searches

Collyer’s failure to appoint an amicus is most problematic when it comes to her decision to reverse John Bates’ restriction on doing back door searches on upstream data.

To restate what I suggested above, by all appearances, NSA largely blew off Bates' restriction. Indeed, Collyer notes in passing that, "In practice, however, no analysts received the requisite training to work with the segregated MCTs." Given the persistent problems with back door searches on upstream data, it's hard to believe NSA took that restriction seriously at all (particularly since it refused to consider a technical fix to the requirement to exclude upstream from searches). So Collyer's approval of back door searches of upstream data is, for all intents and purposes, the sanctioning of behavior that NSA refused to stop, even when told to.

And the way in which she sanctions it is very problematic.

First, in spite of her judgment that ending "abouts" collection would fix the problems with (as she described it) MCT collection, she nevertheless laid out a scenario (see page 27) in which an MCT would still include an entirely domestic communication.

Having laid out that there will still be some entirely domestic comms in the collection, Collyer then goes on to say this:

The Court agrees that the removal of "abouts" communications eliminates the types of communications presenting the Court the greatest level of constitutional and statutory concern. As discussed above, the October 3, 2011 Memorandum Opinion (finding the then-proposed NSA Minimization Procedures deficient in their handling of some types of MCTs) noted that MCTs in which the target was the active user, and therefore a party to all of the discrete communications within the MCT, did not present the same statutory and constitutional concerns as other MCTs. The Court is therefore satisfied that queries using U.S.-person identifiers may now be permitted to run against information obtained by the above-described, more limited form of upstream Internet collection, subject to the same restrictions as apply to querying other forms of Section 702 collection.

This is absurd! She has just laid out that there will be some exclusively domestic comms in the collection. Not as many as there were before NSA stopped collecting abouts, but they'll still be there. So she's basically permitting domestic communications to be back door searched, which, if they're found (as she notes), might then be kept based on some claim of foreign intelligence value.

And this is where her misunderstanding of the MCT/SCT distinction is her undoing. Bates prohibited back door searching of all upstream data, both that supposedly segregated because it was most likely to have unrelated domestic communications in it, and that not segregated because even the domestic communications would have intelligence value. Bates’ specific concerns about MCTs are irrelevant to his analysis about back door searches, but that’s precisely what Collyer cites to justify her own decision.

She then applies the 2015 opinion, with its input from amicus Amy Jeffress stating that NSA back door searches that excluded upstream collection were constitutional, to claim that back door searches that include upstream collection would meet Fourth Amendment standards.

The revised procedures subject NSA’s use of U.S. person identifiers to query the results of its newly-limited upstream Internet collection to the same limitations and requirements that apply to its use of such identifiers to query information acquired by other forms of Section 702 collection. See NSA Minimization Procedures § 3(b)(5). For that reason, the analysis in the November 6, 2015 Opinion remains valid regarding why NSA’s procedures comport with Fourth Amendment standards of reasonableness with regard to such U.S. person queries, even as applied to queries of upstream Internet collection. (63)

As with her invocation of Bates' 2011 opinion, she applies analysis that may not fully fit the question before her — because it's not actually clear that the active user restriction really equates the newly limited upstream collection to PRISM collection — as if it does.

Imposing no consequences

The other area where Collyer’s opinion fails to meet the standards of prior ones is in resolution of the problem. In 2009, when Reggie Walton was dealing with first phone and then Internet dragnet problems, he required the NSA to do complete end-to-end reviews of the programs. In the case of the Internet dragnet, the report was ridiculous (because it failed to identify that the entire program had always been violating category restrictions). He demanded IG reports, which seems to be what led the NSA to finally admit the Internet dragnet program was broken. He shut down production twice, first of foreign call records, from July to September 2009, then of the entire Internet dragnet sometime in fall 2009. Significantly, he required the NSA to track down and withdraw all the reports based on violative production.

In 2010 and 2011, dealing with the Internet dragnet and upstream problems, John Bates similarly required written details (and, as noted, actual volume of the upstream problem). Then, when the NSA wanted to retain the fruits of its violative collection, Bates threatened to find NSA in violation of 50 USC 1809(a) — basically, threatened to declare them to be conducting illegal wiretapping — to make them actually fix their prior violations. Ultimately, NSA destroyed (or said they destroyed) their violative collection and the fruits of it.

Even Thomas Hogan threatened NSA with 50 USC 1809(a) to make them clean up willful flouting of FISC orders.

Not Collyer. She went from issuing stern complaints (John Bates was admittedly also good at this) back in October…

At the October 26, 2016 hearing, the Court ascribed the government’s failure to disclose those IG and OCO reviews at the October 4, 2016 hearing to an institutional “lack of candor” on NSA’s part and emphasized that “this is a very serious Fourth Amendment issue.”

… to basically reauthorizing 702 without ever using the reauthorization process as leverage over NSA.

Of course, NSA still needs to take all reasonable and necessary steps to investigate and close out the compliance incidents described in the October 26, 2016 Notice and subsequent submissions relating to the improper use of U.S.-person identifiers to query terms in NSA upstream data. The Court is approving on a going-forward basis, subject to the above-mentioned requirements, use of U.S.-person identifiers to query the results of a narrower form of Internet upstream collection. That approval, and the reasoning that supports it, by no means suggest that the Court approves or excuses violations that occurred under the prior procedures.

That is particularly troubling given that there is no indication, even six months after NSA first (belatedly) disclosed the back door search problems to FISC, that it had finally gotten ahold of the problem.

As Collyer noted, weeks before it submitted its new application, NSA still didn’t know where all the upstream data lived. “On March 17, 2017, the government reported that NSA was still attempting to identify all systems that store upstream data and all tools used to query such data.” She revealed that  some of the queries of US persons do not interact with “NSA’s query audit system,” meaning they may have escaped notice forever (I’ve had former NSA people tell me even they don’t believe this claim, as seemingly nothing should be this far beyond auditability). Which is presumably why, “The government still had not ascertained the full range of systems that might have been used to conduct improper U.S.-person queries.” There’s the data that might be in repositories that weren’t run by NSA, alluded to above. There’s the fact that on April 7, even after NSA submitted its new plan, it was discovering that someone had mislabeled upstream data as PRISM, allowing it to be queried.

Here's the thing. There seems to be no way to have that bad an idea of where the data is and what functions access it and still be able to claim — as Mike Rogers, Dan Coats, and Jeff Sessions apparently did in the certificates submitted in March that didn't get publicly released — that NSA can fulfill the promises it made to FISC. How can the NSA promise to destroy upstream data at an accelerated pace if it admits it doesn't know where that data is? How can NSA promise to implement new limits on upstream collection if that data doesn't get audited?

And Collyer excuses John Bates’ past decision (and, by association, her continued reliance on his logic to approve back door searches) by saying the decision wasn’t so much the problem, but the implementation of it was.

When the Court approved the prior, broader form of upstream collection in 2011, it did so partly in reliance on the government's assertion that, due to some communications of foreign intelligence interest could only be acquired by such means. See October 3, 2011 Memorandum Opinion at 31 & n. 27, 43, 57-58. This Opinion and Order does not question the propriety of acquiring "abouts" communications and MCTs as approved by the Court since 2011, subject to the rigorous safeguards imposed on such acquisitions. The concerns raised in the current matters stem from NSA's failure to adhere fully to those safeguards.

If problems arise because NSA has failed, over 6 years, to adhere to safeguards imposed because NSA hadn’t adhered to the rules for the 3 years before that, which came after NSA had just blown off the law itself for the 6 years before that, what basis is there to believe they’ll adhere to the safeguards she herself imposed, particularly given that unlike her predecessors in similar moments, she gave up any leverage she had over the agency?

The other thing Collyer does differently from her predecessors is that she lets NSA keep data that arose from violations.

Certain records derived from upstream Internet communications (many of which have been evaluated and found to meet retention standards) will be retained by NSA, even though the underlying raw Internet transactions from which they are derived might be subject to destruction. These records include serialized intelligence reports and evaluated and minimized traffic disseminations, completed transcripts and transcriptions of Internet transactions, [redacted] information used to support Section 702 taskings and FISA applications to this Court, and [redacted].

If “many” of these communications have been found to meet retention standards, it suggests that “some” have not. Meaning they should never have been retained in the first place. Yet Collyer lets an entire stream of reporting — and the Section 702 taskings that arise from that stream of reporting — remain unrecalled. Effectively, even while issuing stern warning after stern warning, by letting NSA keep this stuff, she is letting the agency commit violations for years without any disincentive.

Now, perhaps Collyer is availing herself of the exception offered in Section 301 of the USA Freedom Act, which permits the government to retain illegally obtained material if it is corrected by subsequent minimization procedures.

Exception.–If the Government corrects any deficiency identified by the order of the Court under subparagraph (B), the Court may permit the use or disclosure of information obtained before the date of the correction under such minimization procedures as the Court may approve for purposes of this clause.

Except that she doesn’t cite that provision, nor is there any evidence deficiencies have been corrected.

Which should mean, especially given the way Collyer depends on the prior opinions of Bates and Hogan, that she should likewise have relied on their practice of treating this as a potential violation of 50 USC 1809(a) to ensure the harm to Americans doesn't persist. She did no such thing, basically sanctioning the illegal use of back door searches to spy on Americans.

Up until this opinion, I was generally willing to argue for the efficacy of the FISC (even while arguing the job could and should be devolved to district courts for more rigorous testing of the law). But not now. This opinion discredits the entire court.

Last April, when Collyer became presiding FISC judge, I pointed to what I considered Rosemary Collyer's worst FISA decision, which was actually a District Court opinion that permitted the NSA to keep aspects of its upstream problems secret from EFF, which is suing over those same issues. I predicted then that, "I fear she will be a crummy presiding judge, making the FISC worse than it already is."

In my opinion — as a civil libertarian who has been willing to defend the FISC in the past — with this opinion she has done real damage to any credibility or legitimacy the FISC has.

Update: I have corrected "latter" to "former" above in describing which choice the Administration picked, h/t CS.

The Documents

Here’s what I Con the Record released.

January 7, 2016 IG Report

This heavily redacted report describes a review of NSA's compliance with 704/705b of Title VII of FISA, the authority NSA uses to spy on Americans who are located overseas (see my report on the 704 problems here). It was conducted from March through August 2015 and reviewed data from January through March 2015. It basically showed there were no compliance mechanisms in place for 704/705b, and NSA couldn't even reliably identify the queries that had been conducted under the authority. This report is relevant to the reauthorization, because Americans targeted in individual FISA orders are approved (and almost certainly tasked) by default for 702 back door searches. Though the report was obviously done well before the 702 certifications were submitted on September 26, it was not noticed to FISC until days before the court would otherwise have approved the certifications, in conjunction with the upstream problems.

September 26, 2016 702 Certification Package 

ICTR released much, if not all, of the materials submitted for 702 reauthorization in September 2016. The package includes:

Certification cover filing: This is basically the application, which the metadata reveals is actually two parts merged. It describes the changes to the certificates from the past year, most notably a request to share raw 702 data directly from NSA or FBI to NCTC, some tweaks to the FBI targeting and minimization procedures, and permission for NSA, FBI, and CIA to deviate from minimization procedures to develop a count of how many US persons get collected under 702.

The report also describes how the government has fulfilled reporting requirements imposed in 2015. Several of the reports pertain to destroying data it should not have had. The most interesting one is the report on how many criminal queries of 702 data FBI does that result in the retrieval and review of US person data; as I note in this post, the FBI really didn’t (and couldn’t, and can’t, given the oversight regime currently in place) comply with the intent of the reporting requirement.

Very importantly: this application did not include any changes to upstream collection, in large part because NSA did not, in its initial application, tell FISC (more specifically, Chief Judge Rosemary Collyer) about the problems it had always had preventing queries of upstream data. In NSA's April statement on ending upstream "about" collection, it boasts, "Although the incidents were not willful, NSA was required to, and did, report them to both Congress and the FISC." But that's a load of horse manure: in fact, NSA and DOJ sat on this information for months. And even with this disclosure, because the government didn't release the later application that did describe those changes, we don't actually get to see the government's description of the problems; we only get to see Collyer's (I believe mis-) understanding of them.

Procedures and certifications accepted: The September 26 materials also include the targeting and minimization procedures that were accepted in the form in which they were submitted on that date. These include:

Procedures and certificates not accepted: The materials include the documents that the government would have to change before approval on April 26. These include,

Note, I include the latter two items because I believe they would have had to be resubmitted on March 30, 2017 with the updated NSA documents and the opinion makes clear a new DIRNSA affidavit was submitted (see footnote 10), but the release doesn’t give us those. I have mild interest in that, not least because the AG/DNI one would be the first big certification to FISC signed by Jeff Sessions and Dan Coats.

October 26, 2016 Extension

The October 26 extension of 2015’s 702 certificates is interesting primarily for its revelation that the government waited until October 24, 2016 to disclose problems that had been simmering since 2013.

March 30, 2017 Submissions

The release includes two of what I suspect are at least four items submitted on March 30, which are:

April 26, 2017 Opinion

This is the opinion that reauthorized 702, with the now-restricted upstream search component. My comments above largely lay out the problems with it.

April 11, 2017 ACLU Release

I Con the Record also released the FOIAed documents released earlier in April to ACLU, which are on their website in searchable form here. I still have to finish my analysis of that (which includes new details about how the NSA was breaking the law in 2011), but these posts cover some of those files and are relevant to these 702 changes:

Importantly, the ACLU documents as a whole reveal what kinds of US persons are approved for back door searches at NSA (largely, but not exclusively, Americans for whom an individual FISA order has already been approved, importantly including 704 targets, as well as more urgent terrorist targets), and reveal that one reason NSA was able to shut down the PRTT metadata dragnet in 2011 was because John Bates had permitted them to query the metadata from upstream collection.

Not included

Given the point I noted above — that the application submitted on September 26 did not address the problem with upstream surveillance and that we only get to see Collyer’s understanding of it — I wanted to capture the documents that should or do exist that we haven’t seen.

  • October 26, 2016 Preliminary and Supplemental Notice of Compliance Incidents Regarding the Querying of Section 702-Acquired Data
  • January 3, 2017: Supplemental Notice of Compliance Incidents Regarding the Querying of Section 702-Acquired Data
  • NSA Compliance Officer (OCO) review covering April through December 2015
  • OCO review covering April through July of 2016
  • IG Review covering first quarter of 2016 (22)
  • January 27, 2017: Letter In re: DNI/AG 702(g) Certifications asking for another extension
  • January 27, 2017: Order extending 2015 certifications (and noting concern with “important safeguards for interests protected by the Fourth Amendment”)
  • March 30, 2017: Amendment to [Certificates]; includes (or is) second explanatory memo, referred to as “March 30, 2017 Memorandum” in Collyer’s opinion; this would include a description of the decision to shut down about searches
  • March 30, 2017 AG/DNI Certification (?)
  • March 30, 2017 DIRNSA Certification
  • April 7, 2017 preliminary notice

Other Relevant Documents

Because they’re important to this analysis and get cited extensively in Collyer’s opinion, I’m including:

Timeline

November 30, 2013: Latest possible date at which upstream search problems identified

October 2014: Semiannual Report shows problems with upstream searches during period from June 1, 2013 – November 30, 2013

October 2014: SIGINT Compliance (SV) begins helping NSD review 704/705b compliance

June 2015: Semiannual Report shows problems with upstream searches during period from December 1, 2013 – May 31, 2014

December 18, 2015: Quarterly Report to the FISC Concerning Compliance Matters Under Section 702 of FISA

January 7, 2016: IG Report on controls over §§704/705b released

January 26, 2016: Discovery of error in upstream collection

March 9, 2016: FBI releases raw data

March 18, 2016: Quarterly Report to the FISC Concerning Compliance Matters Under Section 702 of FISA

May and June, 2016: Discovery of querying problem dating back to 2012

May 17, 2016: Opinion relating to improper retention

June 17, 2016: Quarterly Report to the FISC Concerning Compliance Matters Under Section 702 of FISA

August 24, 2016: Pre-tasking review update

September 16, 2016: Quarterly Report to the FISC Concerning Compliance Matters Under Section 702 of FISA

September 26, 2016: Submission of certifications

October 4, 2016: Hearing on compliance issues

October 24, 2016: Notice of compliance errors

October 26, 2016: Formal notice, with hearing; FISC extends the 2015 certifications to January 31, 2017

November 5, 2016: Date on which 2015 certificates would have expired without extension

December 15, 2016: James Clapper approves EO 12333 Sharing Procedures

December 16, 2016: Quarterly Report to the FISC Concerning Compliance Matters Under Section 702 of FISA

December 29, 2016: Government plans to deal with indefinite retention of data on FBI systems

January 3, 2017: DOJ provides supplemental report on compliance programs; Loretta Lynch approves new EO 12333 Sharing Procedures

January 27, 2017: DOJ informs FISC they won’t be able to fully clarify before January 31 expiration, ask for extension to May 26; FISC extends to April 28

January 31, 2017: First extension date for 2015 certificates

March 17, 2017: Quarterly Report to the FISC Concerning Compliance Matters Under Section 702 of FISA; Probable halt of upstream "about" collection

March 30, 2017: Submission of amended NSA certifications

April 7, 2017: Preliminary notice of more query violations

April 28, 2017: Second extension date for 2015 certificates

May 26, 2017: Requested second extension date for 2015 certificates

June 2, 2017: Deadline for report on outstanding issues


FBI Rewrote the Backdoor Search Query Requirement

In her opinion approving the April 26 certifications (which may be one of the most unimpressive FISC opinions I've read), Rosemary Collyer relied heavily on the 2015 authorization in finding this year's certifications constitutional. As such, she refers to Thomas Hogan's imposition of a reporting requirement for any back door searches "in which FBI personnel receive and review Section 702-acquired information that the FBI identifies as concerning a United States person in response to a query that is not designed to find and extract foreign intelligence information."

She then describes the one incident reported this year: basically, an Agent seeing an email in which someone referred to violence toward children. The Agent searched on the person who allegedly committed the violence and the names of the children, only to find the same email again. The Agent reported the suspected child abuse to the local child protective services.

But, she reveals, no one reported this until DOJ’s National Security Division asked about such reporting during their review.

The Court notes, however, that the FBI did not identify those queries as responsive to the Court’s reporting requirement until NSD asked whether any such queries had been made in the course of gathering information about the Section I.F dissemination. Notice at 2. The Court is carrying forward this reporting requirement and expects the government to take further steps to ensure compliance with it.

There are several reasons this is troublesome.

First, the incident would have gone unreported had someone not felt obliged to be honest when asked specifically about it (ODNI/DOJ don't do reviews in all field offices, so not everyone will get asked).

Moreover, the incident got reported not because it was "receive[d] and reviewe[d]," but because it was disseminated. So there may be a great many back door searches whose results get received and reviewed but, because they don't constitute evidence of a crime, never get disseminated and so never generate the consequent paper trail.

Finally, this means certain kinds of criminal searches won't be reported: those where FBI gets a criminal tip, then looks in its 702 data, only to find something it might use to coerce informants. Information used to coerce informants would suddenly become foreign intelligence information, and so would no longer be subject to the reporting requirement.

To meet the actual requirement from FISC — rather than the one it is willing to comply with — FBI needs to dramatically restructure its compliance with this reporting requirement: measure when a search is done for criminal purposes, and then, as soon as an agent conducts that review, notice it to the FISC.

Of course, that would require precisely the kind of tracking the FBI has refused to do. Their arbitrary rewriting of this requirement demonstrates why.

Update: In the application for certificates submitted on September 26, 2016, DOJ said this about its back door searches:

In a letter filed on December 4, 2015, the government noted that there is no automated way for the FBI to track whether a query is run solely for a foreign intelligence purpose, to extract evidence of a crime, or both. However, the December 4, 2015 letter detailed the processes the FBI put in place to attempt to identify those queries that are run in FBI systems containing raw 702-acquired information after December 4, 2015, that are designed to extract evidence of a crime. In addition, the December 4, 2015 letter explained that FBI had issued guidance to its personnel about this reporting requirement and the process to enable FBI to centrally track such scenarios and report any such queries to NSD that would fall under the reporting requirement described above. Additionally, NSD conducts minimization reviews in multiple FBI field offices each year. As part of these minimization reviews, NSD and FBI National Security Law Branch have emphasized the above requirements and processes during field office training. Further, during the minimization reviews, NSD audits a sample of queries performed by FBI personnel in the databases storing raw FISA-acquired information, including raw section 702-acquired information. Since December 2015, NSD has reviewed these queries to determine if any such queries were conducted solely for the purpose of retaining evidence of a crime. If such a query was conducted, NSD would seek additional information from the relevant FBI personnel as to whether FBI personnel received and reviewed section 702-acquired information of or concerning a U.S. person in response to such a query. Since the above processes were put in place in December 2015, FBI and NSD have not identified any instance in which FBI personnel have received and reviewed section 702-acquired information of or concerning a United States person in response to a query that is not designed to find and extract foreign intelligence information.

There are several key details here.

First, DOJ reported no queries on September 26, which means the query must have happened after that (though it’s not clear whether Collyer’s opinion would reflect the most recent reporting).

It’s also clear DOJ will only find these in spot checks. As DOJ makes clear here (and as was misrepresented at a recent hearing), NSD and ODNI don’t actually visit every FBI office (though I’m sure they hit SDNY, EDNY, DC, EDVA, MD, and NDCA routinely, which are the biggest national security offices). That means there’s not going to be a chance to find many possible queries.

There’s also some fuzzy language here. I’m particularly intrigued by this double usage of “FBI personnel,” as if someone from outside of FBI does review this, perhaps on an analytical contract.

If such a query was conducted, NSD would seek additional information from the relevant FBI personnel as to whether FBI personnel received and reviewed section 702-acquired information of or concerning a U.S. person in response to such a query.

Or perhaps FBI calls up NSA and asks them to access the same content?

Finally, it's clear the definition FBI is using, with respect to "foreign intelligence, crime, or both," permits generalized queries (in part to see if there's intelligence to use to coerce someone to be an informant) that could serve either purpose. Such an approach cannot measure how much more often people who are more likely to talk with a 702 target — like Muslims or Chinese-Americans — get pursued for crimes after a longer assessment decides against using them as informants.

Which is another way of saying that this metric is not measuring what Judge Hogan wanted it to measure.


I Con the Record Transparency Bingo (1): Only One Positive Hit on a Criminal Search

As we speak, a bunch of privacy experts are on Twitter trying to make sense of I Con the Record's transparency report, which is a testament to the fact that the Transparency Report obfuscates as much as it makes transparent (and to the degree to which you need to have read a great many other public reports to understand these things).

So I’m going to deal with the obvious errors I’m seeing made as I see them, then will do a more comprehensive working thread.

The first confusion I’m seeing pertains to this factoid showing how many US person queries designed to return criminal information returned a positive hit.

First, it is not the case that this number, 1, means the FBI affirmatively searched a dedicated FISA 702 database for criminal data and only found data once. The FISA 702 data, the traditional FISA data, and other data are all mixed in together. What this means is that when the FBI searched databases including that FISA 702 data and other material while looking for information on a criminal case, on just one occasion did an agent get a positive hit showing evidence of a non-national security crime that landed in the database via Section 702 and no other authority (some amount of this information will come into the database via multiple authorities), then obtain that information (whether via their own 702 clearance or by asking a buddy cleared into 702), and review it.
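
To restate those conditions explicitly (this is my own formalization of the counting rule as described above, not anything FBI has published), all of the following had to be true for the count to tick up to 1.

```python
# Illustrative only: the conjunction of conditions that, as described above,
# a query had to satisfy to count toward the "1" in the transparency report.
def counts_toward_transparency_number(
    query_designed_to_return_criminal_evidence: bool,  # not a foreign intelligence query
    hit_landed_via_702_only: bool,                      # not also acquired under another authority
    agent_received_the_data: bool,                      # actually obtained the underlying content
    agent_reviewed_the_data: bool,                      # and looked at it
) -> bool:
    return (query_designed_to_return_criminal_evidence
            and hit_landed_via_702_only
            and agent_received_the_data
            and agent_reviewed_the_data)

# A hit that is flagged but never pulled up and reviewed does not count.
print(counts_toward_transparency_number(True, True, False, False))  # False
```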

So right off the bat, there are some things this number doesn't include: positive hits on criminal queries where the agent gets the hit but doesn't actually receive and review the underlying data. One reason they might get a positive hit they don't review is if a non-cleared person doesn't go through the effort of getting a FISA-cleared person to access it. But as I pointed out when the opinion ordering this count got released, there are other possibilities.

FBI's querying system can be set such that, even if someone has access to 702 data, they can run a query that will flag a hit in 702 data but won't actually show the data underlying that positive return. This provides one way for 702-cleared people to learn that such information exists in the collection and, if they want the data without having to report it, to obtain it another way. It is distinctly possible that once NSA shares EO 12333 data directly with FBI, for example, the same data will be redundantly available from that source in such a way that would not need to be reported to FISC. (NSA used this arbitrage method after the 2009 problems with PATRIOT-authorized database collections.)

Furthermore, this will only count a positive hit if the Agent is making an exclusively criminal search. Hogan’s opinion and (we now know from some recently liberated documents) the underlying discussion didn’t deal with the full scope of queries done for assessment reasons in the name of national security, such as profiling various ethnic communities or more generally searching on leads identified via national security mapping. Those queries would count as national security queries, but a big point of doing them would be to find derogatory information, including evidence of criminal behavior, to use to recruit informants.

Finally, consider that such reporting depends on the meaning of foreign intelligence information as defined under the Attorney General Guidelines.

FOREIGN INTELLIGENCE: information relating to the capabilities, intentions, or activities of foreign governments or elements thereof, foreign organizations or foreign persons, or international terrorists.

It would be relatively easy for FBI to decide that any conversation with a foreign person constituted foreign intelligence, and in so doing count even queries on US persons designed to identify criminal evidence as foreign intelligence queries, and therefore exempt from the reporting guidance. Certainly, the kinds of queries that might lead the FBI to profile St. Paul's Somali community could be considered a measure of Somali activities in that community. Similarly, FBI might claim the search for informants who know people in a mosque with close ties overseas could be treated as the pursuit of information on foreign activities in US mosques.

As I understand it, the reporting to Congress on this has been a bit more circumspect than members might have liked. That means the other details FISC judge Thomas Hogan required about this one positive hit — what query resulted in a positive hit, what kind of investigative action it led to, and why FBI believes it to fall under minimization procedures — aren’t as sexy as this number, 1.

Prior to this positive hit, the FBI had always assured oversight authorities that the possibility that Section 702 data would result in criminal information was “theoretical.”

Even as a factoid of limited meaning, the number does show that possibility is no longer theoretical.

Since September 20, 2012, FBI Has Been Permitted to Share FISA-Derived Hacking Information with Internet Service Providers

As I noted, yesterday Reuters reported that in 2015, Yahoo had been asked to scan its incoming email for certain strings. Since that time, Yahoo has issued a non-denial denial saying the story is “misleading” (but not wrong) because the “mail scanning described in the article does not exist on our systems.”

As I suggested yesterday, I think this most likely pertains to a cybersecurity scan of some sort, in part because FISC precedents would seem to prohibit most other uses of this. I’ve addressed a lot of issues pertaining to the use of Section 702 for cybersecurity purposes here; note that FISC might approve something more exotic under a traditional warrant, especially if Yahoo were asked to scan for some closely related signatures.

If you haven’t already, you should read my piece on why I think CISA provided the government with capabilities it couldn’t get from a 702 cyber certificate, which may explain why Yahoo’s emphasis on the present tense is of particular interest. I think it quite likely that tech companies now conduct scans using signatures from the government, voluntarily, under CISA. It’s in their best interest to ID whether their users get hacked, after all.
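Mechanically, such a scan need not be exotic: it amounts to matching inbound messages against a list of indicator strings. Here is a minimal sketch under that assumption; the signatures, message format, and handling step are hypothetical, and nothing here describes what Yahoo actually built.

```python
# Hypothetical sketch of scanning incoming mail for signature strings,
# roughly the kind of voluntary CISA-style scan described above.
# The signatures and the handling step are illustrative assumptions.

SIGNATURES = [
    b"malware-c2.example.net",       # hypothetical command-and-control domain
    b"X-Implant-Version: 2.1",       # hypothetical header left by an implant
]

def scan_message(raw_message: bytes) -> list[bytes]:
    """Return the signatures found in a single raw message."""
    return [sig for sig in SIGNATURES if sig in raw_message]

def process_inbound(messages):
    """Scan each inbound message; yield (index, matches) for any hits."""
    for i, msg in enumerate(messages):
        matches = scan_message(msg)
        if matches:
            # What happens next (quarantine, user notification, sharing the
            # indicator hit) is exactly what the Yahoo story leaves unclear.
            yield i, matches

if __name__ == "__main__":
    inbound = [
        b"Subject: invoice\r\n\r\nplease see attachment",
        b"Subject: update\r\nX-Implant-Version: 2.1\r\n\r\nbeacon to malware-c2.example.net",
    ]
    for idx, hits in process_inbound(inbound):
        print(f"message {idx} matched {len(hits)} signature(s)")
```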

But in the meantime, I wanted to point out this language in the 2015 FBI minimization procedures, which, according to this Thomas Hogan opinion (see footnote 19), has appeared in those procedures in some form since September 20, 2012, during a period when FBI badly wanted a 702 cyber certificate.

The FBI may disseminate FISA-acquired information that … is evidence of a crime and that it reasonably believes may assist in the mitigation or prevention of computer intrusions or attacks to private entities or individuals that have been or are at risk of being victimized by such intrusions or attacks, or to private entities or individuals (such as Internet security companies and Internet Service Providers) capable of providing assistance in mitigating or preventing such intrusions or attacks. Wherever reasonably practicable, such dissemination should not include United States person identifying information unless the FBI reasonably believes it is necessary to enable the recipient to assist in the mitigation or prevention of computer intrusions or attacks. [my emphasis]

This is not surprising language: it simply permits the FBI (but not, according to my read of the minimization procedures, NSA) to share cyber signatures discovered using FISA with private sector companies, either to help them protect themselves or because private entities (specifically including ISPs) might provide assistance in mitigating attacks.
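Read as a decision rule, the provision boils down to a couple of conditions plus a default of stripping US person identifiers. Here is a hypothetical sketch of that logic; the field names and recipient categories are illustrative assumptions, not the actual procedures.

```python
# Hypothetical sketch of the dissemination provision quoted above, written
# as a decision function. Field names and categories are assumptions.

def disseminate_to_private_entity(info, recipient, usp_id_needed_for_mitigation=False):
    """Return the shareable version of FISA-acquired info, or None."""
    if not info["evidence_of_crime"]:
        return None
    if not info["may_assist_mitigation_or_prevention"]:
        return None
    # Recipient must be a victim (actual or at risk) or an entity capable of
    # helping mitigate, e.g. an Internet security company or an ISP.
    if recipient["category"] not in {"victim", "at_risk", "capable_of_assisting"}:
        return None

    shared = dict(info)
    # Default: strip US person identifying information unless FBI reasonably
    # believes the recipient needs it to help mitigate the attack.
    if not usp_id_needed_for_mitigation:
        shared.pop("us_person_identifiers", None)
    return shared


signature_report = {
    "evidence_of_crime": True,
    "may_assist_mitigation_or_prevention": True,
    "indicator": "hypothetical-c2.example.net",
    "us_person_identifiers": ["victim@example.com"],
}
print(disseminate_to_private_entity(signature_report, {"category": "capable_of_assisting"}))
```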

To be sure, the language falls far short of permitting FBI to demand that PRISM providers like Yahoo use the signatures to scan their own networks.

But it’s worth noting that Thomas Hogan approved a version of this language (extending permitted sharing even to physical infrastructure and kiddie porn) in 2014. He remained presiding FISA judge in 2015, and as such would probably have reviewed any exotic or new programmatic requests. So it would not be surprising if Hogan were to approve a traditional FISA order permitting FBI (just as one possible example) to ask for evidence on a foreign-used cyber signature. Sharing a signature with Yahoo — which was already permitted under minimization procedures — and asking for any results of a scan using it would not be a big stretch.

There’s one more detail worth remembering: the last time Yahoo challenged a PRISM order, back in 2007, there was significant mission creep in the demands the government made of Yahoo. In August 2007, when Yahoo was initially discussing compliance (but before it got its first orders in November 2007), the requests were fairly predictable: by my guess, just email content. But by the time Yahoo started discussing actual compliance in early 2008, the requests had expanded, apparently to include all of Yahoo’s services (communication services, information services, storage services), probably even including information internal to Yahoo on its users. Ultimately, already in 2008, Yahoo was being asked to provide nine different things on users. Given Yahoo’s unique visibility into the details of this mission creep, its lawyers may have reason to believe that a request for packet sniffing or something similar would not be far beyond what FISCR approved way back in 2008.

The Government Uses FISCR Fast Track to Put Down Judges’ Rebellion, Expand Content Collection

Since it was first proposed, I’ve been warning (not once but twice!) about the FISCR Fast Track, a part of the USA Freedom Act that would permit the government to immediately ask the FISA Court of Review to review a FISC decision. The idea was sold as a way to get a more senior court to review dodgy FISC decisions. But as I noted, it was also an easy way for the government to use the secretive FISC system to get a circuit level decision that might preempt traditional court decisions they didn’t like (I feared they might use FISCR to invalidate the Second Circuit decision finding the phone dragnet to be unlawful, for example).

Sure enough, that’s how it got used in its first incarnation — not just to confirm that the FISC can operate by different rules than criminal courts, but also to put down a judges’ rebellion.

As I noted back in 2014, the FISC has long permitted the government to collect Post Cut Through Dialed Digits using FISA pen registers, though it requires the government to minimize anything counted as content after collection. PCTDD are the numbers you dial after connecting a phone call — perhaps to get a particular extension, enter a password, or transfer money. The FBI is not supposed to do this at the criminal level, but can do so under FISA provided it doesn’t use the “content” (like the banking numbers) afterwards. FISC reviewed that issue in 2006 and 2009 (after magistrates in the criminal context deemed PCTDD to be content that was impermissible).
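To make the distinction concrete, here is a minimal sketch of splitting a dialed-digit string into the digits that route the call and the post cut-through digits; the call-record format is a hypothetical simplification, not how pen register data is actually delivered.

```python
# Hypothetical sketch of separating routing digits from post cut-through
# dialed digits (PCTDD). The record format is an illustrative assumption.

def split_dialed_digits(call):
    """Return (routing_digits, pctdd) for a simplified call record.

    Everything dialed before the call is answered routes the call (classic
    pen register material); everything after can be an extension, a password,
    or an account number -- i.e., content that must be minimized.
    """
    cut = call["connect_index"]        # position where the call was answered
    digits = call["dialed_digits"]
    return digits[:cut], digits[cut:]


call = {
    "dialed_digits": "18005551234" + "4321" + "098765",  # number + PIN + account
    "connect_index": 11,
}
routing, pctdd = split_dialed_digits(call)
print(f"routing digits: {routing}")
print(f"post cut-through digits (potential content): {pctdd}")
```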

At last year’s semiannual FISC judges’ conference, some judges raised concerns about the practice and decided they needed further briefing on it. So when approving a standing Pen Register, the FISC told the government it needed further briefing on the issue.

The government didn’t deal with it for three months, until just as it was submitting its next application. At that point, there was not enough time to brief the issue at the FISC level, which gave then-presiding judge Thomas Hogan the opportunity to approve the PRTT renewal and kick the PCTDD issue to the FISCR, with an amicus.

This minimized the adversarial input, but put the question where it could carry the weight of a circuit court.

Importantly, when Hogan kicked the issue upstairs, he did not specify that this legal issue applies only to phone PRTTs.

At the FISCR, Mark Zwillinger got appointed as an amicus. He saw the same problem I did: while the treatment of phone PCTDD is bad (though, if properly minimized, not horrible), it becomes horrible once you extend it to the Internet.

The FISCR didn’t much care. It found that collecting content using a PRTT, then promising not to use it except to protect national security (and a few other exceptions to the rule that the government has to ask FISC permission to use this stuff), was cool.

Along the way, the FISCR laid out several other precedents that will have really dangerous implications. One is that content transmitted to a provider may not count as content.

This is probably the issue that made the bulk PRTT dragnet illegal in the first place (and created problems when the government resumed it in 2010). Now, the problem of collecting content in packets is eliminated!

Along with this, the FISCR stretched the definition of “incidental” to cover collection that would otherwise require a higher standard of evidence.

Thus, it becomes permissible to collect, under a standard that doesn’t require probable cause, something that does require it, so long as the take is “minimized,” which doesn’t always mean it isn’t used.

Finally, FISCR certified the redefinition of “minimization” that FISC has long adopted (and which is crucial in some other programs): collecting content, then not using it (aside from exceptions that are far too broad), is all good.
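As a hypothetical sketch of what that version of “minimization” amounts to in practice: the content is retained, and a use-gate with exceptions decides when it comes back out. The exception list below is an illustrative assumption, not the actual procedures.

```python
# Hypothetical sketch of "minimization" as a use-gate over retained content.
# The exception list is an illustrative assumption, not the real procedures.

RETAINED_CONTENT = []          # collected content is kept, not discarded

USE_EXCEPTIONS = {
    "protect national security",
    "prevent serious bodily harm",
    "evidence of a crime",     # the kind of broad exception criticized above
}

def collect(item):
    """'Minimized' collection: the content still goes into the database."""
    RETAINED_CONTENT.append(item)

def may_use(purpose: str) -> bool:
    """Use is barred by default, but the exceptions swallow a lot."""
    return purpose in USE_EXCEPTIONS

collect("account number dialed after cut-through")
print(may_use("protect national security"))   # True
print(may_use("ordinary tax investigation"))  # False, at least in this sketch
```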

In other words, FISCR not only approved the narrow application at issue (using calling card data but not bank data and passwords, except to protect national security); it also approved a bunch of other things the government is going to turn around and use to resume certain programs that were long ago found problematic.

I don’t even hate to say I told you so anymore. I told privacy people this (including someone personally involved in this issue) and was told I was being unduly worried. This is, frankly, even worse than I expected (and of course it has been released publicly, so the FBI can start chipping away at criminal protections too).

Yet another time my concerns have been not only borne out, but proven to be insufficiently cynical.

The US Person Back Door Search Number DOJ Could Publish Immediately

The Senate Judiciary Committee had a first public hearing on Section 702 today, about which I’ll have several posts.

One piece of good news, however, is that both some of the witnesses (Liza Goitein and David Medine; Ken Wainstein, Matt Olsen, and Rachel Brand were the other witnesses) and some of the Senators supported more transparency, including requiring the FBI to provide a count of how many US person queries of 702-collected data it conducts, as well as a count of how many US persons get sucked up by Section 702 more generally.

Liza Goitein presented a very reasonable view of the efforts the privacy community is making to work with the government to come up with reasonable counts.

But no one mentioned the very easy count of US person back door searches that FBI could provide today.

As I noted when this was released, as part of last year’s 702 Certification process, Judge Thomas Hogan required FBI to report every time FBI personnel review 702 data returned by a US person query that doesn’t pertain to national security.

[Hogan] imposed a requirement that FBI “submit in writing a report concerning each instance … in which FBI personnel receive and review Section 702-acquired information that the FBI identifies as concerning a United States person in response to a query that is not designed to find and extract foreign intelligence information.” Such reporting, if required indefinitely, is worthwhile — and should have been required by Congress under USA Freedom Act.

But FBI can and presumably will game this information in two ways. First, FBI’s querying system can be set such that, even if someone has access to 702 data, they can run a query that will flag a hit in 702 data but won’t actually show the data underlying that positive return. This provides a way for 702-cleared people to learn that such information exists in the collection and, if they want the data without having to report it, to obtain it another way. It is distinctly possible that once NSA shares EO 12333 data directly with FBI, for example, the same data will be redundantly available from that stream in a way that would not need to be reported to FISC. (NSA used this arbitrage method after the 2009 problems with PATRIOT-authorized database collections.)

Plus, such reporting depends on the meaning of foreign intelligence information as defined under the Attorney General Guidelines.

FOREIGN INTELLIGENCE: information relating to the capabilities, intentions, or activities of foreign governments or elements thereof, foreign organizations or foreign persons, or international terrorists.

It would be relatively easy for FBI to decide that any conversation with a foreign person constituted foreign intelligence, and in so doing count even queries on US persons to identify criminal evidence as foreign intelligence information and therefore exempt from the reporting guidance. Certainly, the kinds of queries that might lead the FBI to profile St. Paul’s Somali community could be considered a measure of Somali activities in that community. Similarly, FBI might claim the search for informants who know those in a mosque with close ties overseas could be treated as the pursuit of information on foreign activities in US mosques.

Hogan imposed a worthwhile new reporting requirement. But that’s still a very far cry from conducting a fair assessment of whether FBI’s back door searches are constitutional.

This requirement went into effect on December 4, 2015, and Hogan required updates on such reporting by January 27, 2016, so FBI is already reporting on this.

It would take minimal effort for ODNI to release how many of these notices got sent to FISC; it could do so quarterly so we wouldn’t learn too much from the process. Maybe there wouldn’t be any notices, though for a variety of reasons I doubt it. Maybe, as I noted, the number is too fake to be useful.

But it is a number, one FBI is already required to report. So they should start reporting it.

Rosemary Collyer’s Worst FISA Decision

In addition to adding former National Security Division head David Kris as an amicus (I’ll have more to say on this), the FISA Court announced this week that Rosemary Collyer will become presiding judge on May 19, serving a four-year term.

Collyer was the obvious choice, being the next-in-line judge from DC. But I fear she will be a crummy presiding judge, making the FISC worse than it already is.

Collyer has a history of rulings, some of them legally dubious, backing secrecy and executive power. They include:

2011: Protecting redactions in the Torture OPR Report

2014: Ruling the mosaic theory did not yet make the phone dragnet illegal (in this case she chose to release her opinion)

2014: Erroneously freelance researching the Awlaki execution to justify throwing out his family’s wrongful death suit

2015: Serially helping the Administration hide drone details, even after remand from the DC Circuit

I actually think her mosaic theory opinion from 2014 is one of her (and FISC’s) less bad opinions of this ilk.

The Collyer opinion I consider most troubling, though, is not a FISC decision at all, but rather a ruling from last year in an EFF FOIA suit. Either Collyer let the government hide something that didn’t need to be hidden, or the government exploited EFF’s confusion to hide the fact that the Internet dragnet and the Upstream content programs are conducted by the same technical means, a fact that would likely greatly help EFF’s effort to show in its Jewell suit that all Americans were unlawfully spied on.

Back in August 2013, EFF’s Nate Cardozo FOIAed information on the redacted opinion referred to in this footnote from John Bates’ October 3, 2011 opinion ruling that some of NSA’s upstream collection was illegal.

Here’s how Cardozo described his FOIA request (these documents are all attached as appendices to this declaration).

Accordingly, EFF hereby requests the following records:

1. The “separate order” or orders, as described in footnote 15 of the October 3 Opinion quoted above, in which the Foreign Intelligence Surveillance Court “address[ed] Section 1809(a) and related issues”; and,

2. The case, order, or opinion whose citation was redacted in footnote 15 of the October 3 Opinion and described as “concluding that Section 1809(a)(2) precluded the Court from approving the government’s proposed use of, among other things, certain data acquired by NSA without statutory authority through its ‘upstream collection.’”

Request 2 was the only thing at issue in Collyer’s ruling. By my read, it asked for the entire opinion whose citation was redacted, or at least identification of the case.

EFF, of course, is particularly interested in upstream collection because it’s at the core of its years-long Jewell lawsuit. An opinion ruling that upstream collection constituted unlawful collection sure would help that case.

In her opinion, Collyer made a point of defining “upstream” surveillance by linking to the 2012 John Bates opinion resolving the 2011 upstream issues (as well as to Wikipedia!), rather than to the footnote he used to describe it in his October 3, 2011 opinion.

The opinion in question, referred to here as the Section 1809 Opinion, held that 50 U.S.C. § 1809(a)(2) precluded the FISC from approving the Government’s proposed use of certain data acquired by the National Security Agency (NSA) without statutory authority through “Upstream” collection. 3

3 “Upstream” collection refers to the acquisition of Internet communications as they transit the “internet backbone,” i.e., principal data routes via internet cables and switches of U.S. internet service providers. See [Caption Redacted], 2012 WL 9189263, *1 (FISC Aug. 24, 2012); see also https://en.wikipedia.org/wiki/Upstream_collection (last visited Oct. 19, 2015); https://en.wikipedia.org/wiki/Internet_backbone (last visited Oct. 19, 2015).

As it was, Collyer characterized the source of upstream surveillance as ISPs rather than telecoms, a detail that was redacted in the opinion she cited. But by citing that opinion rather than Bates’ October 3, 2011 opinion, she excluded an entirely redacted sentence from the footnote Bates used to describe upstream collection, which in context may have said a little more about the underlying opinion.

Having thus laid out the case, Collyer deferred to NSA declarant David Sherman’s judgment — without conducting a review of the document — that releasing the document would reveal details about the implementation of upstream surveillance.

Specifically, the release of the redacted information would disclose sensitive operational details associated with NSA’s “Upstream” collection capability. While certain information regarding NSA’s “Upstream” collection capability has been declassified and publicly disclosed, certain other information regarding the capability remains currently and properly classified. The redacted information would reveal specific details regarding the application and implementation of the “Upstream” collection capability that have not been publicly disclosed. Revealing the specific means and methodology by which certain types of SIGINT collections are accomplished could allow adversaries to develop countermeasures to frustrate NSA’s collection of information crucial to national security. Disclosure of this information could reasonably be expected to cause exceptionally grave damage to the national security.

[snip]

With respect to the FISC opinion withheld in full, it is my judgment that any information in the [Section 1809 Opinion] is classified in the context of this case because it can reasonably be expected to reveal classified national security information concerning particular intelligence methods, given the nature of the document and the information that has already been released. . . . In these circumstances, the disclosure of even seemingly mundane portions of this FISC opinion would reveal particular instances in which the “Upstream” collection program was used and could reasonably be expected to encourage sophisticated adversaries to adopt countermeasures that may deprive the United States of critical intelligence. [my emphasis]

Collyer found NSA had properly withheld the document as classified information the release of which would cause “grave damage to national security.”
