
Center for Democracy and Technology’s James Dempsey on “the Wall,” Then and Now

Remember “the wall” that used to separate intelligence from criminal investigations and was used as an excuse for intelligence agencies not sharing intelligence they were permitted to share before 9/11?

It was demolished in 2001 — when the PATRIOT Act explicitly permitted what had been permitted before, sharing of intelligence information with the FBI — and 2002 — when the FISA Court of Review overruled presiding FISA Judge Royce Lamberth’s efforts to sustain some Fourth Amendment protections in criminal investigations using minimization procedures.

Nevertheless, the specter of that wall (which didn't prevent the Intelligence Community from discovering 9/11) rising again is one of the things lying behind PCLOB's weak recommendations on back door searches in its report on Section 702.

Of particular note, that’s what the Center for Democracy and Technology’s James Dempsey cites in his squishy middle ground recommendation on back door searches.

It is imperative not to re-erect the wall limiting discovery and use of information vital to the national security, and nothing in the Board’s recommendations would do so. The constitutionality of the Section 702 program is based on the premise that there are limits on the retention, use and dissemination of the communications of U.S. persons collected under the program. The proper mix of limitations that would keep the program within constitutional bounds and acceptable to the American public may vary from agency to agency and under different circumstances. The discussion of queries and uses at the FBI in this Report is based on our understanding of current practices associated with the FBI’s receipt and use of Section 702 data. The evolution of those practices may merit a different balancing. For now, the use or dissemination of Section 702 data by the FBI for non-national security matters is apparently largely, if not entirely, hypothetical. The possibility, however, should be addressed before the question arises in a moment of perceived urgency. Any number of possible structures would provide heightened protection of U.S. persons consistent with the imperative to discover and use critical national security information already in the hands of the government.546 

546 See Presidential Policy Directive — Signals Intelligence Activities, Policy Directive 28, 2014 WL 187435, § 2, (Jan. 17, 2014) (limiting the use of signals intelligence collected in bulk to certain enumerated purposes), available at http://www.whitehouse.gov/the-press-office/2014/01/17/presidential-policy-directive-signals-intelligence-activities.  [my emphasis]

Dempsey situates his comments in the context of the “wall.” He then suggests there are two possible uses of back door searches: “national security matters,” and non-national security matters, with the latter being entirely hypothetical, according to what the FBI self-reported to PCLOB.

Thus, he’s mostly thinking in terms of “possible structures [that] would provide heightened protection of U.S. persons,” to stave off future problems. He points to President Obama’s PPD-28 as one possible model.

But PPD-28 is laughably inapt! Not only does the passage in question address “bulk collection,” which, according to the definition Obama uses and PCLOB has adopted, has nothing to do with Section 702. “[T]he Board does not regard Section 702 as a ‘bulk’ collection program,” PCLOB wrote at multiple points in its report.

More troubling, the passage in PPD-28 Dempsey cites permits bulk collection for the following uses:

(1) espionage and other threats and activities directed by foreign powers or their intelligence services against the United States and its interests;

(2) threats to the United States and its interests from terrorism;

(3) threats to the United States and its interests from the development, possession, proliferation, or use of weapons of mass destruction;

(4) cybersecurity threats;

(5) threats to U.S. or allied Armed Forces or other U.S. or allied personnel;

(6) transnational criminal threats, including illicit finance and sanctions evasion related to the other purposes named in this section;

Ultimately, this represents — or should — an expansion of permissible use of Section 702 data, because its discussion of terrorism and cybersecurity does not distinguish between those with an international nexus and those without. And the discussion of transnational crime might subject any petty drug dealer selling dope from Mexico to foreign intelligence treatment.

That this is what passes for the mushy middle on PCLOB is especially curious given that Dempsey was one of the first PCLOB members to express concern about back door searches. He did so in November’s Section 215 hearing, and even suggested that limiting back door searches to foreign intelligence purposes (which is not the standard for FBI, in any case) was inadequate. Nevertheless, in last week’s report, he backed only very weak protections for back door searches, and did so within the framework of national security versus non-national security, not intelligence versus crime.

Now, I don’t mean to pick on Dempsey exclusively — I’ll have a few more posts on this issue. And to be clear, Dempsey does not represent CDT at PCLOB; he’s there in his private capacity.

But I raised his affiliation with CDT because, in that capacity, Dempsey was part of an amicus brief, along with representatives from the ACLU, Center for National Security Studies, EPIC, and EFF, submitted in In Re Sealed Case in 2002, in which the FISA Court of Review reversed Lamberth and permitted prosecutor involvement in FISA warrants. That brief strongly rebuts the kind of argument he adopted in last week’s PCLOB report.


Riley Meets the Dragnet: Does “Inspection” amount to “Rummaging”?

It’s clear today’s decision in Riley v. California will be important in the criminal justice context. What’s less clear is its impact for national security dragnets.

To answer the question, though, we should remember that question really amounts to several. Does it affect the existing phone dragnet, which aspires to collect the phone records of every person in the US? Does it affect the government’s process of collecting massive amounts of data from which to cull an individual’s data to make up a “fingerprint” that can be used for targeting and other purposes? Will it affect the program the government plans to implement under USA Freedumber, in which the telecoms perform connection-based chaining for the NSA, and then return Call Detail Records as results? Does it affect Section 702? I think the answer may be different for each of these, though I think John Roberts’ language is dangerous for all of this.

In any case, Roberts wants it to be unclear. This footnote, especially, claims this opinion does not implicate cases — governed by the Third Party doctrine — where the collection of data is not considered a search.

Because the United States and California agree that these cases involve searches incident to arrest, these cases do not implicate the question whether the collection or inspection of aggregated digital information amounts to a search under other circumstances.

Orin Kerr (and he is the expert, after all) reads this as directly addressing the mosaic theory, which holds that a Fourth Amendment review must consider the entirety of the government’s collection. Though I’m not impressed with his claim that the analogue language Roberts uses directly addresses the mosaic theory; Kerr seems to be arguing that because Roberts finds another argument unwieldy, he must be addressing the theory that Kerr himself finds unwieldy. Moreover, in addition to this section, which Kerr says supports the mosaic theory,

An Internet search and browsing history, for example, can be found on an Internet-enabled phone and could reveal an individual’s private interests or concerns—perhaps a search for certain symptoms of disease, coupled with frequent visits to WebMD. Data on a cell phone can also reveal where a person has been. Historic location information is a standard feature on many smart phones and can reconstruct someone’s specific movements down to the minute, not only around town but also within a particular building. See United States v. Jones, 565 U. S. ___, ___ (2012) (SOTOMAYOR, J., concurring) (slip op., at 3) (“GPS monitoring generates a precise, comprehensive record of a person’s public movements that reflects a wealth of detail about her familial, political, professional, religious, and sexual associations.”).

I think the paragraph below it also supports the Mosaic theory — particularly its reference to a “revealing montage of the user’s life.”

Mobile application software on a cell phone, or “apps,” offer a range of tools for managing detailed information about all aspects of a person’s life. There are apps for Democratic Party news and Republican Party news; apps for alcohol, drug, and gambling addictions; apps for sharing prayer requests; apps for tracking pregnancy symptoms; apps for planning your budget; apps for every conceivable hobby or pastime; apps for improving your romantic life. There are popular apps for buying or selling just about anything, and the records of such transactions may be accessible on the phone indefinitely. There are over a million apps available in each of the two major app stores; the phrase “there’s an app for that” is now part of the popular lexicon. The average smart phone user has installed 33 apps, which together can form a revealing montage of the user’s life.

I’d argue that the opinion as a whole endorses the notion that you need to assess the totality of the surveillance in question. But then the footnote adopts the awkward phrase, “collection or inspection of aggregated digital information,” to suggest there may be some arrangement under which the conduct of such analysis might not constitute a search requiring a higher standard. (And all that still leaves the likely possibility that the government would scream “special need” and get an exception to get the data anyway; as they surely will do to justify ongoing border searches of computers.)

Of crucial importance, then, Roberts seems to be saying that it might be okay to conduct mosaic analysis, depending on where you get the data and/or whether you actually obtain or instead simply inspect the data.

That’s crucial, of course, because the government is, as we speak, replacing a phone dragnet in which it collects all the data from everyone and analyzes it (or rather, claims to access only a minuscule portion of it, and only through phone-based contacts) with one where it will go to “inspect” the data at telecoms.

So Roberts seems to have left himself an out (or included language designed to placate even Democrats like Stephen Breyer, to say nothing of Clarence Thomas, to achieve unanimity) that happens to line up nicely with where the phone dragnet, at least, is heading.

All that said, Roberts’ caveat may not be broad enough to cover the new-and-improved phone dragnet as the government plans to implement it. After all, the “connection” based analysis the government intends to do may only survive via some kind of argument that letting telecoms serve as surrogate spooks makes this kosher under the Fourth Amendment. Because we have every reason to expect that the NSA intends to — at least — tie multiple online and telecom identities together to chain on all of them, and use cell location to track who you meet. And they may well (likely, if not now, then eventually) intend to use things like calendars and address books that Roberts argues make cell phones not cell phones, but minicomputers that serve as “cameras, video players, rolodexes, calendars, tape recorders, libraries, diaries, albums, televisions, maps, or newspapers.” Every single one of those minicomputer functions is a potential “connection” based chain.

So while the new-and-improved phone dragnet may fall under Roberts’ “inspect” language, it involves far more yoking of the many functions of cell phones that Roberts finds to be problematic.

Then there’s this passage, which Roberts used to deny the government the ability to “just” get call logs.

We also reject the United States’ final suggestion that officers should always be able to search a phone’s call log, as they did in Wurie’s case. The Government relies on Smith v. Maryland, 442 U. S. 735 (1979), which held that no warrant was required to use a pen register at telephone company premises to identify numbers dialed by a particular caller. The Court in that case, however, concluded that the use of a pen register was not a “search” at all under the Fourth Amendment. See id., at 745–746. There is no dispute here that the officers engaged in a search of Wurie’s cell phone. Moreover, call logs typically contain more than just phone numbers; they include any identifying information that an individual might add, such as the label “my house” in Wurie’s case. [my emphasis]

The first part of this passage makes a similar kind of distinction as you see in that footnote (and may support my suspicion that Roberts is trying to carve out space for the new-and-improved phone dragnet). Using a pen register at a telecom is not a search, because it doesn’t involve seizing the phone itself.

But the second part of this passage — which distinguishes between pen registers and call logs — seems to be the most direct assault on the Third Party doctrine in this opinion, because it suggests that data that has been enhanced by a user — phone numbers that are not just phone numbers — may not fall squarely under Smith v. Maryland.

And that’s important because the government intends to get far more data than phone numbers while at the telecoms under the new-and-improved phone dragnet. It surely at least aspires to get logs just like the one Roberts says the cops couldn’t get from Wurie.

Think, too, of how this should limit all the US person data the government collects overseas that the government then aggregates to make fingerprints, claiming incidentally collected data does not require any legal process. That data is seized not from telecoms but rather stolen off cables — does that count as public collection or seizure?

Perhaps the language that presents the most sweeping danger to the dragnet, however, is the line that both Kerr and I like best from the opinion.

Alternatively, the Government proposes that law enforcement agencies “develop protocols to address” concerns raised by cloud computing. Reply Brief in No. 13–212, pp. 14–15. Probably a good idea, but the Founders did not fight a revolution to gain the right to government agency protocols.

Admittedly, Roberts is addressing a specific issue, the government’s proposal of how to protect personal data stored on a cloud that might be accessed from a phone (as if the government gives a shit about such things!).

But the underlying principle is critical. For every single dragnet program the government conducts at NSA, it dismisses obvious Fourth Amendment concerns by pointing to minimization procedures.

The FISC allowed the government to conduct the phone dragnet because it had purportedly strict minimization procedures (which the government ignored); it allowed the government to conduct an Internet dragnet for the same reason; John Bates permitted the government to address domestic content collection he deemed a violation of the Fourth Amendment with new minimization procedures; and the 2008 FISCR opinion approving the Protect America Act (which FISCR and the government say covers FAA as well) relied on targeting and minimization procedures to judge it compliant with the Fourth Amendment. FISC is also increasingly using minimization procedures to deem other Section 215 collections compliant with the law, though we know almost nothing about what they’re collecting (though it’s almost certain they involve Mosaic collection).

Everything, everything, ev-er-y-thing the NSA does these days complies with the Fourth Amendment only under the theory that minimization procedures — “government agency protocols” — provide adequate protection under the Fourth Amendment.

It will take a lot of work, in cases in which the government will likely deny anyone has standing, with SCOTUS’ help, to make this argument. But John Roberts said today that the government agency protocols that have become the sole guardians of the Fourth Amendment are not actually what our Founders were thinking of.

Ultimately, though, this passage may be Roberts’ strongest condemnation — whether he means it or not — of the current dragnet.

Our cases have recognized that the Fourth Amendment was the founding generation’s response to the reviled “general warrants” and “writs of assistance” of the colonial era, which allowed British officers to rummage through homes in an unrestrained search for evidence of criminal activity. Opposition to such searches was in fact one of the driving forces behind the Revolution itself.

Roberts elsewhere says that cell searches are more intrusive than home searches. And by stealing and aggregating that data that originates on our cell phones, the government is indeed rummaging in unrestrained searches for evidence of criminal activity or dissidence. Roberts likely doesn’t imagine this language applies to the NSA (in part because NSA has downplayed what it is doing). But if anyone ever gets an opportunity to demonstrate all that NSA does to the Court, it will have to invent some hoops to deem it anything but digital rummaging.

I strongly suspect Roberts believes the government “inspects” rather than “rummages,” and so believes his opinion won’t affect the government’s ability to rummage, at least at the telecoms.  But a great deal of the language in this opinion raises big problems with the dragnets.

Why Doesn’t FISCR Have a Public Docket?

In the government’s arguments justifying the constitutionality of Section 702, the government has made fairly breathtaking claims that there is a foreign intelligence exception to the Constitution’s warrant requirement.

Which has gotten me wondering about the status of the FISA Court of Review ruling that Yahoo had to comply with Protect America Act orders. Back in July, we were promised a newly declassified version of that ruling, which makes a fairly sustained argument that PAA was legal under a special needs exception to the Fourth Amendment. But we haven’t gotten that order.

Which made me realize something: Although, months ago, the FISA Court established a public docket and even recently gave it a snazzy face lift, the FISA Court of Review does not yet have a docket.

So the declassification of that Yahoo order could be bubbling along and we’d never know about it, despite the government’s claimed commitment to declassify the order.

If FISC can have a docket, why can’t FISCR?

The Triage Document

Accompanying a new story on GCHQ/NSA cooperation yesterday, the Intercept released one of the most revealing documents about NSA spying yet. It describes efforts to use Identifier Scoreboard to triage leads so that analysts spend manual time only on the most promising ones. Basically, the NSA aims to use this process to sort the 75% of the metadata it collects that is interesting but not of high interest into categories for further analysis.

It does so by checking the leads — which are identifiers like email addresses and phone numbers — against collected data (and this extends beyond just stuff collected on the wires; it includes captured media) to see what kind of contacts with existing targets there have been. The system pulls up not only what prior contacts of interest exist, but also in what time frame they occurred and in what number. From there, the analyst can link directly to either the collected knowledge about a target or the content.
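To make that workflow concrete, here is a minimal sketch of what such a triage step might look like. Everything in it (the names, the data structure, the tier thresholds) is my own illustration, not anything taken from the document.

```python
from datetime import datetime

# Hypothetical store of prior contacts: target identifier -> list of
# (timestamp, who initiated) events involving the lead being triaged.
# The structure and names are invented for illustration only.
CONTACTS_WITH_TARGETS = {
    "target_a@example.com": [
        (datetime(2012, 3, 1), "lead_initiated"),
        (datetime(2012, 3, 9), "target_initiated"),
    ],
    "+44-20-7946-0000": [
        (datetime(2012, 4, 2), "lead_initiated"),
    ],
}

def triage(lead_id, contacts):
    """Summarize a lead's prior contacts with known targets and assign a crude tier."""
    summary = []
    for target, events in contacts.items():
        times = [t for t, _ in events]
        summary.append({
            "target": target,
            "contacts": len(events),
            "first": min(times),
            "last": max(times),
        })
    total = sum(row["contacts"] for row in summary)
    # Purely illustrative thresholds: more distinct targets and more contacts,
    # higher priority for manual analyst time.
    if len(summary) >= 2 and total >= 3:
        tier = "analyst review"
    elif summary:
        tier = "further automated checks"
    else:
        tier = "defer"
    return {"lead": lead_id, "targets": summary, "tier": tier}

print(triage("unknown-identifier-123", CONTACTS_WITH_TARGETS))
```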

Before I get into the significance, a few details.

First, the system works with both phone and Internet metadata. That’s not surprising, and it does not yet prove they’re chaining across platforms. But it is another piece of evidence supporting that conclusion.

More importantly, look at the authorities in question:

[Screenshot: the authorities listed in the triage interface]

First, FAA. The CP and CT are almost certainly certificates, the authority to collect on counterproliferation and counterterrorism targets. But note what’s not there: cybersecurity, the third known certificate (a third certificate was reapproved in 2011, so it was active at this time). Which says they may be using that certificate differently (which might make sense, given that you’d be more interested in forensic flows, but this triage system is used with things like TAO, which presumably include cyber targets).

There is, however, a second kind of FAA, “FG.” That may be upstream or it may be something else (FG could certainly stand for “Foreign Government,” which would be consistent with a great deal of other data). If it’s something else, it supports the notion that there’s some quirk to how the government is using FAA that differs from what they’ve told PCLOB and the Presidential Review Group, both of which have said there are just those 3 certificates.

Then there’s FAA 704/705B. This is collection on US persons overseas. Note that FAA 703 (collection on a US person who is located overseas, where the collection itself occurs in the US) is not included. Again, this shows something about how they use these authorities.

Finally, there are two EO12333s. In other slides, we’ve seen an EO12333 and an EO12333 SPCMA (which means you can collect and chain through Americans), and that may be what this is. Update: One other possibility is that this distinguishes between EO12333 data collected by the US and by second parties (the Five Eyes).

Now go to what happens when an identifier has had contact with a target — and remember, these identifiers are just random IDs at this point.

[Screenshot: prior contacts with targets pulled up for an identifier]

The triage program automatically pulls up prior contacts with targets. Realize what this is? It’s a backdoor search, conducted off an identifier about which the NSA has little knowledge.

And the triage provides a link directly from the metadata describing when the contact occurred and who initiated it to the content.

When James Clapper and Theresa Shea describe the metadata serving as a kind of index that helps prioritize what content they read, this is part of what they’re referring to. That — for communications involving people who have already been targeted under whatever legal regime — the metadata leads directly to the content. (Note, this triage does not apparently include BR FISA or PRTT data — that is, metadata collected in the US — which says there are interim steps before such data will lead directly to content, though if that data can be replicated under EO 12333, as analysts are trained to do, it could more directly lead to this content.)

So they find the identifiers, search on prior contact with targets, then pull up that data, at least in the case of EO12333 data. (Another caution, these screens date from a period when NSA was just rolling out its back door search authorities for US persons, and there’s nothing here that indicates these were US persons, though it does make clear why — as last year’s audit shows — NSA has had numerous instances where they’ve done back door searches on US person identifiers they didn’t know were US person identifiers.)

Finally, look at the sources. The communications identified here all came off EO12333 communications (interestingly, this screen doesn’t ID whether we’re looking at EO12333_X or _S data). As was noted to me this morning, the SIGADS that are known here are offshore. But significantly, they include MUSCULAR, where NSA steals from Google overseas.

That is, this screen shows NSA matching metadata with metadata and content that they otherwise might get under FAA, legally, within the US. They’re identifying that as EO12333 data. EO12333 data, of course, gets little of the oversight that FAA does.

At the very least, this shows the NSA engaging in such tracking, including back door searches, off a bunch of US providers, yet identifying it as EO12333 collection.

Update: Two more things on this. Remember NSA has been trying, unsuccessfully, to replace its phone dragnet “alert” function since 2009 when the function was a big part of its violations (a process got approved in 2012, but the NSA has not been able to meet the terms of it technically, as of the last 215 order). This triage process is similar — a process to use with fairly nondescript identifiers to determine whether they’re worth more analysis. So we should assume that, while BR FISA (US collected phone dragnet) information is not yet involved in this, the NSA aspires to do so. There are a number of reasons to believe that moving to having the providers do the initial sort (as both the RuppRoge plan offered by the House Intelligence Committee and Obama’s plan do) would bring us closer to that point.

Finally, consider what this says about probable cause (especially if I’m correct that EO12333_S is the SPCMA collection that includes US persons). Underlying all this triage is a theory of what constitutes risk. It measures risk in terms of conversations — how often, how long, how many times — with “dangerous” people. While that may well be a fair measure in some cases, it may not be (I’ve suggested, for example, that people who don’t know they may be at risk are more likely to speak openly and at length, and those conversations then serve as a kind of camouflage for the truly interesting conversations, kept rare by operational security). But this theory (though not this particular tool) likely lies behind a lot of the FBI’s targeting of young men.

The Verizon Publicity Stunt, Mosaic Theory, and Collective Fourth Amendment Rights

On Friday, I Con the Record revealed that a telecom — Ellen Nakashima confirms it was Verizon — asked the FISA Court to make sure its January 3 order authorizing the phone dragnet had considered Judge Richard Leon’s December 16 decision that it was unconstitutional. On March 20, Judge Rosemary Collyer issued an opinion upholding the program.

Rosemary Collyer’s plea for help

Ultimately, in an opinion that is less shitty than FISC’s previous attempts to make this argument, Collyer examines the US v. Jones decision at length and holds that Smith v. Maryland remains controlling, mostly because no majority has overturned it and SCOTUS has provided no real guidance as to how one might do so. (Her analysis raises some of the nuances I laid out here.)

The section of her opinion rejecting the “mosaic theory” that argues the cumulative effect of otherwise legal surveillance may constitute a search almost reads like a cry for help, for guidance in the face of the obvious fact that the dragnet is excessive and the precedent that says it remains legal.

A threshold question is which standard should govern; as discussed above, the court of appeals’ decision in Maynard and two concurrences in Jones suggest three different standards. See Kerr, “The Mosaic Theory of the Fourth Amendment,” 111 Mich. L. Rev. at 329. Another question is how to group Government actions in assessing whether the aggregate conduct constitutes a search. See id. For example, “[w]hich surveillance methods prompt a mosaic approach? Should courts group across surveillance methods? If so, how?” Id. Still another question is how to analyze the reasonableness of mosaic searches, which “do not fit an obvious doctrinal box for determining reasonableness.” Id. Courts adopting a mosaic theory would also have to determine whether, and to what extent, the exclusionary rule applies: Does it “extend over all the mosaic or only the surveillance that crossed the line to trigger a search?”

[snip]

Any such overhaul of Fourth Amendment law is for the Supreme Court, rather than this Court, to initiate. While the concurring opinions in Jones may signal that some or even most of the Justices are ready to revisit certain settled Fourth Amendment principles, the decision in Jones itself breaks no new ground concerning the third-party disclosure doctrine generally or Smith specifically. The concurring opinions notwithstanding, Jones simply cannot be read as inviting the lower courts to rewrite Fourth Amendment law in this area.

As I read these passages, I imagined that Collyer was trying to do more than 1) point to how many problems overruling the dragnet would cause and 2) uphold the dignity of the rubber-stamp FISC and its 36+ previous decisions holding the phone dragnet legal.

There is reason to believe she knows what we don’t, at least not officially: that even within the scope of the phone dragnet, the dragnet is part of more comprehensive mosaic surveillance, because it correlates across platforms and identities. And all that’s before you consider how, once dumped into the corporate store and exposed to NSA’s “full range of analytic tradecraft,” innocent Americans might be fingerprinted in ways that include our lifestyles.

That is, not only doesn’t Collyer see a way (because of legal boundary concerns about the dragnet generally, and possibly because of institutional concerns about FISC) to rule the dragnet illegal, but I suspect she sees the reverberations that such a ruling would have on the NSA’s larger project, which very much is about building mosaics of intelligence.

No wonder the government is keeping that August 20, 2008 opinion secret, if it indeed discusses the correlations function in the dragnet, because it may well affect whether the dragnet gets assessed as part of the mosaic NSA uses it as.

Verizon’s flaccid but public legal complaint

Now, you might think such language in Collyer’s opinion would invite Verizon to appeal this decision. But given this lukewarm effort, it seems unlikely to do so. Consider the following details:

  • Leon issued his decision December 16. Verizon did not ask the FISC for guidance (which makes sense, because it is only permitted to challenge orders).
  • Verizon got a new Secondary Order after the January 3 reauthorization. It did not immediately challenge the order.
  • It only got around to doing so on January 22 (interestingly, a few days after ODNI exposed Verizon’s role in the phone dragnet a second time), and didn’t do several things — like asking for a hearing or challenging the legality of the dragnet under 50 USC 1861 as applied — that might reflect real concern about anything but the public appearance of legality. (Note, that timing is of particular interest, given that the very next day, on January 23, PCLOB would issue its report finding the dragnet did not adhere to Section 215 generally.)
  • Indeed, this challenge might not have generated a separate opinion if the government weren’t so boneheaded about secrecy.
  • Verizon’s petition is less a challenge to the program than an inquiry into whether the FISC had considered Leon’s opinion.

It may well be the case that this Court, in issuing the January 3, 2014 production order, has already considered and rejected the analysis contained in the Memorandum Order. [redacted] has not been provided with the Court’s underlying legal analysis, however, nor [redacted] been allowed access to such analysis previously, and the order [redacted] does not refer to any consideration given to Judge Leon’s Memorandum Opinion. In light of Judge Leon’s Opinion, it is appropriate [redacted] inquire directly of the Court into the legal basis for the January 3, 2014 production order,

As it turns out, Judge Thomas Hogan (who will take over the thankless presiding judge position from Reggie Walton next month) did consider Leon’s opinion in his January 3 order, as he noted in a footnote.

[Screenshot: footnote from Judge Hogan’s January 3 order addressing Judge Leon’s opinion]

And that’s about all the government said in its response to the petition (see paragraph 3): that Hogan considered it so the FISC should just affirm it.

Verizon didn’t know that Hogan had considered the opinion, of course, because it never gets Primary Orders (as it makes clear in its petition) and so is not permitted to know the legal logic behind the dragnet unless it asks nicely, which is all this amounted to at first.

Note that the government issued its response (as set by Collyer’s scheduling order) on February 12, the same day it released Hogan’s order and its own successful motion to amend it. So ultimately this headache arose, in part, because of the secrecy with which it treats even its most important corporate spying partners, which only learn about these legal arguments on the same schedule as the rest of us peons.

Yet in spite of the government’s effort to dismiss the issue by referencing Hogan’s footnote, Collyer said because Verizon submitted a petition, “the undersigned Judge must consider the issue anew.” Whether or not she was really required to or could have just pointed to the footnote that had been made public, I don’t know. But that is how we got this new opinion.

Finally, note that Collyer made the decision to unseal this opinion on her own. Just as interesting, while neither side objected to doing so, Verizon specifically suggested the opinion could be released with no redactions, meaning its name would appear unredacted.

The government contends that certain information in these Court records (most notably, Petitioner’s identity as the recipient of the challenged production order) is classified and should remain redacted in versions of the documents that are released to the public. See Gov’t Mem. at 1. Petitioner, on the other hand, “request[s] no redactions should the Court decide to unseal and publish the specified documents.” Pet. Mem. at 5. Petitioner states that its petition “is based entirely on an assessment of [its] own equities” and not on “the potential national security effects of publication,” which it “is in no position to evaluate.” Id.

I’ll return to this. But understand that Verizon wanted this opinion — as well as its own request for it — public.


Back Door Searches: One of Two Replacements for the Internet Dragnet?

As I said the other day, most of NSA’s Civil Liberties and Privacy Office comment to the Privacy and Civil Liberties Oversight Board on Section 702 was disappointing boilerplate, less descriptive than numerous other statements already in the public record.

In the passage on back door searches I looked at, however, there was one new detail that is very suggestive. It said NSA does more back door searches on metadata than on content under Section 702.

NSA distinguishes between queries of communications content and communications metadata. NSA analysts must provide justification and receive additional approval before a content query using a U.S. person identifier can occur. To date, NSA analysts have queried Section 702 content with U.S. person identifiers less frequently than Section 702 metadata.

Consider what this means. NSA collects content from a selector — say, all the Hotmail communications of ScaryAQAPTerrorist. That content of course includes metadata (setting aside the question of whether this is legally metadata or content for the moment): the emails and IPs of people who were in communication with that scary terrorist.

The NSA is saying that the greater part of their back door searches on US person identifiers — say, searching on a US person’s email address — is just for metadata.
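To illustrate the distinction with invented data (nothing here reflects actual NSA systems or data structures), a metadata-only back door search would return just the routing fields for messages involving the US person identifier, while a content search, which per the statement above requires extra justification and approval, would also return the bodies:

```python
# Invented records standing in for communications collected under a 702 tasking
# on a single foreign selector.
collected = [
    {"from": "foreign_target@example.com", "to": "us_person@example.com",
     "date": "2012-06-01", "body": "..."},
    {"from": "us_person@example.com", "to": "foreign_target@example.com",
     "date": "2012-06-03", "body": "..."},
]

query_identifier = "us_person@example.com"

# Metadata query: only the routing information for messages involving the identifier.
metadata_hits = [
    {k: msg[k] for k in ("from", "to", "date")}
    for msg in collected
    if query_identifier in (msg["from"], msg["to"])
]

# Content query: the same matching, but the message bodies come back too.
content_hits = [msg for msg in collected if query_identifier in (msg["from"], msg["to"])]

print(metadata_hits)
print(content_hits)
```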

Given the timing, it seems that they’re using back door searches as one of two known replacements for the PRTT Internet dragnet shut down around October 30, 2009, turned on again between July and October 2010, then shut down for good in 2011 (the other being the SPCMA contact chaining of EO 12333 collected data through US person identifiers).

Recall that NSA and CIA first asked for these back door searches in April 2011. That was somewhere between 6 and 9 months after John Bates had permitted NSA to turn the Internet dragnet back on in 2010 under sharply restricted terms. NSA was still implementing their rules for using back door searches in early 2012, just months after NSA had shut down the (domestic) Internet dragnet once and for all.

And then NSA started using 702 collection for a very similar function: to identify whether suspicious identifiers were in contact with known suspicious people.

There are many parts of this practice that are far preferable to the old Internet dragnet.

For starters, it has the benefit of being legal, which the Internet dragnet never was!

Congress and the FISC have authorized NSA to collect this data from the actual service providers, targeting overseas targets. Rather than collecting content-as-metadata from the telecoms — which, no matter how hard they tried, NSA couldn’t make both legal and effective — NSA collected the data from Yahoo and Microsoft and Google. Since the data was collected as content, it solves the content-as-metadata problem.

And this approach should limit the number of innocent Americans whose records are implicated. While everyone in contact with ScaryAQAPTerrorist will potentially be identified via a backdoor search, that’s still less intrusive than having every American’s contacts collected (though if we can believe the NSA’s public statements, the Internet dragnet always collected on fewer people than the phone dragnet did).

That said, the fact that the NSA is presumably using this as a replacement may lead it to task on much broader selectors than they otherwise might have: all of Yemen, perhaps, rather than just certain provinces, which would have largely the same effect as the old Internet dragnet did.

In addition, this seems to reverse the structure of the old dragnet (or rather, replicate some of the problems of the alert system that set off the phone dragnet problems in 2009). It seems an analyst might test a US person identifier — remember, the analyst doesn’t even need reasonable articulable suspicion to do a back door search — against the collected metadata of scary terrorist types, to see if the US person is a baddie. And I bet you a quarter this is automated, so that identifiers that come up in, say, a phone dragnet search are then run against all the baddies, at the press of a button, to see if they also email. And at that point, you’re just one more internal approval step away from getting the US person content.

In short, this would seem to encourage a kind of wild goose chase, to use Internet metadata of overseas contact to judge whether a particular American is suspicious. These searches have a far lower standard than the phone and Internet dragnets did (as far as we know, neither the original collection nor the back door search ever require an assertion of RAS). And the FISC is far less involved; John Bates has admitted he doesn’t know how or how often NSA is using this.

But it is, as far as we know, legal.

Were the 58-61,000 Internet Targets Part of NSA’s 73,000 Targets?

As I noted, Google, Yahoo, and Microsoft all released transparency reports today.

During the second half of 2012, Microsoft had FISA requests affecting 16,000-16,999 accounts, Google had 12,000 – 12,999.  We don’t have Yahoo’s numbers for that period, but for the following six month period they had requests affecting 30,000 – 30,999 accounts; given that numbers for the other two providers dropped during this six month period, it’s likely Yahoo’s did too, so the 30,000 is conservative for the earlier period. So the range for the big 3 email providers in that period is likely around 58,000 – 60,997. [Update: Adding FaceBook would bring it to 62,000 – 64,996. h/t CNet]

I’d like to compare what they report with what this report on FISA Amendments Act compliance shows. I think pages 23 through 26 of the report show that NSA had an average of 73,103 selectors tasked via NSA targeting on any given day during the period from June 1, 2012 to November 30, 2012. That’s because the notification delays from the period (212 — see page 26) should be .29% of the average daily selectors (see the amount on page 23 less the amount without the notification delays on page 34).
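The arithmetic behind those two figures, for anyone who wants to check it (the 212 delays and the .29% figure are as described above from the compliance report; the band sums come from the providers’ reports):

```python
# Sum of the providers' reported account bands for the period
# (Yahoo's figure is borrowed from the following six months, so it is conservative).
low = 16000 + 12000 + 30000    # Microsoft + Google + Yahoo, lower bounds
high = 16999 + 12999 + 30999   # upper bounds
print(low, high)               # 58000 60997

# Compliance report: 212 notification delays, said to be 0.29% of the average
# number of selectors tasked via NSA targeting on any given day.
avg_daily_selectors = 212 / 0.0029
print(round(avg_daily_selectors))  # 73103
```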

But remember: these are not the same measurement. The government report’s number is based on average daily selectors, so it reflects the total of selectors tasked on any given day. Whereas the providers are reporting (I think the numbers must therefore show) the total number of customer selectors affected across the entire 6-month period, and those almost certainly weren’t all tasked across the entire 6-month period (though some surely were).

There’s one possible (gigantic) flaw in this logic. The discussion of the FBI targeting is largely redacted in the government memo. And there have been hints — pretty significant ones — that the FBI takes the lead with the PRISM providers. If so, these numbers are totally unrelated.

Also remember, there are at least two other kinds of 702 targeting: the upstream collection that makes up about 9% of the volume of 702 collection, and phone collection, which is going up again.

This would sure be a lot easier if the government actually backed its claims to transparency.

Is Google Sharing 9,500 Users’ Data, or 65,000?

[Screenshot: Google’s new transparency report numbers for FISA requests]

Google just released its shiny new transparency numbers reflecting DOJ’s new transparency rules.

While they tell us some interesting things, the numbers show how many questions the transparency system raises. I’ve raised the questions below, linked to my discussion by bolded number.

[NSA presentation, PRISM collection dates, via Washington Post]

Google is using option 1 (perhaps because they had already reported their NSL numbers), in which they break out NSLs separately from FISA orders, but must report in bands of 1000.

Note that Google starts this timeline in 2009, whereas their criminal process numbers pertaining to user accounts only start in 2011. Either because they had these FISA numbers ready at hand, or because they made the effort to go back and get them (whereas they haven’t done the same for pre-2011 criminal process numbers), they’re giving us more history on their FISA orders than they did on criminal process. They probably did this to show the entire period during which they’ve been involved in PRISM, which started on January 14, 2009.

Google gets relatively few non-content requests, and the number — which could be zero! — has not risen appreciably since they got involved in PRISM. (1) (I suspect we’re going to see fairly high non-content requests from Microsoft, because they pushed to break these two categories out).


The New Transparency Guidelines

DOJ and the tech companies just came to a deal on new transparency reporting. (h/t Mike Scarcella) It is a big improvement over what the government offered last year, which was:

Option One: Provide total number of requests (criminal NSL, FISA) and total number of accounts targeted, broken out by 1000s

Option Two: Provide exact number of criminal requests and accounts affected, and number of NSLs received and accounts affected, broken out by 1000s, without providing any numbers on FISC service

This approach basically permitted the government to hide the FISC surveillance, by ensuring it only ever appeared lumped into the larger universe of criminal requests, along with other bulk requests. In addition, it didn’t let providers say whether they were mostly handing over metadata (NSLs would be limited to metadata, though FISC requests might include both metadata and content) or content in a national security context.

The new solution is:

Option One: Biannual production, with a 6-month delay on FISC reporting

  1. Criminal process, subject to no restrictions
  2. NSLs and the number of customer accounts affected by NSLs, reported in bands of 1000, starting at 0-999
  3. FISA orders for content and the number of customer selectors targeted, both reported in bands of 1000, starting at 0-999
  4. FISA orders for non-content and the number of customer selectors targeted, both reported in bands of 1000, starting at 0-999*

This option imposes a two-year reporting delay for new (internally developed or purchased) platforms, products, or services. So, for example, if Google started to get Nest orders today, it couldn’t include them in its reporting until 2 years from now.

Option Two:

  1. Criminal process, subject to no restrictions
  2. Total national security process, including NSLs and FISA lumped together, reported in bands of 250, starting at 0-250
  3. Total customer selectors targeted under all national security requests, reported in bands of 250, starting at 0-250

* The order has a footnote basically saying the government hasn’t ceded the issue of reporting on the phone dragnet yet (though only tech companies were parties to this, and their only telecom production would be VOIP).
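As a rough sketch of how the band reporting works (my own illustration, not anything specified in the order), an exact count simply gets mapped to the band it falls in:

```python
def reporting_band(count, width):
    """Map an exact count to its reporting band, e.g. 1,432 with width 1000 -> '1000-1999'."""
    lower = (count // width) * width
    return f"{lower}-{lower + width - 1}"

# Option One: NSLs and FISA orders reported separately, in bands of 1000.
print(reporting_band(1432, 1000))   # '1000-1999'
print(reporting_band(0, 1000))      # '0-999': a provider that received nothing reports
                                    # the same band as one that received 999 requests.

# Option Two lumps all national security process together and narrows the width to 250;
# the same function with width=250 covers that case.
```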

So my thoughts:

First, you can sort of see what the government really wants to hide with these schemes. They don’t want you to know if they submit a single NSL or 215 order affecting 1,000 customers, which might be apparent without the bands. They don’t want you to see if there’s a provider getting almost no requests (which would be hidden by the initial bands).

And obviously, they don’t want you to know when they bring new capabilities online, in the way they didn’t want users to know they had broken Skype. Though at this point, what kind of half-assed terrorist wouldn’t just assume the NSA has everything?

I think the biggest shell game might arise from the distinction between account (say, my entire Google identity) and selector (my various GMail email addresses, Blogger ID, etc.). By permitting reporting on selectors, not users, this could obscure whether a report affects 30 identities of one customer or the accounts of 30 customers. Further, there’s a lot we still don’t know about what FISC might consider a selector (it has, in the past, considered entire telecom switches to be selectors).
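A toy example of why that distinction matters (the numbers and structure are mine, purely for illustration):

```python
# One customer ("account") can own many selectors.
accounts = {
    "customer_1": ["gmail_address_1", "gmail_address_2", "blogger_id", "youtube_id"],
    "customer_2": ["gmail_address_3"],
}

selectors_targeted = sum(len(sel) for sel in accounts.values())
customers_affected = len(accounts)

print(selectors_targeted)   # 5: the figure the new rules allow providers to report
print(customers_affected)   # 2: the figure a selector count can obscure
```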

But it will begin to give us an outline of how often they’re using NatSec process as opposed to criminal process, which providers are getting primarily NSL orders and which are getting potentially more exotic FISC orders. Further, it will tell us more about what the government gets through the PRISM program, particularly with regard to metadata versus content.

Update: Apple’s right out of the gate with their report of fewer than 250 orders affecting fewer than 250 “accounts,” which doesn’t seem to be how they’re supposed to report using that option.

Update: Remember, Verizon issued a transparency report itself, just 5 days ago. Reporting under these new guidelines wouldn’t help them much as the government has bracketed whether it could release phone dragnet information. Moreover, Verizon is almost certainly one of the telecoms that provide upstream content; that would likely show up as just one selector, but it’s not clear how it gets reported.

3 Certifications — Terror, Proliferation, and Cyber — and Stealing from Google

[Screenshot: the second page of John Bates’ October 3, 2011 opinion on Section 702]

For months, I have been suggesting that the government only uses Section 702 of FISA, under which it collects data directly from US Internet providers and conducts some upstream collection of content from telecom providers, for three purposes:

  • Counterterrorism
  • Counterproliferation
  • Cyber

I have said so based on two things: many points in documents — such as the second page from John Bates’ October 3, 2011 opinion on 702, above — make it clear there are 3 sets of certifications for 702 collection. And other explainer documents released by the government talk about those three topics (though they always stop short of saying the government collects on only those 3 topics).

The NSA Review Group report released yesterday continues this pattern in perhaps more explicit form.

[S]ection 702 authorized the FISC to approve annual certifications submitted by the Attorney General and the Director of National Intelligence (DNI) that identify certain categories of foreign intelligence targets whose communications may be collected, subject to FISC-approved targeting and minimization procedures. The categories of targets specified by these certifications typically consist of, for example, international terrorists and individuals involved in the proliferation of weapons of mass destruction.

If I’m right, it explains one of the issues driving overseas collection and, almost certainly, rising tensions with the Internet companies.

I suggested, for example, that this might explain why NSA felt the need to steal data from Google’s own fiber overseas.

I wonder whether the types of targets they’re pursuing have anything to do with this. For a variety of reasons, I’ve come to suspect NSA only uses Section 702 for three kinds of targets.

  • Terrorists
  • Arms proliferators
  • Hackers and other cyber-attackers

According to the plain letter of Section 702 there shouldn’t be this limitation; Section 702 should be available for any foreign intelligence purpose. But it’s possible that some of the FISC rulings — perhaps even the 2007-8 one pertaining to Yahoo (which the government is in the process of declassifying as we speak) — rely on a special needs exception to the Fourth Amendment tied to these three types of threats (with the assumption being that other foreign intelligence targets don’t infiltrate the US like these do).

Which would make this passage one of the most revealing of the WaPo piece.

One weekly report on MUSCULAR says the British operators of the site allow the NSA to contribute 100,000 “selectors,” or search terms. That is more than twice the number in use in the PRISM program, but even 100,000 cannot easily account for the millions of records that are said to be sent back to Fort Meade each day.

Given that NSA is using twice as many selectors, it is likely the NSA is searching on content outside whatever parameters FISC sets for it, perhaps on completely unrelated topics altogether. This may well be foreign intelligence, but it may not be content the FISC has deemed worthy of this kind of intrusive search.

That is, if NSA can only collect 3 topics domestically, but has other collection requirements it must fulfill — such as financial intelligence on whether the economy is going to crash, which FISC would have very good reasons not to approve as a special need for US collection — then they might collect it overseas (and in the Google case, they do it with the help of GCHQ). But as Google moved to encryption by default, NSA would have been forced to find new ways to collect it.

Which might explain why they found a way to steal data in motion (on Google’s cables, though).

Here’s the thing, though. As I’ll note in a piece coming out later today, the Review also emphasizes that EO 12333 should only be available for collection not covered by FISA. With Section 702, FISA covers all collection from US Internet providers. So FISC’s refusal to approve collection on other topics (or DOJ’s reluctance to ask for such approval) should foreclose that collection entirely. The government should not be able to collect some topics under 702 here, then steal on other topics overseas.

But it appears that’s what it’s doing.
