FISCR Used an Outdated Version of EO 12333 to Rule Protect America Act Legal

If the documents relating to Yahoo’s challenge of Protect America Act released last month are accurate reflections of the documents actually submitted to the FISC and FISCR, then the government submitted a misleading document on June 5, 2008 that was central to FISCR’s ultimate ruling.

As I laid out here in 2009, FISCR relied on the requirement in EO 12333 that the Attorney General determine there is probable cause a wiretapping technique used in the US is directed against a foreign power to judge that the Protect America Act met probable cause requirements.

The procedures incorporated through section 2.5 of Executive Order 12333, made applicable to the surveillances through the certifications and directives, serve to allay the probable cause concern.

The Attorney General hereby is delegated the power to approve the use for intelligence purposes, within the United States or against a United States person abroad, of any technique for which a warrant would be required if undertaken for law enforcement purposes, provided that such techniques shall not be undertaken unless the Attorney General has determined in each case that there is probable cause to believe that the technique is directed against a foreign power or an agent of a foreign power.

44 Fed. Reg. at 59,951 (emphasis supplied). Thus, in order for the government to act upon the certifications, the AG first had to make a determination that probable cause existed to believe that the targeted person is a foreign power or an agent of a foreign power. Moreover, this determination was not made in a vacuum. The AG’s decision was informed by the contents of an application made pursuant to Department of Defense (DOD) regulations. See DOD, Procedures Governing the Activities of DOD Intelligence Components that Affect United States Persons, DOD 5240.1-R, Proc. 5, Pt. 2.C.  (Dec. 1982).

Yahoo didn’t buy this argument. It had a number of problems with it, notably that nothing prevented the government from changing Executive Orders.

While Executive Order 12333 (if not repealed), provides some additional protections, it is still not enough.


Thus, to the extent that it is even appropriate to examine the protections in the Executive Order that are not statutorily required, the scales of the reasonableness determination sway but do not tip towards reasonableness.

Yahoo made that argument on May 29, 2008.

Sadly, Yahoo appears not to have noticed the best argument that Courts shouldn’t rely on EO 12333 because the President could always change it: Sheldon Whitehouse’s revelation on December 7, 2007 (right in the middle of this litigation) that OLC had ruled the President could change it in secret and not note the change publicly. Whitehouse strongly suggested that the Executive in fact had changed EO 12333 without notice to accommodate its illegal wiretap program.

But the government appears to have intentionally withheld further evidence about how easily it could change EO 12333 — and in fact had, right in the middle of the litigation.

This is the copy of the Classified Annex to EO 12333 that (at least according to the ODNI release) the government submitted to FISCR in a classified appendix on June 5, 2008 (that is, after Yahoo had already argued that an EO, and the protections it affords, might change). It is a copy of the original Classified Appendix signed by Ed Meese in 1988.

As I have shown, Michael Hayden modified NSA/CSS Policy 1-23, which includes and incorporates EO 12333, on March 11, 2004, the day after the hospital confrontation. The content of the Classified Annex released in 2013 appears to be identical, in its unredacted bits, to the original as released in 1988 (see below for a list of the different things redacted in each version). So the actual content of what the government presented may (or may not) be a faithful representation of the Classified Appendix as it currently existed.

But the version of NSA/CSS Policy 1-23 released last year (starting at page 110) provides this modification history:

This Policy 1-23 supersedes Directive 10-30, dated 20 September 1990, and Change One thereto, dated June 1998. The Associate Director for Policy endorsed an administrative update, effective 27 December 2007 to make minor adjustments to this policy. This 29 May 2009 administrative update includes changes due to the FISA Amendments Act of 2008 and in core training requirements.

That is, Michael Hayden’s March 11, 2004 modification of the Policy reverted it to the Directive as it existed before two changes made under Clinton.

Just as importantly, the modification history reflects “an administrative update” making “minor adjustments to this policy” effective December 27, 2007 — a month and a half after this challenge started.

By presenting the original Classified Appendix — to which Hayden had apparently reverted in 2004 — rather than the up-to-date Policy, the government was presenting what they were currently using. But they hid that they had made changes to it right in the middle of this litigation, which would have made it clear that courts can’t rely on Executive Orders to protect the rights of Americans, especially when they include Classified Annexes hidden within Procedures.

In its language relying on EO 12333, FISCR specifically pointed to DOD 5240.1-R. The Classified Annex to EO 12333 is required as part of compliance with that regulation, which was in turn invoked for PAA compliance on August 27, 2007.

That is, this Classified Annex is a part of the Russian dolls of interlocking directives and orders that implement EO 12333.

And they were changing, even as this litigation was moving forward.

Only, the government appears to have hidden that information from the FISCR.

Update: Clarified that NSA/CSS Policy 1-23 is what got changed.

Update: Hahaha. The copy of DOD 5240.1-R which the government submitted on December 11, 2007, still bears the cover sheet labeling it as an Annex to NSA/CSS Directive 10-30. Which of course had been superseded in 2004.

Note how they cut off the date to hide that it was 1990?



Why Isn’t FBI Investigating the Hackers Who Broke into Google’s Cables?

At his Brookings event yesterday, Jim Comey claimed that there is a misperception, in the wake of the Snowden releases, about how much data the government obtains.

In the wake of the Snowden disclosures, the prevailing view is that the government is sweeping up all of our communications. That is not true. And unfortunately, the idea that the government has access to all communications at all times has extended—unfairly—to the investigations of law enforcement agencies that obtain individual warrants, approved by judges, to intercept the communications of suspected criminals.


It frustrates me, because I want people to understand that law enforcement needs to be able to access communications and information to bring people to justice. We do so pursuant to the rule of law, with clear guidance and strict oversight. 

He goes on to pretend that Apple and Google are default encrypting their phones solely as a marketing gimmick, some arbitrary thing crazy users want.

Both companies are run by good people, responding to what they perceive is a market demand. But the place they are leading us is one we shouldn’t go to without careful thought and debate as a country.


Encryption isn’t just a technical feature; it’s a marketing pitch. But it will have very serious consequences for law enforcement and national security agencies at all levels. Sophisticated criminals will come to count on these means of evading detection. It’s the equivalent of a closet that can’t be opened. A safe that can’t be cracked. And my question is, at what cost?

He ends with a plea that “our private sector partners … consider changing course.”

But we have to find a way to help these companies understand what we need, why we need it, and how they can help, while still protecting privacy rights and providing network security and innovation. We need our private sector partners to take a step back, to pause, and to consider changing course.

There’s something missing from Comey’s tale.

An explanation of why the FBI has not pursued the sophisticated criminals who stole Google’s data overseas.

At a recent event with Ron Wyden, the Senator asked Schmidt to weigh in on the phone encryption “kerfuffle.” And Schmidt was quite clear: the reason Google and Apple are doing this is because the NSA’s partners in the UK stole their data, even while they had access to it via PRISM.

The people who are criticizing this should have expected this. After Google was attacked by the British version of the NSA, we were annoyed and so we put end-to-end encryption at rest, as well as through our systems, making it essentially impossible for interlopers — of any kind — to get that information.

Schmidt describes the default encryption on the iPhone, notes that it has been available for the last 3 years on Android phones, and will soon be standard, just like it is on iPhone.

Law enforcement has many many ways of getting information that they need to provide this without having to do it without court orders and with the possible snooping conversation. The problem when they do it randomly as opposed to through a judicial process is it erodes user trust.

If everything Comey said were true, if this were only about law enforcement getting data with warrants, Apple – and Google especially – might not have offered their customers the privacy they deserved. But it turns out Comey’s fellow intelligence agency decided to just go take what they wanted.

And FBI did nothing to solve that terrific hack and theft of data.

I guess FBI isn’t as interested in rule of law as Comey says.

I Con the Record’s International Privacy Guidelines Swallowed Up by Exceptions

Sometimes I Con the Record outdoes itself.

On Tuesday, the Guardian noted a scathing report UN Counterterrorism special rapporteur Ben Emmerson issued last month attacking British and US collection of bulk communications.

“Merely to assert – without particularization – that mass surveillance technology can contribute to the suppression and prosecution of acts of terrorism does not provide an adequate human rights law justification for its use. The fact that something is technically feasible, and that it may sometimes yield useful intelligence, does not by itself mean that it is either reasonable or lawful.”


“It is incompatible with existing concepts of privacy for states to collect all communications or metadata all the time indiscriminately. The very essence of the right to the privacy of communication is that infringements must be exceptional, and justified on a case-by-case basis.”

Today, I Con the Record released a “Status Report” on an initiative President Obama ordered in his PPD-28 back in January to extend privacy protections to foreigners.

As we work to meet the January 2015 deadline, PPD-28 called on the Director of National Intelligence to prepare an interim report on the status of our efforts and to evaluate, in coordination with the Department of Justice and the rest of the Intelligence Community, additional retention and dissemination safeguards.

The DNI’s interim report is now being made available to the public in line with our pledge to share as much information about sensitive intelligence activities as is possible, consistent with our national security.

One thing this interim report requires is that “elements shall publicly release their PPD-28 implementation policies and procedures to the maximum extent possible.” Which requirement, you might assume, this release fulfills.

Which is why it’s so curious I Con the Record chose not to release an unclassified report mandated and mandating transparency — dated July 2014 — until October 2014.

Lest I be called a cynic, let me acknowledge that there are key parts of this that may represent improvements (or may not). The report asserts:

  • Foreigners will be treated with procedures akin to — though not identical to — those imposed by Section 2.3 of EO 12333
  • Just because someone is a foreigner doesn’t mean their information is foreign intelligence; the IC should “permanently retain or disseminate such personal information only if the personal information relates to an authorized intelligence requirement, is reasonably believed to be evidence of a crime, or meets one of the other standards for retention or dissemination identified in section 2.3” of EO 12333
  • The IC should consider adopting (though is not required to) retention periods used with US person data for foreign personal information (which is 5 years); the IC may get extensions, but only in 5-year chunks of time
  • When disseminating “unevaluated personal information,” the IC should make that clear so the recipient can protect it as such

Those are good things! Yeah us!

There are, however, a series of exceptions to these rules.

First, the guidelines in this report restate PPD-28’s unbelievably broad approval of the use of bulk data, in full. The report does include this language:

[T]he procedures must also reflect the limitations on the use of SIGINT collected in bulk. Moreover, Intelligence Community element procedures should include safeguards to satisfy the requirements of this section. In developing procedures to comply with this requirement, the Intelligence Community must be mindful that to make full use of intelligence information, an Intelligence Community element may need to use SIGINT collected in bulk together with other lawfully collected information. In such situations, Intelligence Community elements should take care to comply with the limitations applicable to the use of bulk SIGINT collection.

Unless I’m missing something, the only “limits” in this section are those limiting the use of bulk collection to almost all of NSA’s targets, including counterterrorism, cybersecurity, and crime, among other things. Thus, the passage not only reaffirms what amounts to a broad permission to use bulk, but then attaches those weaker handling rules to anything used in conjunction with bulk.

Then there are the other exceptions. The privacy rules in this document don’t apply to:

  • Evaluated intelligence (exempting foreigners’ data from the most important treatment US person data gets, minimization in finished intelligence reports; see footnote 3)
  • Personal information collected via other means than SIGINT (excluding most of what the CIA and FBI do, for example; see page 1)
  • Information collected via SIGINT not collecting communications or information about communications (seemingly excluding things like financial dragnets and pictures and potentially even geolocation, among a great many other things; see footnote 2)

And, if these procedures aren’t loosey goosey enough for you, the report includes this language:

It is important that elements have the ability to deviate from their procedures when national security requires doing so, but only with approval at a senior level within the Intelligence Community element and notice to the DNI and the Attorney General.

OK then.

Congratulations world! We’re going to treat you like Americans. Except in the majority of situations when we’ve decided not to grant you that treatment. Rest easy, though, knowing your data is sitting in a database for only 5 years, if we feel like following that rule.

Richard Burr Prepares to Capitalize on Refusing to Exercise Intelligence Oversight

In James Risen’s new book, he provides new details on what happened to the NSA whistleblowers — Bill Binney, Kurt Wiebe, Ed Loomis, Thomas Drake — who tried to stop President Bush’s illegal wiretap program, adding to what Jane Mayer wrote in 2011. He pays particular attention to the effort Diane Roark made, as a staffer overseeing NSA on the House Intelligence Committee, to alert people that the Agency was conducting illegal spying on Americans.

As part of that, Risen describes an effort Roark made to inform another Congressman of the program, one who had not been briefed: Richard Burr.

Despite the warning from (HPSCI’s Republican Staff Director Tim) Sample not to talk with anyone else on the committee about the program, she privately warned Chris Barton, the committee’s new general counsel, that “there was an NSA program of questionable legality and that it was going to blow up in their faces.” In early 2002, Roark also quietly arranged a meeting between Binney, Loomis, and Wiebe and Richard Burr, a North  Carolina Republican on the House Intelligence Committee. Binney told Burr everything they had learned about the NSA wiretapping program, but Burr hardly said a word in response. Burr never followed up on the matter with Roark, and there is no evidence he ever took any action to investigate the NSA program.

I’m not actually surprised that Burr learned the Intelligence Community was engaging in illegal behavior and did nothing. From what we’ve seen in his response to torture, he has served entirely to help CIA cover up the program and protect the torturers. Indeed, in his treatment of John Brennan’s confirmation, he made efforts to ensure Brennan would have to protect the torturers too.

So it’s no surprise that Burr heard details of an illegal program and ignored them.

Still, it’s worth highlighting this detail because, if Democrats do lose the Senate as they are likely to do in November, Richard Burr will most likely become Senate Intelligence Committee Chair. While Dianne Feinstein may be a badly flawed Chair overseeing the IC, Burr will be a nightmare, unloosing them to do whatever they’re ordered.

That’s the kind of career advancement that comes to a guy who remains silent about wrongdoing.

A Remarkable Date for the Virgin Birth of the Silk Road Investigation

As Wired first reported, there’s been an interesting exchange in the Silk Road prosecution. In September, the former FBI Agent who helped to bust accused Silk Road operator Ross Ulbricht, Christopher Tarbell, submitted a declaration explaining the genesis of the investigation by claiming the FBI got access to the Silk Road server because it became accessible via a non-Tor browser. In response, Ulbricht lawyer Joshua Horowitz submitted a declaration claiming Tarbell’s claims were implausible because the FBI wouldn’t have been able to get into Silk Road’s back end. The government responded by claiming that even if it did hack the website, it would not have been illegal.

Given that the SR Server was hosting a blatantly criminal website, it would have been reasonable for the FBI to “hack” into it in order to search it, as any such “hack” would simply have constituted a search of foreign property known to contain criminal evidence, for which a warrant was not necessary.

On Friday, Judge Katherine Forrest rejected Ulbricht’s efforts to throw out the evidence from the alleged hack, accepting the government’s argument that Ulbricht had no expectation of privacy on that server regardless of when and how the government accessed it.

The temporal problems with the government’s story

Most of the coverage on this exchange has focused on the technical claims. But just as interesting are the temporal claims. Horowitz summarizes that problem this way:

[S]everal critical files provided in discovery contain modification dates predating the first date Agent Tarbell claims Icelandic authorities imaged the Silk Road Server, thereby casting serious doubt on the chronology and methodology of his account;

The government claims that server was first imaged on July 23, 2013.

As I’ll lay out below, Horowitz and Tarbell provide a lot of details suggesting something — perhaps the imaging of the server, perhaps something more – happened six weeks earlier.

But before we get there, consider the date: June 6, 2013.

June 6, 2013 was the day after the afternoon publication of the first Snowden leak, and the day before the Guardian made it clear their leak included cyberwar materials.

That is, the FBI claims to have officially “found” the Silk Road server at the same time the Snowden leaks started, even while they date their investigation to 6 weeks later.

The June 6 materials

FBI’s Tarbell is much vaguer about this timing than Ulbricht’s team is. As Tarbell tells it, on some unknown date in early June 2013, he and a colleague were sniffing Silk Road data when they discovered an IP not known to be tied to Tor.

In or about early June 2013, another member of CY-2 and I closely examined the traffic data being sent from the Silk Road website when we entered responses to the prompts contained in the Silk Road login interface.

That led them to look further, according to Tarbell. When he typed the IP into a non-Tor browser, he discovered it was leaking.

When I typed the Subject IP Address into an ordinary (non-Tor) web browser, a part of the Silk Road login screen (the CAPTCHA prompt) appeared. Based on my training and experience, this indicated that the Subject IP Address was the IP address of the SR Server, and that it was “leaking” from the SR Server because the computer code underlying the login interface was not properly configured at the time to work on Tor.
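
The misconfiguration Tarbell describes is easy to picture in code. The sketch below is purely illustrative: a minimal check for whether a clearnet IP answers with hidden-service login content over an ordinary HTTP connection. The marker string and helper names are mine, not anything from the filings.

```python
# Hedged sketch (not the FBI's actual tooling): does an IP address, fetched
# over the ordinary Internet with no Tor involved, serve content associated
# with a hidden service's login screen? Marker and IP are hypothetical.
import urllib.request


def contains_marker(body: str, marker: str = "captcha") -> bool:
    """Case-insensitive check for a login-page marker in fetched HTML."""
    return marker.lower() in body.lower()


def leaks_login_page(ip: str, marker: str = "captcha", timeout: int = 10) -> bool:
    """Fetch http://<ip>/ directly (no Tor) and look for the marker."""
    try:
        with urllib.request.urlopen(f"http://{ip}/", timeout=timeout) as resp:
            body = resp.read(65536).decode("utf-8", errors="replace")
    except OSError:
        return False  # unreachable or refused: no visible leak
    return contains_marker(body, marker)
```

A hit from a check like this would only show the server answers outside Tor; it says nothing about how an investigator came to try that IP in the first place, which is exactly the gap the defense is probing.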

That led the government to ask Iceland, on June 12, to image the server. Iceland didn’t do so, according to the official narrative, until the next month.

The defense doesn’t buy this — in part because, by Tarbell’s own account, he didn’t adhere to standard forensic procedure by keeping copies of his packet sniffing.

Failure to preserve packet logs recorded while investigating the Silk Road servers would defy the most basic principles of forensic investigative techniques.


[T]he government’s position is that former SA Tarbell conducted his investigation of Silk Road, and penetrated the Silk Road Server, without documenting his work in any way.

According to the government, the only record of Tarbell’s access to the server from this period is from access logs dated June 11.

[A]n excerpt of 19 lines from Nginx access logs, attached hereto as Exhibit 5, supposedly showing law enforcement access to the .49 server from a non-Tor IP address June 11, 2013, between 16:58:36 and 17:00:40. According to the Government, this is the only contemporaneous record of the actions described by the Tarbell Declaration at ¶¶ 7-8.

Given that this bears a particular date, I find it all the more curious that Tarbell doesn’t date when he was doing the packet sniffing.
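
For what it’s worth, the kind of check the defense ran on Exhibit 5, pulling the access window out of a short Nginx log excerpt, is mechanical. A minimal sketch, with invented log lines standing in for the actual exhibit:

```python
# Hedged sketch: extract the first/last timestamps from Nginx access-log
# lines in the default combined format. Any sample lines are invented,
# not the actual Exhibit 5.
import re
from datetime import datetime

LOG_TS = re.compile(r"\[(\d{2}/\w{3}/\d{4}:\d{2}:\d{2}:\d{2})")


def access_window(lines):
    """Return (earliest, latest) datetimes found across the log lines."""
    stamps = [
        datetime.strptime(m.group(1), "%d/%b/%Y:%H:%M:%S")
        for line in lines
        if (m := LOG_TS.search(line))
    ]
    return (min(stamps), max(stamps)) if stamps else None
```

Nineteen lines spanning two minutes is a very thin record for what the declaration describes, which is the defense’s point.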

There are a number of other details that point back to that June 6 date. Perhaps most significant is that, on June 6, Iceland imaged a server Silk Road had earlier been using.

There are a total of 4 tarballs in the first item of discovery: home, var, all, and orange21 – all contained in .tar.gz files. The mtime for orange21.tar.gz is consistent with the July 23, 2013 image date. However, the other 3 tarballs have an mtime of June 6, 2013, as shown below:

  • root 30720 Jun 6 2013 home.tar.gz
  • root 737095680 Jun 6 2013 var.tar.gz
  • root 1728276480 Jun 6 2013 all.tar.gz
  • root 22360048285 Jul 23 2013 orange21.tar.gz

The modification date of the tarballs is consistent with an imaging date of June 6, 2013, a full six weeks before the July 23, 2013, imaging of the .49 Server, a fact never mentioned in the Tarbell Declaration.

Though — as the defense points out — Tarbell didn’t mention that earlier imaging. He notes an earlier “lead” on the Silk Road server that resolved by May, and he notes that after Ulbricht’s arrest they obtained record of him noting leaks in the server.

5 After Ulbricht’s arrest, evidence was discovered on his computer reflecting that IP address leaks were a recurring problem for him. In a file containing a log Ulbricht kept of his actions in administering the Silk Road website, there are multiple entries discussing various leaks of IP addresses of servers involved in running the Silk Road website and the steps he took to remedy them.  For example, a March 25, 2013 entry states that the server had been “ddosd” – i.e., subjected to a distributed denial of service attack, involving flooding the server with traffic – which, Ulbricht concluded, meant “someone knew the real IP.” The entry further notes that it appeared someone had “discovered the IP via a leak” and that Ulbricht “migrated to a new server” as a result. A May 3, 2013 entry similarly states: “Leaked IP of webserver to public and had to redeploy/shred [the server].” Another entry, from May 26, 2013, states that, as a result of changes he made to the Silk Road discussion forum, he “leaked [the] ip [address of the forum server] twice” and had to change servers.


7 Several months earlier, the FBI had developed a lead on a different server at the same Data Center in Iceland (“Server-1”), which resulted in an official request for similar assistance with respect to that server on February 28, 2013. See Ex. B. Due to delays in processing the request, Icelandic authorities did not produce traffic data for Server-1 to the FBI until May 2013. See Ex. A. By the time the FBI received the Server-1 traffic data, there was little activity on Server-1, indicating that it was no longer hosting a website. (As a result, the FBI did not request that Icelandic authorities proceed with imaging Server-1.) There was still some outbound Tor traffic flowing from Server-1, though, consistent with it being used as a Tor node; yet Server-1 was not included in the public list of Tor nodes, see supra n.4. Based on this fact, I believed, by the time of the June 12 Request, that the administrator of Silk Road was using Server-1 as a Tor “bridge” when connecting to the SR Server, as indicated in the June 12 Request. See Ex. A, at 1. (A Tor “bridge” is a private Tor node that can be used to access the Tor network, as opposed to using a
public Tor node that could be detected on one’s Internet traffic. See Tor: Bridges, available at http://torproject.org/docs/bridges.) To be clear, however, the traffic data obtained for Server-1 did not reflect any connection to, or otherwise lead to the identification of, the Subject IP Address. The Subject IP Address was independently identified solely by the means described above – i.e., by examining the traffic data sent back from the Silk Road website when we interacted with its user login interface.

The two other details that point to June 6 may not actually exonerate Ulbricht. Silk Road’s live-ssl config file was altered on June 7, which is the earliest date for the site configuration provided in discovery (though page 23 has some additional dates).

The mtime for the live-ssl configuration file provided in Item 1 of discovery is June 7, 2013, and the phpmyadmin configuration is July 6, 2013.8

8 Since Item 1 is the oldest image provided in discovery the defense does not have site configuration data prior to June 7, 2013.

And, as Horowitz reiterates, the earliest date for which the defense was provided discovery of a server imaging was June 6.

According to the government, the earliest image was captured June 6, 2013, and the latest in November 2013.

From a technical stand point, I’m not sure what to make of this.

A remarkable coincidence

It’s clear, however, that FBI was tracking Silk Road well before June, and for some reason decided to make June the official start date (and, perhaps more significantly, official discovery start date; they’ve refused earlier discovery because it won’t be used in trial) of their investigation. At the same time, Ulbricht’s defense seems reluctant to explain why they’re asking for earlier discovery; perhaps that’s because they’d have to admit Ulbricht was aware of probes of the website before then. Forrest rejected their argument because Ulbricht refused to submit a declaration that this was his server.

But I am rather struck by the timing. As I said, the first Edward Snowden story — the June 5, 2013 Verizon release that could have no tie to the Silk Road investigation and, the next day, the WaPo and Guardian PRISM releases (there were very late Google and Facebook requests that seem like parallel construction, but since Ulbricht is a US citizen, his communications should not have been available via PRISM) — came roughly the day before Iceland imaged the other server.

I asked both Glenn Greenwald and Bart Gellman, and it seems the earliest the government could have had official notice of that story may have been late on June 4 though probably June 5 (things get funny with the Guardian, apparently, because of Greenwich Mean Time). A more relevant leak to the Silk Road investigation was the President’s Policy Directive on cyberwar — which Guardian published on June 7 (they may not have warned the government until that morning however).

So it may all be one big coincidence – that the government created a virgin birth for the Silk Road investigation that happened to be the same day that a torrent of leaks on the NSA and GCHQ started, ultimately revealing things like the government’s targeting of the Tor network (just days after Ulbricht was arrested on October 2, 2013).

But it certainly seems possible that those investigating Silk Road felt the need to begin to roll up the investigation as that torrent of leaks started, perhaps worrying that the methods they (or GCHQ) were using might be exposed before they had collected the evidence.

Update: A few more points about this. My suspicion is that, if there is a tie between the Snowden leaks and the Silk Road investigation, it stems from the government’s recognition that some of the methods it used to find Ulbricht would become known through Snowden’s leaks, so it moved to establish an alternate means of discovery before Ulbricht might learn of those actual methods. As one example, recall that subsequent to Snowden’s leaks about XKeyscore, Jacob Appelbaum got information showing XKeyscore tracks those who use Tor. While there are a number of things it seems Ulbricht’s lawyers believe were parallel constructed (unnamed “law enforcement officers” got warrants for his Gmail and Facebook accounts in September), they most aggressively fought the use of a Title III Pen Register to track IP addresses personally associated with Ulbricht, also in September. It seems that would have been available via other means, especially XKeyscore, especially since by encrypting communication Ulbricht’s communications could be retained indefinitely under NSA’s minimization procedures.

Additionally, the language the government used to refuse information on a range of law enforcement and spying agencies sure sounds like they clean teamed this investigation.

The Government also objects to the unbounded definition of the term “government” set forth in the September 17 Requests. Specifically, the requests ask the prosecution to search for information within “not only the United States Attorney’s Office for the Southern District of New York, but also the Offices in all other Districts, any and all government entities and law enforcement agencies, including but not limited to the Federal Bureau of Investigation, Central Intelligence Agency, Drug Enforcement Administration, Immigration and Customs Enforcement Homeland Security Investigations, National Security Agency, and any foreign government and/or intelligence agencies, particularly those with which the U.S. has a cooperative intelligence gathering relationship, i.e., Government Communications Headquarters (“GCHQ”), the British counterpart to the NSA.”

Even in the Brady context, the law is clear that a prosecutor has a duty to learn only of “evidence known to . . . others acting on the government’s behalf in the case.”

The government is not denying they had other means to identify Ulbricht (nor is it denying that it worked with partners like GCHQ on this). Rather, it is just claiming that the FBI officers involved in this prosecution didn’t see those methods.

The Other Blind Spot in NSA’s EO 12333 Privacy Report: Research

Yesterday, I laid out the biggest reason the NSA Privacy Officer’s report on EO 12333 was useless: she excluded most of NSA’s EO 12333 collection — its temporary bulk collection done to feed XKeyscore and its more permanent bulk collection done to hunt terrorists and most other NSA targets — from her report. Instead, Privacy Officer Rebecca Richards’ report only covered a very limited part of NSA’s EO 12333 spying, that targeting people like Angela Merkel.

But I wanted to circle back and note two other things she did which I find telling.

First, note what Richards didn’t do. The standard by which she measured NSA’s privacy efforts is a NIST standard called Fair Information Practice Principles, which include the following:

  • Transparency
  • Individual Participation
  • Purpose Specification
  • Data Minimization
  • Use Limitation
  • Data Quality and Integrity
  • Security
  • Accountability and Auditing

She dismisses the first two because NSA is a spook organization.

Because NSA has a national security mission, the principles of Transparency and Individual Participation are not implemented in the same manner they are in organizations with a more public facing mission.

In the process, she overstates how assiduously NSA lets Congress or DOJ review EO 12333 activities.

For the rest, however, Richards doesn’t — as she should have — assess NSA’s compliance with each category. Had she done so, she would have had to admit that PCLOB found NSA’s retention under the Foreign Intelligence purpose to be far too broad, putting NSA in violation of Purpose Specification; she would have had to admit that NSA gets around Use Limitation with broad permissions to create technical databases and keep all encrypted communications; she would have had to admit that of NSA’s violations, 9% constitute a willful refusal to follow Standard Operating Procedures, a stat that would seem to belie her Accountability claims.

Rather than assessing whether NSA complies with these principles, then, Richards simply checks them off at the end of each of several sections on the SIGINT Production Cycle.

ACQUIRE, Targeting: “The existing civil liberties and privacy protections fall into the following FIPPs: Transparency (to overseers), Purpose Specification, and Accountability and Auditing.”

ACQUIRE, Collection and Processing: “The existing civil liberties and privacy protections fall into three FIPPs categories: Data Minimization, Purpose Specification and Accounting and Auditing.”

ANALYZE: “These existing civil liberties and privacy protections fall into the following FIPPs: Transparency (to overseers), Purpose Specification, Data Minimization, and Accountability and Auditing.”

RETAIN: “These existing civil liberties and privacy protections fall into the following two FIPPs: Data Minimization, and Security.”

DISSEMINATE: “The existing civil liberties and privacy protections fall into the following FIPPs: Use Limitations, Data Minimization, and Accountability and Auditing.”

Then, having laid out how the NSA does some things that fall into some of these boxes at each step of the SIGINT process, she concludes,

CLPO documented NSA’s multiple activities that provide civil liberties and privacy protections for six of the eight FIPPs that are underpinned by its management activities, documented compliance program, and investments in people, training, tools, and technology.

Fact check! Even buying her claim that checking the box for some of these things at each step of the process is adequate for assessing whether NSA fulfills the FIPPs, note that she hasn’t presented any evidence NSA meets NIST’s “Data Quality and Integrity” principle (though that may just be sloppiness on her part, a further testament to the worthlessness of this review).

But there’s another huge problem with this approach.

By fulfilling her privacy review by checking the boxes for the SIGINT Production Cycle (just for the targeted stuff, remember, not for the bulk of what NSA does), Richards leaves out all the other things the NSA does with the world’s data. Most notably, she doesn’t consider the privacy impacts of NSA’s research — what is called SIGDEV — which NSA and its partners do with live data. Some of the most aggressive programs revealed by Edward Snowden’s leaks — especially to support their hacking and infiltration activities — were SIGDEV presentations. Even on FISA programs, SIGDEV is subjected to nowhere near the amount of auditing that straight analysis is.

And the most significant known privacy breach in recent years involved the apparent co-mingling of 3,000 files’ worth of raw Section 215 phone dragnet data with Stellar Wind data on a research server. NSA destroyed it all before anyone could figure out what it was doing there, how it got there, or what scope “3,000” files entailed.

In my obsessions with the poor oversight over the phone dragnet techs, I have pointed to this description several times.

As of 16 February 2012, NSA determined that approximately 3,032 files containing call detail records potentially collected pursuant to prior BR Orders were retained on a server and had been collected more than five years ago in violation of the 5-year retention period established for BR collection. Specifically, these files were retained on a server used by technical personnel working with the Business Records metadata to maintain documentation of provider feed data formats and to perform background analysis to document why certain contact chaining rules were created. In addition to the BR work, this server also contains information related to the STELLARWIND program and files which do not appear to be related to either of these programs. NSA bases its determination that these files may be in violation of BR 11-191 because of the type of information contained in the files (i.e., call detail records), the access to the server by technical personnel who worked with the BR metadata, and the listed “creation date” for the files. It is possible that these files contain STELLARWIND data, despite the creation date. The STELLARWIND data could have been copied to this server, and that process could have changed the creation date to a timeframe that appears to indicate that they may contain BR metadata.

The NSA just finds raw data mingling with data from the President’s illegal program. And that’s all the explanation we get for why!

Well, PCLOB provides more explanation for why we don’t know what happened with that data.

In one incident, NSA technical personnel discovered a technical server with nearly 3,000 files containing call detail records that were more than five years old, but that had not been destroyed in accordance with the applicable retention rules. These files were among those used in connection with a migration of call detail records to a new system. Because a single file may contain more than one call detail record, and because the files were promptly destroyed by agency technical personnel, the NSA could not provide an estimate regarding the volume of calling records that were retained beyond the five-year limit. The technical server in question was not available to intelligence analysts.

This is actually PCLOB being more solicitous than it is in other parts of the report. After all, it’s not just that there was a 5 year data retention limit on this data; there was also a mandate that techs destroy data once they’re done fiddling with it. So this is a double violation.

And yet NSA’s response to finding raw data sitting around places is to destroy it, making it all the more difficult to understand what went on with it?

Richards may be referring to this kind of oopsie when she talks about “spillage” being a risk related to retention.

The civil liberties and privacy risks related to retention are that NSA (1) may possibly retain data that it is no longer authorized to retain; (2) may possibly fail to completely remove data the Agency was not authorized to acquire; and (3) may potentially lose data because of “spillage,” improper intentional disclosure, or malicious exfiltration.

But nowhere does she consider the privacy implications of having a “technical database” data retention exemption even for Section 702 data, and then subjecting that raw data to the most exotic projects NSA’s research staff can think of.

And given that she elsewhere relies on President Obama’s PPD-28 as if it did anything to protect privacy, note that that policy specifically exempts SIGDEV from its limits.

Unless otherwise specified, this directive shall apply to signals intelligence activities conducted in order to collect communications or information about communications, except that it shall not apply to signals intelligence activities undertaken to test or develop signals intelligence capabilities.

We know NSA doesn’t abide by privacy rules for its research function. Not only does that mean a lot of probably legitimate research evades scrutiny, it also creates a space where NSA can conduct spying, in the name of research, that wouldn’t fulfill any of these privacy protections.

That’s a glaring privacy risk. One she chooses not to mention at all in her report.

NSA’s Privacy Officer Exempts Majority of NSA Spying from Her Report on EO 12333 Collection

NSA’s Director of Civil Liberties and Privacy, Rebecca Richards, has another report out, this time on “Civil Liberties and Privacy Protections” provided in the Agency’s EO 12333 programs. As with her previous report on Section 702, this one is almost useless from a reporting standpoint.

The reason why it is so useless is worth noting, however.

Richards describes the scope of her report this way:

This report examines (1) NSA’s Management Activities that are generally applied throughout the Agency and (2) Mission Safeguards within the SIGINT mission when specifically conducting targeted[3] SIGINT activities under E.O. 12333.

[3] In the context of this paper, the phrase “targeted SIGINT activities” does not include “bulk” collection as defined in Presidential Policy Directive (PPD)-28. Footnote 5 states, in part, “References to signals intelligence collected in ‘bulk’ mean the authorized collection of large quantities of signals intelligence data which, due to technical or operational considerations, is acquired without the use of discriminants (e.g., specific identifiers, selection terms, etc.).”

Richards neglects to mention the most important details from PPD-28 on bulk collection: when collection in “bulk” is permitted.

Locating new or emerging threats and other vital national security information is difficult, as such information is often hidden within the large and complex system of modern global communications. The United States must consequently collect signals intelligence in bulk[5] in certain circumstances in order to identify these threats. Routine communications and communications of national security interest increasingly transit the same networks, however, and the collection of signals intelligence in bulk may consequently result in the collection of information about persons whose activities are not of foreign intelligence or counterintelligence value. The United States will therefore impose new limits on its use of signals intelligence collected in bulk. These limits are intended to protect the privacy and civil liberties of all persons, whatever their nationality and regardless of where they might reside.

In particular, when the United States collects nonpublicly available signals intelligence in bulk, it shall use that data only for the purposes of detecting and countering: (1) espionage and other threats and activities directed by foreign powers or their intelligence services against the United States and its interests; (2) threats to the United States and its interests from terrorism; (3) threats to the United States and its interests from the development, possession, proliferation, or use of weapons of mass destruction; (4) cybersecurity threats; (5) threats to U.S. or allied Armed Forces or other U.S. or allied personnel; and (6) transnational criminal threats, including illicit finance and sanctions evasion related to the other purposes named in this section. In no event may signals intelligence collected in bulk be used for the purpose of suppressing or burdening criticism or dissent; disadvantaging persons based on their ethnicity, race, gender, sexual orientation, or religion; affording a competitive advantage to U.S. companies and U.S. business sectors commercially; or achieving any purpose other than those identified in this section.

[5] The limitations contained in this section do not apply to signals intelligence data that is temporarily acquired to facilitate targeted collection. References to signals intelligence collected in “bulk” mean the authorized collection of large quantities of signals intelligence data which, due to technical or operational considerations, is acquired without the use of discriminants (e.g., specific identifiers, selection terms, etc.).

The NSA collects in “bulk” (that is, “everything”), temporarily, to facilitate targeted collection. This refers to the 3-5 day retention of all content and 30 day retention of all metadata from some switches so XKeyscore can sort through it to figure out what to keep.
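That temporary-buffer model — capture everything, age content and metadata out on different clocks, keep only what matches a selector — can be sketched in miniature. This is a toy illustration of the retention logic as publicly described; every class name, field, and TTL value here is my own simplification, not NSA’s.

```python
# A toy sketch of rolling-buffer retention: everything is captured first,
# content and metadata expire on different clocks, and only records
# matching a selector survive into long-term storage.
# All names and numbers are illustrative assumptions.

CONTENT_TTL = 3 * 86400    # full content kept roughly 3-5 days
METADATA_TTL = 30 * 86400  # metadata kept roughly 30 days

class RollingBuffer:
    def __init__(self, selectors):
        self.selectors = set(selectors)
        self.content = []   # (timestamp, full record)
        self.metadata = []  # (timestamp, metadata-only record)
        self.retained = []  # selector matches promoted to long-term storage

    def ingest(self, record, now):
        # Capture first; selection happens after the fact.
        self.content.append((now, record))
        self.metadata.append((now, {k: record[k] for k in ("src", "dst")}))
        if record["src"] in self.selectors or record["dst"] in self.selectors:
            self.retained.append(record)

    def expire(self, now):
        # Age out anything past its retention window.
        self.content = [(t, r) for t, r in self.content if now - t < CONTENT_TTL]
        self.metadata = [(t, m) for t, m in self.metadata if now - t < METADATA_TTL]
```

The key feature, for privacy purposes, is the order of operations: collection precedes selection, so everyone’s traffic transits the buffer whether or not they are a target.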

And the NSA also collects in “bulk” (that is, “everything”) to hunt for the following kinds of targets:

  • Spies
  • Terrorists
  • Weapons proliferators
  • Hackers and other cybersecurity threats
  • Threats to armed forces
  • Transnational criminals (which includes drug cartels as well as other organized crime)

Of course, when NSA collects in “bulk” (that is, “everything”) to hunt these targets, it also collects on completely innocent people because, well, it has collected everything.

So at the start of a 17-page report on how many “civil liberties and privacy protections” the NSA uses with its EO 12333 collection, NSA’s Privacy Officer says up front that what she’s about to report doesn’t apply to NSA’s temporary collection of everything to sort through it, nor does it apply to its more permanent collection of everything to hunt for spies, terrorists, weapons proliferators, hackers, and drug bosses.

That is, the “civil liberties and privacy protections” Richards describes don’t apply to the great majority of what NSA does. And those “civil liberties and privacy protections” don’t kick in until after NSA has collected everything and decided, over the course of 5 days, whether it wants to keep it, and, in some places, kept everything to be able to hunt a range of targets.

This actually shows up in Richards’ report, subtly, at times, as when she emphasizes that her entire “ACQUIRE” explanation focuses on “targeted SIGINT collection.” What that means, of course, is that the process she describes, in which collection only takes place after an NSA analyst has targeted it, doesn’t happen in the majority of cases.

Once you collect and sort through everything, does it really make sense to claim you’re providing civil liberties and privacy protections?

Protect America Act Was Designed to Collect on Americans, But DOJ Hid that from the FISC

The government released a document in the Yahoo dump that makes it clear it intended to reverse target Americans under Protect America Act (and by extension, FISA Amendments Act). That’s the Department of Defense Supplemental Procedures Governing Communications Metadata Analysis.

The document — as released earlier this month and (far more importantly) as submitted belatedly to the FISC in March 2008 — is fairly nondescript. It describes what DOD can do once it has collected metadata (irrespective of where it gets it) and how it defines metadata. It also clarifies that “contact chaining and other metadata analysis do not qualify as the ‘interception’ or ‘selection’ of communications, nor do they qualify as ‘us[ing] a selection term’.”

The procedures do not once mention US persons.

There are two things that should have raised suspicions at FISC about this document. First, DOJ did not include the procedures in the February 20, 2008 collection of documents it submitted after Judge Walton caught it hiding other materials and ordered a complete production; it did not submit them until March 14, 2008.

The signature lines should have raised even bigger suspicions.

[Image: the signature lines of Robert Gates and Michael Mukasey]

First, there’s the delay between the two dates. Robert Gates, signing as Secretary of Defense, signed the document on October 17, 2007. That’s after at least one of the PAA Certifications underlying the Directives submitted to Yahoo (the government is hiding the date of the second Certification for what I suspect are very interesting reasons), but 6 days after Judge Colleen Kollar-Kotelly submitted questions as part of her assessment of whether the Certifications were adequate. Michael Mukasey, signing as Attorney General, didn’t sign the procedures until January 3, 2008: two weeks before Kollar-Kotelly issued her ruling on the certifications, but long after the government started trying to force Yahoo to comply, and even after it made its first ex parte submission to Walton. That was also just weeks before the government redid the Certifications underlying PAA (newly involving FBI in the process) on January 29. I’ll come back to the dates, but the important point is that the government didn’t even finalize these procedures until it was deep into two legal reviews of PAA and in the process of redoing its Certifications.

Moreover, Mukasey dawdled two months before he signed them; he started as AG on November 9, 2007.

Then there’s the fact that the title for his signature line was clearly altered, after the fact.

Someone else was supposed to sign these procedures. (Peter Keisler was Acting Attorney General before Mukasey was confirmed, including on October 17, when Gates signed these procedures.) These procedures were supposed to be approved back in October 2007 (still two months after the first PAA Certifications) but they weren’t, for some reason.

The backup to those procedures — which Edward Snowden leaked in full — may explain the delay.

Those procedures were changed in 2008 to reverse earlier decisions prohibiting contact chaining on US person metadata. 

NSA had tried to get DOJ to approve that change in 2006. But James Baker (who was one of the people who almost quit over the hospital confrontation in 2004 and who is now FBI General Counsel) refused to let them.

After Baker (and Alberto Gonzales) departed DOJ, and after Congress passed the Protect America Act, the spooks tried again. On November 20, 2007, Ken Wainstein and Steven Bradbury tried to get the Acting Deputy Attorney General Craig Morford (not Mukasey, who was already AG!) to approve the procedures. The entire point of the change, Wainstein’s memo makes clear, was to permit the contact chaining of US persons.

The Supplemental Procedures, attached at Tab A, would clarify that the National Security Agency (NSA) may analyze communications metadata associated with United States persons and persons believed to be in the United States.

What the government did, after passage of the PAA, was make it permissible for NSA to figure out whom Americans were emailing.
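Contact chaining itself is just graph traversal over communications metadata: start from seed identifiers and walk outward a fixed number of hops. A minimal sketch, with entirely invented data structures (the actual analytics are far more elaborate, and the hop limit is my assumption):

```python
from collections import deque

def contact_chain(metadata, seeds, hops=2):
    """Illustrative sketch of contact chaining: breadth-first traversal of a
    communications-metadata graph out to a fixed number of hops from seed
    identifiers. `metadata` is an iterable of (sender, recipient) pairs."""
    # Build an undirected adjacency map from the call/email records.
    graph = {}
    for a, b in metadata:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)

    found = {s: 0 for s in seeds}
    queue = deque(seeds)
    while queue:
        node = queue.popleft()
        if found[node] >= hops:
            continue
        for neighbor in graph.get(node, ()):
            if neighbor not in found:
                found[neighbor] = found[node] + 1
                queue.append(neighbor)
    return found  # identifier -> hop distance from a seed
```

The point of the 2008 change described above was precisely that US-person identifiers could now appear as nodes in such a chain rather than being excluded from it.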

And this metadata was — we now know — central to FISCR’s understanding of the program (though perhaps not FISC’s; in an interview today I asked Reggie Walton about this document and he simply didn’t remember it).

As the new declassification of the FISCR opinion makes clear, the linking procedures (that is, contact chaining) NSA used were central to FISCR’s finding that Protect America Act, as implemented in directives to Yahoo, had sufficient particularity to be reasonable.

The linking procedures — procedures that show that the [redacted] designated for surveillance are linked to persons reasonably believed to be overseas and otherwise appropriate targets — involve the application of “foreign intelligence factors.” These factors are delineated in an ex parte appendix filed by the government. They also are described, albeit with greater generality, in the government’s brief. As attested by affidavits of the Director of the National Security Agency (NSA), the government identifies [redacted] surveillance for national security purposes on information indicating that, for instance, [big redaction] Although the FAA itself does not mandate a showing of particularity, see 50 U.S.C. § 1805(b), this pre-surveillance procedure strikes us as analogous to and in conformity with the particularity showing contemplated by Sealed Case.

In fact, these procedures were submitted to FISC and FISCR precisely to support their discussion of particularity! We know the government was using these procedures with PAA because it submitted them to FISC and FISCR in defense of a claim that it wasn’t targeting US persons.

Except, by all appearances, the government neglected to tell FISC and FISCR that the entire reason these procedures were changed, subsequent to the passage of the PAA, was so NSA could go identify the communications involving Americans.

And this program, and the legal authorization for it? It’s all built into the FISA Amendments Act.

Raez Qadir Khan: Hoisting the FBI on Its Own Metadata Problems

As I said earlier, the lawyers defending Pakistani-American Raez Qadir Khan — who is accused of material support of terrorist training leading up to an associate’s May 2009 attack on the ISI in Pakistan — are doing some very interesting things with the discovery they’ve gotten.

Request for Surveillance Authorities

The first thing they did, in a July 14, 2014 filing, was to list all the kinds of surveillance they’ve been shown in discovery, along with the possible authorities that might have been used to conduct that surveillance. The motion is an effort to require the government to describe what it obtained and how.

The table above is my summary of what the motion reveals; it shows only whether a particular kind of surveillance happened during a given year, and gives more specific dates only for one-time events.

The brown (orange going dark!) reflects that emails were turned over in discovery from this period, but that the 2013 search warrant apparently says “authorization to collect emails existed from August 2009 to May 2012.” That’s not necessarily damning; they could get those earlier emails legitimately via a number of avenues that don’t involve “collecting” them. But it is worth noting for reasons I explain below.

The filing itself includes tables with more specific dates, Bates numbers, possible authorities, and — where relevant — search warrant items reliant on the items in question. It also describes surveillance they know to have occurred — further Internet and email surveillance, for example, a 2009 search of Khan’s apartment, as well as surveillance in later 2012 — that was not turned over in discovery.

Effectively, the motion lays out all the possible authorities that might have been used to collect this data and then makes very visible that the criminal search warrant was derivative of it. (There’s a bit of a problem: the warranted March 2013 search actually took place after the indictment, so Khan’s indictment can’t be entirely derivative of this material; it relies largely on the emails.)

I also think some of the authorities may not be comprehensive; for example, the pre-2009 emails may have been a physical FISA search. We also know FISC has permitted the government to collect URL searches under Section 215.

But it’s a damn good summary of the multiple authorities the government might use to obtain such information, by itself a superb demonstration of the many ways the government can obtain and parallel construct evidence.

The filing seems to suggest that the investigation started in fall 2009, some months after Khan’s alleged co-conspirator, Ali Jalil, carried out a May 2009 suicide attack in Pakistan. If that’s right, then the government obtained miscellaneous records (which is not at all surprising; these are things like immigration and PayPal records), email content, and call detail records retroactively. Alternately (Jalil was arrested in the Maldives in April 2006 and interrogated by people presenting themselves as FBI), the government conducted all the other surveillance back to 2005 in real time, but doesn’t want to show Khan’s team it has. In a response to this motion, the government claims that when the surveillance of Khan began is classified.

The motion for a description of which authorities the government used to obtain particular information is still pending.

Motion to Throw Out the Emails

Here’s where things get interesting.

On September 15, Khan’s lawyers submitted a filing moving to throw out all the email evidence (which is the bulk of what has been shown so far and, as I said, most of what the indictment relies on). It argues the 504 emails provided in discovery, spanning February 2005 to February 2012, lack much of the metadata detail necessary to be admitted as authenticated evidence. Some of the problems, but by no means all, stem from FBI having printed out the emails, hand-redacted them, then scanned them and sent them as “electronic production” to Khan’s lawyers.

That argument is highly unlikely to get anywhere on its own, though a declaration from a forensics expert does raise real questions about the inconsistency of the metadata provided in discovery.
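The kind of check at issue is mechanical: does each message carry the header fields an authentic email would have, and are they internally consistent? A hedged sketch of such a check, using Python’s standard email parser; the specific fields flagged are my illustration, not the defense expert’s actual methodology:

```python
from email import message_from_string
from email.utils import parsedate_to_datetime

def metadata_gaps(raw_email):
    """Flag emails whose authentication-relevant header fields are missing
    or unparseable. Illustrative only: a real forensic review would also
    compare Received chains, Message-ID formats, and transport timestamps."""
    msg = message_from_string(raw_email)
    problems = []
    for field in ("Message-ID", "Date", "Received"):
        if msg.get(field) is None:
            problems.append(f"missing {field}")
    # A Date header that cannot be parsed is itself a red flag, since
    # printed-redacted-rescanned productions lose this fidelity.
    date = msg.get("Date")
    if date is not None:
        try:
            parsedate_to_datetime(date)
        except (TypeError, ValueError):
            problems.append("unparseable Date")
    return problems
```

Run over 504 printed-and-rescanned emails, a check like this would surface exactly the inconsistencies the declaration describes.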

But the filing does pose interesting questions that — in conjunction with questions about the authorities used to investigate Khan — may be more fruitful.


The Hemisphere Decks: A Comparison and Some Hypotheses

Last week, Dustin Slaughter published a story using a new deck of slides on the Hemisphere program, the Drug Czar program that gives agencies access to additional telecommunications analytical services to identify phones; the results then get laundered through parallel construction to hide both how those phones were found and the existence of the program itself.

It has some significant differences from the deck released by the New York Times last year.  I’ve tried to capture the key differences here:

140915 Hemisphere Comparison


The biggest difference is that the NYT deck — which must date to no earlier than June 2013 — draws only from AT&T data, whereas the Declaration deck draws from other providers as well (or rather, from switches used by other providers).

In addition, the Declaration deck seems to reflect approval for use in fewer states (given the mention of CA court orders and the recent authorization to use Hemisphere in Washington in the AT&T deck), and seems to offer fewer analytical bells and whistles.

Thus, I agree with Slaughter that his deck predates — perhaps by some time — the NYT/AT&T deck released last year.  That would mean Hemisphere has lost coverage, even while it has gained new bells and whistles offered by AT&T.

While I’m not yet sure this is my theory of the origin of Hemisphere, some dates are worth noting:

From 2002 to 2006, the FBI had telecoms onsite to provide CDRs directly from their systems (the FBI submitted a great number of its requests without any paperwork). One of the services provided — by AT&T — was community of interest tracking. Presumably they were able to track burner phones (described as dropped phones in these decks) as well.
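One plausible way “dropped phone” tracking works is contact-set matching: a number goes silent, and a new number that starts calling a very similar set of contacts is probably the same user. The Jaccard-similarity approach and threshold below are my assumptions about how such community-of-interest matching could work, not a description of AT&T’s actual method:

```python
def likely_replacement(dropped_contacts, candidates, threshold=0.5):
    """Score candidate numbers by how much their contact set overlaps the
    contacts of a dropped (burner) number, using Jaccard similarity.
    `candidates` maps number -> iterable of contacts. Illustrative only."""
    dropped = set(dropped_contacts)
    scored = []
    for number, contacts in candidates.items():
        contacts = set(contacts)
        union = dropped | contacts
        score = len(dropped & contacts) / len(union) if union else 0.0
        if score >= threshold:
            scored.append((number, round(score, 2)))
    # Best match first.
    return sorted(scored, key=lambda x: -x[1])
```

Nothing in this requires content: call detail records alone suffice, which is why the program could run on CDRs collected under voluntary compliance.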

In 2006, FBI shut down the onsite access, but retained contracts with all 3 providers (AT&T, Verizon, and probably Sprint). In 2009, one telecom — probably Verizon — declined to renew its contract for whatever services the contract had covered.

AT&T definitely still has a contract with FBI, and in recent years, it has added more services to what it offers the FBI.

It’s possible the FBI multi-provider access moved under ONDCP (the Drug Czar) in 2007 as a way to retain its authorities without attracting the attention of DOJ’s excellent Inspector General (who is now investigating this in any case). Though I’m not sure that program provided the local call records the deck at least claims it could have offered. I’m not sure that program got to the telecom switches the way the deck seems to reflect. It’s possible, however, that the phone dragnet in place before it was moved to Section 215 in 2006 did have that direct access to switches, and the program retained this data for some years.

The phone dragnet prior to 2006 and NSL compliance (which is what the contracts with AT&T and one other carrier purportedly provide now) are both authorized in significant part (and entirely, before 2006) through voluntary compliance, per David Kris, the NSA IG Report, and the most recent NSL report. That’s a big reason why the government tried to keep this secret — to avoid any blowback on the providers.

In any case, if I’m right that the program has lost coverage (though gained AT&T’s bells and whistles) in the interim, then it’s probably because providers became unwilling, for a variety of reasons (and various legal decisions on location data are surely one of them), to voluntarily provide such information anymore. I suspect that voluntary compliance got even more circumscribed with the release of the first Hemisphere deck last year.

Which means the government is surely scrambling to find additional authorities to coerce this continued service.
