Deconfliction in Dragnet Databases

Hemisphere Deconfliction

I want to return to something that appears in both of the Hemisphere slide decks we’ve seen: Deconfliction.

In addition to helping law enforcement find burner phones and map contact chains, using connections that include location, Hemisphere helps deconflict between multiple investigative teams.

When multiple teams are working the same targets — in war or criminal investigations — you need to be aware of what other teams are doing. In war, this helps to ensure you don’t shoot a friendly. In investigations, it helps to protect turf and combine efforts.

In investigations — especially drug or terrorism ones that rely on informants — it also helps to distinguish legally sanctioned crime — that of informants — from that which no law enforcement agency is directing. And, as the Declaration deck explains, Hemisphere checks new queries against previous ones, and emails requestors if someone has already chained on that contact.

  • Target numbers, as well as every number they call and that call them will be cross checked against other Hemisphere results
  • Notification will be by email if applicable
  • The email provides contact information for all requestors

In other words, in addition to the way it serves as a quick investigative tool, Hemisphere also helps drug investigators to avoid stepping on each other’s toes (or at least communicate better).

Then there’s this:

  • Sensitive case information is masked

This presumably means Hemisphere doesn’t provide any hints about how the original investigator is conducting their investigation, such as whether suspected traffickers are being run as informants or not. That’s the kind of thing that would be “masked.” (Note, this suggests that whoever is running this database would have access to that masked information.)
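
To make the mechanics concrete, here is a minimal sketch of what a deconfliction check like the one the decks describe might look like. The data model, names, and masking behavior are my assumptions for illustration, not anything documented in the slides.

    # Hypothetical sketch of a Hemisphere-style deconfliction check.
    # All names and structures are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class PriorQuery:
        requestor_email: str
        numbers: set      # target number plus every number it called or was called by
        case_notes: str   # sensitive case information

    def deconflict(new_numbers: set, prior_queries: list) -> list:
        """Cross-check a new request against earlier ones; build notifications."""
        notifications = []
        for prior in prior_queries:
            overlap = new_numbers & prior.numbers
            if overlap:
                notifications.append({
                    "notify": prior.requestor_email,  # contact info for the earlier requestor
                    "overlap": sorted(overlap),
                    "case_notes": "[MASKED]",         # sensitive case information is masked
                })
        return notifications

    # A new query on a number someone already chained on surfaces the earlier
    # requestor's contact info, but not their case notes.
    prior = [PriorQuery("agent.a@example.gov", {"5551234567", "5559876543"}, "CI is active")]
    print(deconflict({"5559876543"}, prior))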

I raise all this because it poses questions for other databases involving informants. As I have noted, FBI uses the phone dragnet (and therefore presumably the Internet dragnet, in whatever form and geographic locale it still exists) to identify potential informants. And one thing FBI does with its back door searches during assessments is review actual content collected under traditional FISA and FAA in its quest for informants.

These dragnet databases play a key role in the selection and recruitment of informants to use in terrorism investigations.

But then what happens?

The example of David Headley — who played a crucial role in one of the most lethal terrorist attacks since 9/11, the Mumbai attack, the early planning of which took place while he served as an informant for the DEA — is instructive. The FBI likes to boast that Section 702 helped stop Headley’s plot against Danish cartoonists. But Headley’s case should, instead, raise real questions about how a terrorist could plan a complicated terrorist attack while his known terrorist associates were, presumably, under surveillance, without detection by the people supposedly handling him.

We know that the metadata dragnets, at least, put some identifiers on a “defeat list.” There’s reason to suspect (in part from the syntax of redacted references to the defeat list) they do so not just for high volume numbers, but for sensitive numbers (perhaps Congress, for example). But I also think they may put informants on a defeat list too. That’s, in part, because if you didn’t do so their handlers would become two degrees from terrorist suspects, which might have all sorts of unintended consequences. That’s just an educated guess, mind you, but if I’m right it would have some interesting implications.
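
If my guess is right, the mechanics would be simple: strip defeat-listed identifiers during chaining, so an informant never shows up as a first hop and his handler never becomes a second hop. A toy sketch, with the call graph and defeat list entirely invented:

    # Hypothetical two-hop contact chaining with a defeat list applied.
    # The call graph and the defeat list are invented for illustration.
    call_graph = {
        "seed":      {"informant", "suspect1"},
        "informant": {"handler"},   # would otherwise put the handler 2 hops out
        "suspect1":  {"suspect2"},
    }

    defeat_list = {"informant"}  # high-volume, sensitive, or (my guess) informant numbers

    def chain(seed: str, hops: int = 2) -> set:
        """Breadth-first contact chain, skipping defeat-listed identifiers."""
        frontier, seen = {seed}, {seed}
        for _ in range(hops):
            nxt = set()
            for number in frontier:
                for contact in call_graph.get(number, set()):
                    if contact in defeat_list:
                        continue  # defeat-listed contacts never enter the results
                    if contact not in seen:
                        nxt.add(contact)
                        seen.add(contact)
            frontier = nxt
        return seen - {seed}

    print(chain("seed"))  # {'suspect1', 'suspect2'} — the handler never appears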

That doesn’t appear to have prevented DEA from tracking Manssor Arbabsiar, the Scary Iran Plotter (I assume he at least used to be an informant, because there’s little else that would explain why the cousin of a top Quds Force Member busted for drug possession would nevertheless get citizenship, and deconfliction discussions show up in what was probably his immigration file).

But it would raise really big questions in other cases.

One way or another they need to give informants special treatment in databases — as they apparently do in Hemisphere. How they do so, however, may have real consequences for the efficacy of the entire dragnet.

Maybe the Spooks Don’t Want FTC to Know NSA’s Tricks?

In awesome news, the Federal Trade Commission has hired Ashkan Soltani — the tech expert who helped Bart Gellman on many of his most important Snowden scoops — as its new Chief Technology Officer.

The news has elicited wails from NSA’s main mouthpieces, Stewart Baker and Michael Hayden.

“I’m not trying to demonize this fella, but he’s been working through criminally exposed documents and making decisions about making those documents public,” said Michael Hayden, a former NSA director who also served as CIA director from 2006 to 2009. In a telephone interview with FedScoop, Hayden said he wasn’t surprised by the lack of concern about Soltani’s participation in the Post’s Snowden stories. “I have no good answer for that.”

[snip]

Stewart Baker, a former NSA general counsel, said, while he’s not familiar with the role Soltani would play at the FTC, there are still problems with his appointment. “I don’t think anyone who justified or exploited Snowden’s breach of confidentiality obligations should be trusted to serve in government,” Baker said.

I find Hayden’s wails especially disgusting, given the way — it is now clear — the government spent so much effort covering up how he extended the illegal wiretap program in March 2004. I mean, I’m not trying to demonize the fella, but he’s a criminal, and yet he’s complaining about the press reporting on abuses?

That said, I’m curious whether this isn’t the real reason there seems to be organized pushback against Soltani’s hire.

Soltani is scheduled to give a presentation Nov. 19 at the Strata+Hadoop World conference in Barcelona, Spain, on “how commercial tracking enables government surveillance.” According to the conference website, Soltani’s presentation will explore how “the dropping costs of bulk surveillance is aiding government eavesdropping, with a primary driver being how the NSA leverages data collected by commercial providers to collect information about innocent users worldwide.”

At FTC, Soltani will be in a role where he can directly influence the kind of regulatory pressure placed on data collectors to protect user privacy. He understands — probably far more than we know from the WaPo stories — how NSA is capitalizing on already collected data. Which means he may be able to influence how much remains available to the spooks.

So maybe all this wailing is an effort to sustain big commercial data’s unwitting support for big spooky data?

Wyden Doesn’t Know What NSA Does with Its Dragnet Overseas

Kim Zetter has an interview with Ron Wyden that goes over a number of things I have already reported. She describes him hedging when asked when he first learned of the phone dragnet; as I have shown, the government did not brief the Internet dragnet to the Intelligence Committees, not even during the PATRIOT reauthorization in 2005. Wyden describes the months — “literally months” — during which he tried to get the Intelligence Community to correct what Keith Alexander had said at DefCon before he asked James Clapper the question he is now so famous for; I laid that out here and here. Wyden describes how — “incredible as it sounds” — the Bush Administration shut down NSA’s back door search authorities, which I noted here. Zetter and Wyden also discuss how to manage zero day exploits.

But the most important detail in the interview, in my opinion, comes where Wyden makes clear he doesn’t know enough about what the government does under EO 12333.

But no one, not even lawmakers on Capitol Hill, have a full grasp of how EO 12333 is being used.

Wyden says, “I’m not sure we’re at the bottom or close to it” when it comes to understanding how it’s being used. Wyden is suspicious that the White House and intelligence community have agreed to halt the phone records collection program, in the wake of intense criticism, only because the spy agency has other tricks to get the same data, possibly through EO 12333.

“The intelligence community is endorsing eliminating bulk-collection of phone records, and it makes me wonder what are the authorities under 12333 [through which they might do the same thing]?” he asks. “You can get a bill passed and everybody says, ‘Hey we banned bulk collection.’… [Then] we see the government go off in another direction. I will tell you that I don’t know today the full ramifications of 12333 on bulk collection. But I’m going to be spending a lot of time digging into it.”

I had pointed to Wyden’s concern about this issue when he raised it at the turn of the year and noted that the Administration made public its belief it can engage in the phone and Internet dragnet without any Congressional authorization just as the USA Freedom Act debate resumed.

But Wyden’s confirmation that he doesn’t know what the government does overseas raises questions about, first, whether he knows what the government did with the Internet dragnet when he and Udall convinced the government to end the domestic collection of it in 2011. But it also underscores just how empty are the promises that there is adequate oversight of the NSA’s work.

If someone on the Intelligence Committees (a critic, admittedly, but he is one of the legal overseers of the Agency) doesn’t know, and doesn’t think he’d necessarily know, if the government replaced a congressionally limited program with the same program overseas, that means there’s no way the Intel Committees could ensure that the government had stopped practices Congress told it to stop.

Of course, given that Wyden got legislation passed in 2004 defunding any data mining of Americans only to have the Bush authorized dragnet continue, that must be a familiar position for the Senator.

FISCR Used an Outdated Version of EO 12333 to Rule Protect America Act Legal

If the documents relating to Yahoo’s challenge of Protect America Act released last month are accurate reflections of the documents actually submitted to the FISC and FISCR, then the government submitted a misleading document on June 5, 2008 that was central to FISCR’s ultimate ruling.

As I laid out here in 2009, FISCR relied on the requirement in EO 12333 that the Attorney General determine there is probable cause to believe a wiretapping technique used in the US is directed against a foreign power, in order to judge that the Protect America Act met probable cause requirements.

The procedures incorporated through section 2.5 of Executive Order 12333, made applicable to the surveillances through the certifications and directives, serve to allay the probable cause concern.

The Attorney General hereby is delegated the power to approve the use for intelligence purposes, within the United States or against a United States person abroad, of any technique for which a warrant would be required if undertaken for law enforcement purposes, provided that such techniques shall not be undertaken unless the Attorney General has determined in each case that there is probable cause to believe that the technique is directed against a foreign power or an agent of a foreign power.

44 Fed. Reg. at 59,951 (emphasis supplied). Thus, in order for the government to act upon the certifications, the AG first had to make a determination that probable cause existed to believe that the targeted person is a foreign power or an agent of a foreign power. Moreover, this determination was not made in a vacuum. The AG’s decision was informed by the contents of an application made pursuant to Department of Defense (DOD) regulations. See DOD, Procedures Governing the Activities of DOD Intelligence Components that Affect United States Persons, DOD 5240.1-R, Proc. 5, Pt. 2.C.  (Dec. 1982).

Yahoo didn’t buy this argument. It had a number of problems with it, notably that nothing prevented the government from changing Executive Orders.

While Executive Order 12333 (if not repealed), provides some additional protections, it is still not enough.

[snip]

Thus, to the extent that it is even appropriate to examine the protections in the Executive Order that are not statutorily required, the scales of the reasonableness determination sway but do not tip towards reasonableness.

Yahoo made that argument on May 29, 2008.

Sadly, Yahoo appears not to have noticed the best argument that Courts shouldn’t rely on EO 12333 because the President could always change it: Sheldon Whitehouse’s revelation on December 7, 2007 (right in the middle of this litigation) that OLC had ruled the President could change it in secret and not note the change publicly. Whitehouse strongly suggested that the Executive in fact had changed EO 12333 without notice to accommodate its illegal wiretap program.

But the government appears to have intentionally withheld further evidence about how easily it could change EO 12333 — and in fact had, right in the middle of the litigation.

This is the copy of the Classified Annex to EO 12333 that (at least according to the ODNI release) the government submitted to FISCR in a classified appendix on June 5, 2008 (that is, after Yahoo had already argued that an EO, and the protections it affords, might change). It is a copy of the original Classified Appendix signed by Ed Meese in 1988.

As I have shown, Michael Hayden modified NSA/CSS Policy 1-23, which includes and incorporates EO 12333, on March 11, 2004, the day after the hospital confrontation. The content of the Classified Annex released in 2013 appears to be identical, in its unredacted bits, to the original as released in 1988 (see below for a list of the different things redacted in each version). So the actual content of what the government presented may (or may not) be a faithful representation of the Classified Appendix as it currently existed.

But the version of NSA/CSS Policy 1-23 released last year (starting at page 110) provides this modification history:

This Policy 1-23 supersedes Directive 10-30, dated 20 September 1990, and Change One thereto, dated June 1998. The Associate Director for Policy endorsed an administrative update, effective 27 December 2007 to make minor adjustments to this policy. This 29 May 2009 administrative update includes changes due to the FISA Amendments Act of 2008 and in core training requirements.

That is, Michael Hayden’s March 11, 2004 modification of the Policy reverted the Directive to the form it had before the two changes made under Clinton.

Just as importantly, the modification history reflects “an administrative update” making “minor adjustments to this policy” effective December 27, 2007 — a month and a half after this challenge started.

By presenting the original Classified Appendix — to which Hayden had apparently reverted in 2004 — rather than the up-to-date Policy, the government was presenting what they were currently using. But they hid the fact that they had made changes to it right in the middle of this litigation. A fact that would have made it clear that Courts can’t rely on Executive Orders to protect the rights of Americans, especially when they include Classified Annexes hidden within Procedures.

In its language relying on EO 12333, FISCR specifically pointed to DOD 5240.1-R. The Classified Annex to EO 12333 is required under part of that regulation, the same part invoked for the August 27, 2007 PAA compliance.

That is, this Classified Annex is a part of the Russian dolls of interlocking directives and orders that implement EO 12333.

And they were changing, even as this litigation was moving forward.

Only, the government appears to have hidden that information from the FISCR.

Update: Clarified that NSA/CSS Policy 1-23 is what got changed.

Update: Hahaha. The copy of DOD 5240.1-R which the government submitted on December 11, 2007, still bears the cover sheet labeling it as an Annex to NSA/CSS Directive 10-30. Which of course had been superseded in 2004.

Note how they cut off the date to hide that it was 1990?

Why Isn’t FBI Investigating the Hackers Who Broke into Google’s Cables?

At his Brookings event yesterday, Jim Comey claimed that there is a misperception, in the wake of the Snowden releases, about how much data the government obtains.

In the wake of the Snowden disclosures, the prevailing view is that the government is sweeping up all of our communications. That is not true. And unfortunately, the idea that the government has access to all communications at all times has extended—unfairly—to the investigations of law enforcement agencies that obtain individual warrants, approved by judges, to intercept the communications of suspected criminals.

[snip]

It frustrates me, because I want people to understand that law enforcement needs to be able to access communications and information to bring people to justice. We do so pursuant to the rule of law, with clear guidance and strict oversight. 

He goes on to pretend that Apple and Google are default-encrypting their phones solely as a marketing gimmick, some arbitrary thing crazy users want.

Both companies are run by good people, responding to what they perceive is a market demand. But the place they are leading us is one we shouldn’t go to without careful thought and debate as a country.

[snip]

Encryption isn’t just a technical feature; it’s a marketing pitch. But it will have very serious consequences for law enforcement and national security agencies at all levels. Sophisticated criminals will come to count on these means of evading detection. It’s the equivalent of a closet that can’t be opened. A safe that can’t be cracked. And my question is, at what cost?

He ends with a plea that “our private sector partners … consider changing course.”

But we have to find a way to help these companies understand what we need, why we need it, and how they can help, while still protecting privacy rights and providing network security and innovation. We need our private sector partners to take a step back, to pause, and to consider changing course.

There’s something missing from Comey’s tale.

An explanation of why the FBI has not pursued the sophisticated criminals who stole Google’s data overseas.

At a recent event with Ron Wyden, the Senator asked Google Chairman Eric Schmidt to weigh in on the phone encryption “kerfuffle.” And Schmidt was quite clear: the reason Google and Apple are doing this is because the NSA’s partners in the UK stole their data, even while they had access to it via PRISM.

The people who are criticizing this should have expected this. After Google was attacked by the British version of the NSA, we were annoyed and so we put end-to-end encryption at rest, as well as through our systems, making it essentially impossible for interlopers — of any kind — to get that information.

Schmidt describes the default encryption on the iPhone, notes that encryption has been available on Android phones for the last 3 years, and says it will soon be standard, just as it is on the iPhone.

Law enforcement has many many ways of getting information that they need to provide this without having to do it without court orders and with the possible snooping conversation. The problem when they do it randomly as opposed to through a judicial process is it erodes user trust.

If everything Comey said were true, if this were only about law enforcement getting data with warrants, Apple — and Google especially — might not have offered their customers the privacy they deserved. But it turns out Comey’s fellow intelligence agency decided to just go take what they wanted.

And FBI did nothing to solve that terrific hack and theft of data.

I guess FBI isn’t as interested in rule of law as Comey says.

I Con the Record’s International Privacy Guidelines Swallowed Up by Exceptions

Sometimes I Con the Record outdoes itself.

On Tuesday, the Guardian noted a scathing report UN Counterterrorism special rapporteur Ben Emmerson issued last month attacking British and US collection of bulk communications.

“Merely to assert – without particularization – that mass surveillance technology can contribute to the suppression and prosecution of acts of terrorism does not provide an adequate human rights law justification for its use. The fact that something is technically feasible, and that it may sometimes yield useful intelligence, does not by itself mean that it is either reasonable or lawful.”

[snip]

“It is incompatible with existing concepts of privacy for states to collect all communications or metadata all the time indiscriminately. The very essence of the right to the privacy of communication is that infringements must be exceptional, and justified on a case-by-case basis.”

Today, I Con the Record released a “Status Report” on an initiative President Obama ordered in his PPD-28 back in January to extend privacy protections to foreigners.

As we work to meet the January 2015 deadline, PPD-28 called on the Director of National Intelligence to prepare an interim report on the status of our efforts and to evaluate, in coordination with the Department of Justice and the rest of the Intelligence Community, additional retention and dissemination safeguards.

The DNI’s interim report is now being made available to the public in line with our pledge to share as much information about sensitive intelligence activities as is possible, consistent with our national security.

One thing this interim report requires is that “elements shall publicly release their PPD-28 implementation policies and procedures to the maximum extent possible.” Which requirement, you might assume, this release fulfills.

Which is why it’s so curious I Con the Record chose not to release an unclassified report mandated and mandating transparency — dated July 2014 — until October 2014.

Lest I be called a cynic, let me acknowledge that there are key parts of this that may represent improvements (or may not). The report asserts:

  • Foreigners will be treated with procedures akin to — though not identical to — those imposed by Section 2.3 of EO 12333
  • Just because someone is a foreigner doesn’t mean their information is foreign intelligence; the IC should “permanently retain or disseminate such personal information only if the personal information relates to an authorized intelligence requirement, is reasonably believed to be evidence of a crime, or meets one of the other standards for retention or dissemination identified in section 2.3” of EO 12333
  • The IC should consider adopting (though is not required to) retention periods used with US person data for foreign personal information (which is 5 years); the IC may get extensions, but only in 5-year chunks of time
  • When disseminating “unevaluated personal information,” the IC should make that clear so the recipient can protect it as such

Those are good things! Yay us!

There are, however, a series of exceptions to these rules.

First, the guidelines in this report restate PPD-28’s unbelievably broad approval of the use of bulk data, in full. The report does include this language:

[T]he procedures must also reflect the limitations on the use of SIGINT collected in bulk. Moreover, Intelligence Community element procedures should include safeguards to satisfy the requirements of this section. In developing procedures to comply with this requirement, the Intelligence Community must be mindful that to make full use of intelligence information, an Intelligence Community element may need to use SIGINT collected in bulk together with other lawfully collected information. In such situations, Intelligence Community elements should take care to comply with the limitations applicable to the use of bulk SIGINT collection.

Unless I’m missing something, the only “limits” in this section are those limiting the use of bulk collection to almost all of NSA’s targets, including counterterrorism, cybersecurity, and crime, among other things. Thus, the passage not only reaffirms what amounts to a broad permission to use bulk, but then attaches those weaker handling rules to anything used in conjunction with bulk.

Then there are the other exceptions. The privacy rules in this document don’t apply to:

  • Evaluated intelligence (exempting foreigners’ data from the most important treatment US person data gets, minimization in finished intelligence reports; see footnote 3)
  • Personal information collected via other means than SIGINT (excluding most of what the CIA and FBI do, for example; see page 1)
  • Information collected via SIGINT not collecting communications or information about communications (seemingly excluding things like financial dragnets and pictures and potentially even geolocation, among a great many other things; see footnote 2)

And, if these procedures aren’t loosey goosey enough for you, the report includes this language:

It is important that elements have the ability to deviate from their procedures when national security requires doing so, but only with approval at a senior level within the Intelligence Community element and notice to the DNI and the Attorney General.

OK then.

Congratulations world! We’re going to treat you like Americans. Except in the majority of situations when we’ve decided not to grant you that treatment. Rest easy, though, knowing your data is sitting in a database for only 5 years, if we feel like following that rule.

Richard Burr Prepares to Capitalize on Refusing to Exercise Intelligence Oversight

In James Risen’s new book, he provides new details on what happened to the NSA whistleblowers — Bill Binney, Kirk Wiebe, Ed Loomis, Thomas Drake — who tried to stop President Bush’s illegal wiretap program, adding to what Jane Mayer wrote in 2011. He pays particular attention to the effort Diane Roark made, as a staffer overseeing NSA on the House Intelligence Committee, to alert people that the Agency was conducting illegal spying on Americans.

As part of that, Risen describes an effort Roark made to inform another Congressman of the program, one who had not been briefed: Richard Burr.

Despite the warning from (HPSCI’s Republican Staff Director Tim) Sample not to talk with anyone else on the committee about the program, she privately warned Chris Barton, the committee’s new general counsel, that “there was an NSA program of questionable legality and that it was going to blow up in their faces.” In early 2002, Roark also quietly arranged a meeting between Binney, Loomis, and Wiebe and Richard Burr, a North Carolina Republican on the House Intelligence Committee. Binney told Burr everything they had learned about the NSA wiretapping program, but Burr hardly said a word in response. Burr never followed up on the matter with Roark, and there is no evidence he ever took any action to investigate the NSA program.

I’m not actually surprised that Burr learned the Intelligence Community was engaging in illegal behavior and did nothing. From what we’ve seen in his response to torture, he has served entirely to help CIA cover up the program and protect the torturers. Indeed, in his treatment of John Brennan’s confirmation, he made efforts to ensure Brennan would have to protect the torturers too.

So it’s no surprise that Burr heard details of an illegal program and ignored them.

Still, it’s worth highlighting this detail because, if Democrats do lose the Senate as they are likely to do in November, Richard Burr will most likely become Senate Intelligence Committee Chair. While Dianne Feinstein may be a badly flawed Chair overseeing the IC, Burr will be a nightmare, unloosing them to do whatever they’re ordered.

That’s the kind of career advancement that comes to a guy who remains silent about wrongdoing.

A Remarkable Date for the Virgin Birth of the Silk Road Investigation

As Wired first reported, there’s been an interesting exchange in the Silk Road prosecution. In September, the former FBI Agent who helped to bust accused Silk Road operator Ross Ulbricht, Christopher Tarbell, submitted a declaration explaining the genesis of the investigation by claiming the FBI got access to the Silk Road server because it became accessible via a non-Tor browser. In response, Ulbricht lawyer Joshua Horowitz submitted a declaration claiming Tarbell’s claims were implausible because the FBI wouldn’t have been able to get into Silk Road’s back end. The government responded by claiming that even if it did hack the website, it would not have been illegal.

Given that the SR Server was hosting a blatantly criminal website, it would have been reasonable for the FBI to “hack” into it in order to search it, as any such “hack” would simply have constituted a search of foreign property known to contain criminal evidence, for which a warrant was not necessary.

On Friday, Judge Katherine Forrest rejected Ulbricht’s efforts to throw out the evidence from the alleged hack, accepting the government’s argument that Ulbricht had no expectation of privacy on that server regardless of when and how the government accessed it.

The temporal problems with the government’s story

Most of the coverage on this exchange has focused on the technical claims. But just as interesting are the temporal claims. Horowitz summarizes that problem this way:

[S]everal critical files provided in discovery contain modification dates predating the first date Agent Tarbell claims Icelandic authorities imaged the Silk Road Server, thereby casting serious doubt on the chronology and methodology of his account;

The government claims that server was first imaged on July 23, 2013.

As I’ll lay out below, Horowitz and Tarbell provide a lot of details suggesting something — perhaps the imaging of the server, perhaps something more — happened six weeks earlier.

But before we get there, consider the date: June 6, 2013.

June 6, 2013 was the day after the afternoon publication of the first Snowden leak, and the day before the Guardian made it clear their leak included cyberwar materials.

That is, the FBI claims to have officially “found” the Silk Road server at the same time the Snowden leaks started, even while they date their investigation to 6 weeks later.

The June 6 materials

FBI’s Tarbell is much vaguer about this timing than Ulbricht’s team is. As Tarbell tells it, on some unknown date in early June 2013, he and a colleague were sniffing Silk Road data when they discovered an IP not known to be tied to Tor.

In or about early June 2013, another member of CY-2 and I closely examined the traffic data being sent from the Silk Road website when we entered responses to the prompts contained in the Silk Road login interface.

That led them to look further, according to Tarbell. When he typed the IP into a non-Tor browser, he discovered it was leaking.

When I typed the Subject IP Address into an ordinary (non-Tor) web browser, a part of the Silk Road login screen (the CAPTCHA prompt) appeared. Based on my training and experience, this indicated that the Subject IP Address was the IP address of the SR Server, and that it was “leaking” from the SR Server because the computer code underlying the login interface was not properly configured at the time to work on Tor.
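
In principle, the check Tarbell describes is trivial to reproduce: request the bare IP over the open internet, with no Tor involved, and see whether any of the hidden service’s login page comes back. A sketch, with the IP (drawn from the documentation range, echoing the “.49” shorthand in the filings) and the page marker as placeholders:

    # Hypothetical reproduction of the leak check Tarbell describes:
    # fetch a bare IP over the clearnet (no Tor) and look for a piece of
    # the hidden service's login page. IP and marker are placeholders.
    import urllib.request

    SUBJECT_IP = "192.0.2.49"  # documentation-range placeholder, not the real server

    try:
        with urllib.request.urlopen(f"http://{SUBJECT_IP}/", timeout=10) as resp:
            body = resp.read().decode(errors="replace")
        if "captcha" in body.lower():
            print("Login-page content is reachable outside Tor: the server is leaking.")
    except OSError as exc:
        print(f"No leak observable this way: {exc}")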

That led the government to ask Iceland, on June 12, to image the server. Iceland didn’t do so, according to the official narrative, until the next month.

The defense doesn’t buy this — in part, because Tarbell, by his own account, didn’t adhere to the standard forensic practice of keeping copies of his packet captures.

Failure to preserve packet logs recorded while investigating the Silk Road servers would defy the most basic principles of forensic investigative techniques.

[snip]

[T]he government’s position is that former SA Tarbell conducted his investigation of Silk Road, and penetrated the Silk Road Server, without documenting his work in any way.

According to the government, the only record of Tarbell’s access to the server from this period is from access logs dated June 11.

[A]n excerpt of 19 lines from Nginx access logs, attached hereto as Exhibit 5, supposedly showing law enforcement access to the .49 server from a non-Tor IP address June 11, 2013, between 16:58:36 and 17:00:40. According to the Government, this is the only contemporaneous record of the actions described by the Tarbell Declaration at ¶¶ 7-8.

Given that this bears a particular date, I find it all the more curious that Tarbell doesn’t date when he was doing the packet sniffing.
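
For readers who haven’t handled them: Nginx access logs in the default “combined” format timestamp every request to the second, which is why 19 lines of them can pin Tarbell’s access to a two-minute window on a specific day. An invented example line:

    # Default Nginx "combined" format:
    # $remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"
    203.0.113.7 - - [11/Jun/2013:16:58:36 +0000] "GET / HTTP/1.1" 200 1024 "-" "Mozilla/5.0"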

There are a number of other details that point back to that June 6 date. Perhaps most significant is that on June 6 Iceland imaged a server Silk Road had earlier been using.

There are a total of 4 tarballs in the first item of discovery: home, var, all, and orange21 – all contained in .tar.gz files. The mtime for orange21.tar.gz is consistent with the July 23, 2013 image date. However, the other 3 tarballs have an mtime of June 6, 2013, as shown below:

  • root 30720 Jun 6 2013 home.tar.gz
  • root 737095680 Jun 6 2013 var.tar.gz
  • root 1728276480 Jun 6 2013 all.tar.gz
  • root 22360048285 Jul 23 2013 orange21.tar.gz

The modification date of the tarballs is consistent with an imaging date of June 6, 2013, a full six weeks before the July 23, 2013, imaging of the .49 Server, a fact never mentioned in the Tarbell Declaration.
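
The defense’s observation is easy to reproduce against any set of delivered tarballs: read each file’s modification time and compare it to the claimed imaging date. A minimal sketch, assuming the four file names from the filing sit in the working directory:

    # Sketch of the mtime check Horowitz describes: read each tarball's
    # modification time and compare it to the claimed July 23, 2013 imaging date.
    # File names are from the filing; the directory is assumed.
    import os
    from datetime import datetime, timezone

    CLAIMED_IMAGING = datetime(2013, 7, 23, tzinfo=timezone.utc)

    for name in ["home.tar.gz", "var.tar.gz", "all.tar.gz", "orange21.tar.gz"]:
        mtime = datetime.fromtimestamp(os.stat(name).st_mtime, tz=timezone.utc)
        flag = "" if mtime.date() >= CLAIMED_IMAGING.date() else "  <-- predates claimed imaging"
        print(f"{name}: {mtime:%Y-%m-%d}{flag}")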

Though — as the defense points out — Tarbell never mentioned that earlier imaging. He notes an earlier “lead” on the Silk Road server that resolved by May, and he notes that after Ulbricht’s arrest they obtained records of him noting leaks in the server.

5 After Ulbricht’s arrest, evidence was discovered on his computer reflecting that IP address leaks were a recurring problem for him. In a file containing a log Ulbricht kept of his actions in administering the Silk Road website, there are multiple entries discussing various leaks of IP addresses of servers involved in running the Silk Road website and the steps he took to remedy them.  For example, a March 25, 2013 entry states that the server had been “ddosd” – i.e., subjected to a distributed denial of service attack, involving flooding the server with traffic – which, Ulbricht concluded, meant “someone knew the real IP.” The entry further notes that it appeared someone had “discovered the IP via a leak” and that Ulbricht “migrated to a new server” as a result. A May 3, 2013 entry similarly states: “Leaked IP of webserver to public and had to redeploy/shred [the server].” Another entry, from May 26, 2013, states that, as a result of changes he made to the Silk Road discussion forum, he “leaked [the] ip [address of the forum server] twice” and had to change servers.

[snip]

7 Several months earlier, the FBI had developed a lead on a different server at the same Data Center in Iceland (“Server-1”), which resulted in an official request for similar assistance with respect to that server on February 28, 2013. See Ex. B. Due to delays in processing the request, Icelandic authorities did not produce traffic data for Server-1 to the FBI until May 2013. See Ex. A. By the time the FBI received the Server-1 traffic data, there was little activity on Server-1, indicating that it was no longer hosting a website. (As a result, the FBI did not request that Icelandic authorities proceed with imaging Server-1.) There was still some outbound Tor traffic flowing from Server-1, though, consistent with it being used as a Tor node; yet Server-1 was not included in the public list of Tor nodes, see supra n.4. Based on this fact, I believed, by the time of the June 12 Request, that the administrator of Silk Road was using Server-1 as a Tor “bridge” when connecting to the SR Server, as indicated in the June 12 Request. See Ex. A, at 1. (A Tor “bridge” is a private Tor node that can be used to access the Tor network, as opposed to using a public Tor node that could be detected on one’s Internet traffic. See Tor: Bridges, available at http://torproject.org/docs/bridges.) To be clear, however, the traffic data obtained for Server-1 did not reflect any connection to, or otherwise lead to the identification of, the Subject IP Address. The Subject IP Address was independently identified solely by the means described above – i.e., by examining the traffic data sent back from the Silk Road website when we interacted with its user login interface.

The two other details that point to June 6 may not actually exonerate Ulbricht. Silk Road’s live-ssl config file was altered on June 7, which is the earliest date for the site configuration provided in discovery (though page 23 has some additional dates).

The mtime for the live-ssl configuration file provided in Item 1 of discovery is June 7, 2013, and the phpmyadmin configuration is July 6, 2013.8

8 Since Item 1 is the oldest image provided in discovery the defense does not have site configuration data prior to June 7, 2013.

And, as Horowitz reiterates, the earliest date for which the defense was provided discovery of a server imaging was June 6.

According to the government, the earliest image was captured June 6, 2013, and the latest in November 2013.

From a technical standpoint, I’m not sure what to make of this.

A remarkable coincidence

It’s clear, however, that FBI was tracking Silk Road well before June, and for some reason decided to make June the official start date of their investigation (and, perhaps more significantly, the official discovery start date; they’ve refused earlier discovery because it won’t be used at trial). At the same time, Ulbricht’s defense seems reluctant to explain why they’re asking for earlier discovery; perhaps that’s because they’d have to admit Ulbricht was aware of probes of the website before then. Forrest rejected their argument because Ulbricht refused to submit a declaration that this was his server.

But I am rather struck by the timing. As I said, the first Edward Snowden stories — the June 5, 2013 Verizon release, which could have no tie to the Silk Road investigation, and, the next day, the WaPo and Guardian PRISM releases (there were very late Google and Facebook requests that seem like parallel construction, but since Ulbricht is a US citizen, his communications should not have been available via PRISM) — came roughly the day before Iceland imaged the other server.

I asked both Glenn Greenwald and Bart Gellman, and it seems the earliest the government could have had official notice of that story may have been late on June 4 though probably June 5 (things get funny with the Guardian, apparently, because of Greenwich Mean Time). A more relevant leak to the Silk Road investigation was the President’s Policy Directive on cyberwar — which Guardian published on June 7 (they may not have warned the government until that morning however).

So it may all be one big coincidence — that the government created a virgin birth for the Silk Road investigation that happened to be the same day that a torrent of leaks on the NSA and GCHQ started, ultimately revealing things like the government’s targeting of the Tor network (just days after Ulbricht was arrested on October 2, 2013).

But it certainly seems possible that those investigating Silk Road felt the need to begin to roll up the investigation as that torrent of leaks started, perhaps worrying that the methods they (or GCHQ) were using might be exposed before they had collected the evidence.

Update: A few more points about this. My suspicion is that, if there is a tie between the Snowden leaks and the Silk Road investigation, it stems from the government’s recognition that some of the methods it used to find Ulbricht would become known through Snowden’s leaks, so it moved to establish an alternate means of discovery before Ulbricht might learn of those actual methods. As one example, recall that subsequent to Snowden’s leaks about XKeyscore, Jacob Appelbaum got information showing XKeyscore tracks those who use Tor. While there are a number of things it seems Ulbricht’s lawyers believe were parallel constructed (unnamed “law enforcement officers” got warrants for his Gmail and Facebook accounts in September), they most aggressively fought the use of a Title III Pen Register to track IP addresses personally associated with Ulbricht, also in September. That information would seem to have been available via other means, especially XKeyscore, particularly since, as encrypted traffic, Ulbricht’s communications could be retained indefinitely under NSA’s minimization procedures.

Additionally, the language the government used to refuse information on a range of law enforcement and spying agencies sure sounds like they clean teamed this investigation.

The Government also objects to the unbounded definition of the term “government” set forth in the September 17 Requests. Specifically, the requests ask the prosecution to search for information within “not only the United States Attorney’s Office for the Southern District of New York, but also the Offices in all other Districts, any and all government entities and law enforcement agencies, including but not limited to the Federal Bureau of Investigation, Central Intelligence Agency, Drug Enforcement Administration, Immigration and Customs Enforcement Homeland Security Investigations, National Security Agency, and any foreign government and/or intelligence agencies, particularly those with which the U.S. has a cooperative intelligence gathering relationship, i.e., Government Communications Headquarters (“GCHQ”), the British counterpart to the NSA.”

Even in the Brady context, the law is clear that a prosecutor has a duty to learn only of “evidence known to . . . others acting on the government’s behalf in the case.”

The government is not denying they had other means to identify Ulbricht (nor is it denying that it worked with partners like GCHQ on this). Rather, it is just claiming that the FBI officers involved in this prosecution didn’t see those methods.

The Other Blind Spot in NSA’s EO 12333 Privacy Report: Research

Yesterday, I laid out the biggest reason the NSA Privacy Officer’s report on EO 12333 was useless: she excluded most of NSA’s EO 12333 collection — its temporary bulk collection done to feed XKeyscore and its more permanent bulk collection done to hunt terrorists and most other NSA targets — from her report. Instead, Privacy Officer Rebecca Richards’ report only covered a very limited part of NSA’s EO 12333 spying, that targeting people like Angela Merkel.

But I wanted to circle back and note two other things she did which I find telling.

First, note what Richards didn’t do. The standard by which she measured NSA’s privacy efforts is a NIST standard called Fair Information Practice Principles, which include the following:

  • Transparency
  • Individual Participation
  • Purpose Specification
  • Data Minimization
  • Use Limitation
  • Data Quality and Integrity
  • Security
  • Accountability and Auditing

She dismisses the first two because NSA is a spook organization.

Because NSA has a national security mission, the principles of Transparency and Individual Participation are not implemented in the same manner they are in organizations with a more public facing mission.

In the process, she overstates how assiduously NSA lets Congress or DOJ review EO 12333 activities.

For the rest, however, Richards doesn’t — as she should have — assess NSA’s compliance with each category. Had she done so, she would have had to admit that PCLOB found NSA’s retention under the Foreign Intelligence purpose to be far too broad, putting NSA in violation of Purpose Specification; she would have had to admit that NSA gets around Use Limitation with broad permissions to create technical databases and keep all encrypted communications; she would have had to admit that of NSA’s violations, 9% constitute a willful refusal to follow Standard Operating Procedures, a stat that would seem to belie her Accountability claims.

Rather than assessing whether NSA complies with these principles, then, Richards simply checks them off at the end of each of several sections on the SIGINT Production Cycle.

ACQUIRE, Targeting: “The existing civil liberties and privacy protections fall into the following FIPPs: Transparency (to overseers), Purpose Specification, and Accountability and Auditing.”

ACQUIRE, Collection and Processing: “The existing civil liberties and privacy protections fall into three FIPPs categories: Data Minimization, Purpose Specification and Accounting and Auditing.”

ANALYZE: “These existing civil liberties and privacy protections fall into the following FIPPs: Transparency (to overseers), Purpose Specification, Data Minimization, and Accountability and Auditing.”

RETAIN: “These existing civil liberties and privacy protections fall into the following two FIPPs: Data Minimization, and Security.”

DISSEMINATE: “The existing civil liberties and privacy protections fall into the following FIPPs: Use Limitations, Data Minimization, and Accountability and Auditing.”

Then, having laid out how the NSA does some things that fall into some of these boxes at each step of the SIGINT process, she concludes,

CLPO documented NSA’s multiple activities that provide civil liberties and privacy protections for six of the eight FIPPs that are underpinned by its management activities, documented compliance program, and investments in people, training, tools, and technology.

Fact check! Even buying her claim that checking the box for some of these things at each step of the process is adequate to assess whether NSA fulfills the FIPPs, note that she hasn’t presented any evidence NSA meets NIST’s “Data Quality and Integrity” principle (though that may just be sloppiness on her part, a further testament to the worthlessness of this review).

But there’s another huge problem with this approach.

By fulfilling her privacy review by checking the boxes for the SIGINT Production Cycle (just for the targeted stuff, remember, not for the bulk of what NSA does), Richards leaves out all the other things the NSA does with the world’s data. Most notably, she doesn’t consider the privacy impacts of NSA’s research — what is called SIGDEV — which NSA and its partners do with live data. Some of the most aggressive programs revealed by Edward Snowden’s leaks — especially to support their hacking and infiltration activities — were SIGDEV presentations. Even on FISA programs, SIGDEV is subjected to nowhere near the amount of auditing that straight analysis is.

And the most significant known privacy breach in recent years involved the apparent co-mingling of 3,000 files’ worth of raw Section 215 phone dragnet data with Stellar Wind data on a research server. NSA destroyed it all before anyone could figure out what the data was doing there, how it got there, or what scope “3,000 files” entailed.

In my obsessions with the poor oversight over the phone dragnet techs, I have pointed to this description several times.

As of 16 February 2012, NSA determined that approximately 3,032 files containing call detail records potentially collected pursuant to prior BR Orders were retained on a server and been collected more than five years ago in violation of the 5-year retention period established for BR collection. Specifically, these files were retained on a server used by technical personnel working with the Business Records metadata to maintain documentation of provider feed data formats and performed background analysis to document why certain contact chaining rules were created. In addition to the BR work, this server also contains information related to the STELLARWIND program and files which do not appear to be related to either of these programs. NSA bases its determination that these files may be in violation of BR 11-191 because of the type of information contained in the files (i.e., call detail records), the access to the server by technical personnel who worked with the BR metadata, and the listed “creation date” for the files. It is possible that these files contain STELLARWIND data, despite the creation date. The STELLARWIND data could have been copied to this server, and that process could have changed the creation date to a timeframe that appears to indicate that they may contain BR metadata.

The NSA just finds raw data mingling with data from the President’s illegal program. And that’s all the explanation we get for why!
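
The timestamp caveat in NSA’s explanation is at least technically plausible: an ordinary file copy resets the destination’s timestamps unless the copy deliberately preserves metadata. A quick illustration, with invented file names:

    # Quick demonstration that an ordinary copy resets file timestamps,
    # while a metadata-preserving copy keeps them. Paths are invented.
    import os
    import shutil
    import time

    src = "stellarwind_extract.dat"          # hypothetical source file
    open(src, "w").close()
    old = time.time() - 5 * 365 * 24 * 3600  # pretend it was created ~5 years ago
    os.utime(src, (old, old))

    shutil.copy(src, "copy_plain.dat")       # mtime becomes "now"
    shutil.copy2(src, "copy_preserved.dat")  # mtime stays ~5 years old

    for f in (src, "copy_plain.dat", "copy_preserved.dat"):
        print(f, time.ctime(os.stat(f).st_mtime))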

Well, PCLOB provides more explanation for why we don’t know what happened with that data.

In one incident, NSA technical personnel discovered a technical server with nearly 3,000 files containing call detail records that were more than five years old, but that had not been destroyed in accordance with the applicable retention rules. These files were among those used in connection with a migration of call detail records to a new system. Because a single file may contain more than one call detail record, and because the files were promptly destroyed by agency technical personnel, the NSA could not provide an estimate regarding the volume of calling records that were retained beyond the five-year limit. The technical server in question was not available to intelligence analysts.

This is actually PCLOB being more solicitous than it is in other parts of the report. After all, it’s not just that there was a 5-year data retention limit on this data; there was also a mandate that techs destroy data once they’re done fiddling with it. So this is a double violation.

And yet NSA’s response to finding raw data sitting around places is to destroy it, making it all the more difficult to understand what went on with it?

Richards may be referring to this kind of oopsie when she talks about “spillage” being a risk related to retention.

The civil liberties and privacy risks related to retention are that NSA (1) may possibly retain data that it is no longer authorized to retain; (2) may possibly fail to completely remove data the Agency was not authorized to acquire; and (3) may potentially lose data because of “spillage,” improper intentional disclosure, or malicious exfiltration.

But nowhere does she consider the privacy implications of having a “technical database” data retention exemption even for Section 702 data, and then subjecting that raw data to the most exotic projects NSA’s research staff can think of.

And given that she elsewhere relies on President Obama’s PPD-28 as if it did anything to protect privacy, note that that policy specifically exempts SIGDEV from its limits.

Unless otherwise specified, this directive shall apply to signals intelligence activities conducted in order to collect communications or information about communications, except that it shall not apply to signals intelligence activities undertaken to test or develop signals intelligence capabilities.

We know NSA doesn’t abide by privacy rules for its research function. Not only does that mean a lot of probably legitimate research evades scrutiny, it also creates a space where NSA can conduct spying, in the name of research, that wouldn’t fulfill any of these privacy protections.

That’s a glaring privacy risk. One she chooses not to mention at all in her report.

NSA’s Privacy Officer Exempts Majority of NSA Spying from Her Report on EO 12333 Collection

NSA’s Director of Civil Liberties and Privacy, Rebecca Richards, has another report out, this time on “Civil Liberties and Privacy Protections” provided in the Agency’s EO 12333 programs. As with her previous report on Section 702, this one is almost useless from a reporting standpoint.

The reason why it is so useless is worth noting, however.

Richards describes the scope of her report this way:

This report examines (1) NSA’s Management Activities that are generally applied throughout the Agency and (2) Mission Safeguards within the SIGINT mission when specifically conducting targeted3 SIGINT activities under E.O. 12333.

3 In the context of this paper, the phrase “targeted SIGINT activities” does not include “bulk” collection as defined in Presidential Policy Directive (PPD)-28. Footnote 5 states, in part, “References to signals intelligence collected in ‘bulk’ mean the authorized collection of large quantities of signals intelligence data which, due to technical or operational considerations, is acquired without the use of discriminants (e.g., specific identifiers, selection terms, etc.).”

Richards neglects to mention the most important details from PPD-28 on bulk collection: when collection in “bulk” is permitted.

Locating new or emerging threats and other vital national security information is difficult, as such information is often hidden within the large and complex system of modern global communications. The United States must consequently collect signals intelligence in bulk5 in certain circumstances in order to identify these threats. Routine communications and communications of national security interest increasingly transit the same networks, however, and the collection of signals intelligence in bulk may consequently result in the collection of information about persons whose activities are not of foreign intelligence or counterintelligence value. The United States will therefore impose new limits on its use of signals intelligence collected in bulk. These limits are intended to protect the privacy and civil liberties of all persons, whatever their nationality and regardless of where they might reside.

In particular, when the United States collects nonpublicly available signals intelligence in bulk, it shall use that data only for the purposes of detecting and countering: (1) espionage and other threats and activities directed by foreign powers or their intelligence services against the United States and its interests; (2) threats to the United States and its interests from terrorism; (3) threats to the United States and its interests from the development, possession, proliferation, or use of weapons of mass destruction; (4) cybersecurity threats; (5) threats to U.S. or allied Armed Forces or other U.S or allied personnel; and (6) transnational criminal threats, including illicit finance and sanctions evasion related to the other purposes named in this section. In no event may signals intelligence collected in bulk be used for the purpose of suppressing or burdening criticism or dissent; disadvantaging persons based on their ethnicity, race, gender, sexual orientation, or religion; affording a competitive advantage to U.S. companies and U.S. business sectors commercially; or achieving any purpose other than those identified in this section.

5 The limitations contained in this section do not apply to signals intelligence data that is temporarily acquired to facilitate targeted collection. References to signals intelligence collected in “bulk” mean the authorized collection of large quantities of signals intelligence data which, due to technical or operational considerations, is acquired without the use of discriminants (e.g., specific identifiers, selection terms, etc.).

The NSA collects in “bulk” (that is, “everything”), temporarily, to facilitate targeted collection. This refers to the 3-5 day retention of all content and 30 day retention of all metadata from some switches so XKeyscore can sort through it to figure out what to keep.
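
Mechanically, that temporary collection amounts to a rolling buffer with time-based eviction: everything goes in, and anything not promoted to targeted collection ages out. A toy sketch, using the retention windows above as assumptions:

    # Toy model of a rolling full-take buffer: records age out unless a
    # selector promotes them to longer-term, targeted retention.
    # The 3-day and 30-day windows are taken from the reporting above.
    from datetime import datetime, timedelta

    CONTENT_WINDOW = timedelta(days=3)    # content: 3-5 days; 3 used here
    METADATA_WINDOW = timedelta(days=30)  # metadata: 30 days

    buffer = []  # (timestamp, kind, record) tuples; kind is "content" or "metadata"

    def evict(now: datetime) -> None:
        """Drop anything older than its retention window."""
        global buffer
        buffer = [
            (ts, kind, rec) for ts, kind, rec in buffer
            if now - ts <= (CONTENT_WINDOW if kind == "content" else METADATA_WINDOW)
        ]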

And the NSA also collects in “bulk” (that is, “everything”) to hunt for the following kinds of targets:

  • Spies
  • Terrorists
  • Weapons proliferators
  • Hackers and other cybersecurity threats
  • Threats to armed forces
  • Transnational criminals (which includes drug cartels as well as other organized crime)

Of course, when NSA collects in “bulk” (that is, “everything”) to hunt these targets, it also collects on completely innocent people because, well, it has collected everything.

So at the start of a 17-page report on the “civil liberties and privacy protections” the NSA applies to its EO 12333 collection, NSA’s Privacy Officer announces that what she’s about to report doesn’t apply to NSA’s temporary collection of everything to sort through it, nor to its more permanent collection of everything to hunt for spies, terrorists, weapons proliferators, hackers, and drug bosses.

That is, the “civil liberties and privacy protections” Richards describes don’t apply to the great majority of what NSA does. And these “civil liberties and privacy protections” don’t apply until after NSA has collected everything and decided, over the course of 5 days, whether it wants to keep it; in some places, NSA has kept everything so it can hunt a range of targets.

This actually shows up in Richards’ report, subtly, at times, as when she emphasizes that her entire “ACQUIRE” explanation focuses on “targeted SIGINT collection.” What that means, of course, is that the process she describes, in which collection takes place only after an NSA analyst has targeted it, doesn’t happen in the majority of cases.

Once you collect and sort through everything, does it really make sense to claim you’re providing civil liberties and privacy protections?
