Update on Lavabit

I’ve been trying to keep an eye on the public information about the government’s demand on Lavabit. And in a new interview with Ars Technica, Ladar Levison basically gives us a multiple-choice guess at what the request was: either altering his source code or turning over the private key securing his HTTPS certificate.

Levison said he has always known Lavabit safeguards could be bypassed if government agents took drastic measures, or as he put it, “if the government was willing to sacrifice the privacy of many to conduct surveillance on the few.” For instance, if he was forced to change the code used when a user logs in, his system could capture the plain-text password needed to decrypt stored e-mails. Similarly, if he was ever forced to turn over the private encryption key securing his site’s HTTPS certificate, government agents tapping a connection could observe the password as a user was entering it. But it was only in the past few weeks that he became convinced those risks were realistic.

“I don’t know if I’m off my rocker, but 10 years ago, I think it would have been unheard of for the government to demand source code or to make a change to your source code or to demand your SSL key,” Levison told Ars. “What I’ve learned recently makes me think that’s not as crazy an assumption as I thought.”
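To make the two scenarios concrete, here is a minimal sketch (mine, not Lavabit’s actual code; the function names and key-derivation parameters are illustrative) of why the login path is the weak point: the plaintext password has to transit the server at login to derive the key protecting stored mail, so a single coerced change there captures it. The SSL-key scenario is the passive version of the same thing: absent forward secrecy, a tap holding the site’s HTTPS private key could decrypt the session and read the password in transit.

```python
# Minimal sketch, not Lavabit's actual code: stored mail is encrypted under
# a key derived from the user's password, so the server never needs to keep
# the password itself. Names and parameters here are illustrative.
import hashlib

def derive_storage_key(password: str, salt: bytes) -> bytes:
    # The key that decrypts a user's stored email exists only transiently,
    # while that user is logging in.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def handle_login(username: str, password: str, salt: bytes) -> bytes:
    # The coerced change Levison describes would be one added line here:
    # log_for_surveillance(username, password)  # hypothetical tap point
    # After that, the "encrypted" mailbox is encrypted in name only.
    return derive_storage_key(password, salt)
```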

I and others have suggested that this demand, whichever of those forms it took, is basically CALEA II — the FBI’s repeated demand that it have a back door into everything — before its time.

But Congress has not yet authorized CALEA II. So why did the (presumably) FISA Court approve this demand?


As with Manning Leak, Snowden Leak Reveals DOD Doesn’t Protect Security

MSNBC has an update to the continuing saga of “Omigod the NSA has inadequate security.” It explains why the “thin client” system the NSA had (one source calls it 2003 technology) made it so easy for Edward Snowden to take what he wanted.

In a “thin client” system, each remote computer is essentially a glorified monitor, with most of the computing power in the central server. The individual computers tend to be assigned to specific individuals, and access for most users can be limited to specific types of files based on a user profile.

But Snowden was not most users.

[snip]

As a system administrator, Snowden was allowed to look at any file he wanted, and his actions were largely unaudited. “At certain levels, you are the audit,” said an intelligence official.

He was also able to access NSAnet, the agency’s intranet, without leaving any signature, said a person briefed on the postmortem of Snowden’s theft. He was essentially a “ghost user,” said the source, making it difficult to trace when he signed on or what files he accessed.

If he wanted, he would even have been able to pose as any other user with access to NSAnet, said the source.

The story goes on to note that being in Hawaii would have allowed Snowden to access Fort Meade’s computers well after most users were gone.
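Put in rough code form, the access model those details describe looks something like the following (an illustration of the reported behavior, not anything from NSA): per-user profiles constrain ordinary users, while the admin tier bypasses both the profile check and the audit trail.

```python
# Illustrative reconstruction of the described access model, not NSA code:
# most users are limited by a profile and leave audit entries; the sysadmin
# role skips both checks, making a "ghost user" possible.
AUDIT_LOG: list[tuple[str, str]] = []

def read_file(user: dict, filename: str, required_tags: set) -> bool:
    if user["role"] == "sysadmin":
        # No profile check and, crucially, no audit entry:
        # "At certain levels, you are the audit."
        return True
    allowed = required_tags <= user["profile"]  # limited by user profile
    AUDIT_LOG.append((user["name"], filename))  # ordinary users leave traces
    return allowed

analyst = {"name": "alice", "role": "analyst", "profile": {"SIGDEV"}}
admin = {"name": "ghost", "role": "sysadmin", "profile": set()}

read_file(analyst, "report.doc", {"SIGDEV"})    # allowed, and logged
read_file(admin, "court_order.pdf", {"FISA"})   # allowed, and unlogged
print(AUDIT_LOG)  # only alice's access appears
```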

I’m particularly interested in the assertion that Snowden could pose as any other user with access to NSAnet.

Any other user. Presumably, that includes at least Cybercommander Keith Alexander’s aides.

In a world in which the NSA is increasingly an offensive organization, certain figures within NSA would be engaged in some very interesting communications and compartments, I’d imagine.

Ah well. The US won’t learn. It will continue to neglect these holes until someone publicly demonstrates its negligence, all the while leaving them open for any paid agents of foreign governments who choose to exploit them.


Are the Brits Trying to Protect British Telecom?

In addition to her latest stories describing the generalized spying the NSA and GCHQ engage in, Laura Poitras today also tells her side of the David Miranda story. In it, she reveals that the hard drives destroyed at the Guardian included details on Tempora.

Included on those drives were documents detailing GCHQ’s massive domestic spying program called “Tempora.”

This program deploys NSA’s XKeyscore “DeepDive” internet buffer technology which slows down the internet to allow GCHQ to spy on global communications, including those of UK citizens. Tempora relies on the “corporate partnership” of UK telecoms, including British Telecommunications and Vodafone. Revealing the secret partnerships between spy agencies and telecoms entrusted with the private communications of citizens is journalism, not terrorism.

It seems she’s suggesting that the Brits are trying to protect this program specifically, which would shield not just a spying technique (collecting data off the switches), but also the involvement of BT and Vodafone.

Remember, that weird Independent story from last week (which Snowden made clear did not come from him) also included details about BT and Vodafone’s roles in this spying.

The Government also demanded that the paper not publish details of how UK telecoms firms, including BT and Vodafone, were secretly collaborating with GCHQ to intercept the vast majority of all internet traffic entering the country. The paper had details of the highly controversial and secret programme for over a month. But it only published information on the scheme – which involved paying the companies to tap into fibre-optic cables entering Britain – after the allegations appeared in the German newspaper Süddeutsche Zeitung.

It makes sense. Even in the US, even in the materials released so far, both the Guardian and Washington Post have protected the role that AT&T and Verizon play in this process.

The Independent story also mentioned a secret British spying base in the Middle East that played a role in Tempora.

One of the areas of concern in Whitehall is that details of the Middle East spying base which could identify its location could enter the public domain.

The data-gathering operation is part of a £1bn internet project still being assembled by GCHQ. It is part of the surveillance and monitoring system, code-named “Tempora”, whose wider aim is the global interception of digital communications, such as emails and text messages.

[snip]

The Middle East station was set up under a warrant signed by the then Foreign Secretary David Miliband, authorising GCHQ to monitor and store for analysis data passing through the network of fibre-optic cables that link up the internet around the world.

That part of the story made me remember Reprieve’s claims from earlier this year that British Telecom played a role in drone targeting in Djibouti.

BT’s slogan used to be ‘it’s good to talk’, but when it comes to contracts with the US military ‘it’s best to keep your mouth shut’ might be more appropriate.

Earlier this year Reprieve obtained evidence that BT had been awarded a contract worth over $23 million by the US Defense Information Systems Agency to provide communications infrastructure connecting US-run RAF Croughton in Northamptonshire with the secretive Camp Lemonnier in Djibouti.



How the NCTC Gets Its NSA Data

I’m working on a more substantive post on the Section 702 Semiannual Compliance Assessment released last week as part of the I Con dump.

But for the moment, I want to point to a passage that begins to answer a question I asked two months ago: how the data from NSA’s programs gets to the National Counterterrorism Center, which then crunches that data and sends it out to other parts of the government.

A footnote in the Assessment notes:

The other agency involved in implementing Section 702 is the National Counterterrorism Center (NCTC), which has a limited role, as reflected in the recently approved “Minimization Procedures Used by NCTC in connection with Information Acquired by the FBI pursuant to Section 702 of FISA, as amended.” Under these limited minimization procedures, NCTC is not authorized to receive unminimized Section 702 data. Rather, these procedures recognize that, in light of NCTC’s statutory counterterrorism role and mission, NCTC has been provided access to certain FBI systems containing minimized Section 702 information, and prescribe how NCTC is to treat that information. For example, because NCTC is not a law enforcement agency, it may not receive disseminations of Section 702 information that is evidence of a crime, but which has no foreign intelligence value; accordingly, NCTC’s minimization procedures require in situations in which NCTC personnel discover purely law enforcement information with no foreign intelligence value in the course of reviewing minimized foreign intelligence information that the NCTC personnel either purge that information (if the information has been ingested into NCTC systems) or not use, retain, or disseminate the information (if the information has been viewed in FBI systems). No incidents of noncompliance with the NCTC minimization procedures were identified during this reporting period. The joint oversight team will be assessing NCTC’s compliance with its minimization procedures in the next reporting period.
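Distilled into rough code form (my gloss, not the procedures’ language), the handling rule that footnote prescribes for NCTC personnel amounts to a two-branch procedure:

```python
# A rough distillation of the footnote's rule: what NCTC personnel must do
# on discovering purely law enforcement information with no foreign
# intelligence value. My gloss, not the procedures' text.
def handle_discovered_info(location: str) -> str:
    if location == "NCTC systems":
        # Already ingested into NCTC systems: purge it.
        return "purge"
    if location == "FBI systems":
        # Merely viewed in FBI systems: hands off.
        return "do not use, retain, or disseminate"
    raise ValueError("the footnote addresses only these two cases")
```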

That footnote contains some good news, and some bad news.

The good news is that NCTC gets none of the unminimized collection that CIA and FBI do. We have no idea what FBI’s minimization procedures look like (FBI does the minimization before NCTC gets the data) — though elsewhere this Assessment makes it clear that most initial distributions of data from FBI come with US person identities hidden. But at least most US person data will be protected when NCTC gets it.

The bad news is that this is a recent development. It probably post-dates 2011, as John Bates makes no mention of NCTC’s minimization procedures in his October 3, 2011 opinion. And the reference to the compliance team reviewing this in the next Assessment (which would cover December 2012 through May 2013) suggests the minimization procedures may be very recent. What has happened with this data in the past?

And explain to me this: if NCTC “may not” receive US person data that has been referred to FBI because it is evidence of a non-terrorist crime, why do its minimization procedures explain what to do if NCTC personnel happen to discover such data in their possession? Perhaps the problem is in processing that takes place at FBI (such information isn’t adequately segregated there), not at NCTC?

Remember, much of the analysis that happens at NCTC can affect US persons’ lives, but (unlike much of FBI’s work) doesn’t get reviewed by a court. The data that gets to them might well be particularly sensitive.


The Google/Yahoo Problem: Fruit of the Poison MCT?

OK, this will be my last post (at least for today) attempting to understand why some Internet providers incurred so many costs responding to the FISA Court’s October 3, 2011 decision that the government had improperly collected US person data as part of Multiple Communication Transactions (MCTs).

For the moment, I’m going to bracket the question of whether Google and Yahoo are included among the upstream providers (though I think it more likely for Google than for Yahoo). Footnote 3 in the October 3 opinion seems to distinguish upstream collection from collection from Internet service providers. Though note the entirely redacted sentence in that footnote, which may modify that easy distinction.

But let’s consider how the violative data might be used. We know from the conference call the I Cons had the other day (you can listen along here) that this is primarily about getting email inboxes.

An intelligence official who would not be identified publicly described the problem to reporters during a conference call on Wednesday.

“If you have a webmail email account, like Gmail or Hotmail, you know that if you open up your email program, you will get a screenshot of some number of emails that are sitting in your inbox,” the official said.

“Those are all transmitted across the internet as one communication. For technological reasons, the NSA was not capable of breaking those down, and still is not capable, of breaking those down into their individual [email] components.”

If one of those emails contained a reference to a foreign person believed to be outside the US – in the subject line, the sender or the recipient, for instance – then the NSA would collect the entire screenshot “that’s popping up on your screen at the time,” the official continued.

Now, whether this collection comes from the telecoms or from the Internet companies themselves, it effectively serves as an index of Internet communications deemed interesting, either because of the participants or because the email talks about an approved selector.
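A rough sketch of that dynamic, with made-up field names and data (modeling the official’s description, not actual NSA logic): one hit anywhere in a transaction sweeps in the whole thing.

```python
# Sketch of the MCT problem as described on the call (field names and data
# are made up): an inbox view travels as one transaction, and since NSA
# cannot break it into discrete emails, one matching component collects all.
def collect_transaction(transaction: list[dict], selectors: set) -> list[dict]:
    for msg in transaction:
        if selectors & {msg["from"], msg["to"], msg["subject"]}:
            return transaction  # the whole "screenshot," tasked or not
    return []

inbox_view = [
    {"from": "target@abroad.example", "to": "you@example.com", "subject": "plans"},
    {"from": "mom@example.com", "to": "you@example.com", "subject": "dinner"},
]
# One foreign selector matches one email; both emails get collected.
print(len(collect_transaction(inbox_view, {"target@abroad.example"})))  # 2
```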

But it may be that this upstream collection serves primarily to identify which content the government wants to collect.

In his November 30, 2011 opinion, Bates emphasized (see page 10) the limits on what analysts could do with properly segregated upstream MCTs in the future.

An analyst seeking to use (e.g., in a FISA application, in an intelligence report, or in a Section 702 targeting decision) a discrete communication within an Internet transaction that contains multiple discrete communications must document each of the determinations. [my emphasis]

Then, the September 25, 2012 opinion describes how Judge John Bates got the government to purge that previous collection, and any reports generated from it, by threatening to declare the previous collection a crime under 1809(a)(2), which prohibits the “disclosure” of any information collected illegally.

The government informed the Court in October 2011 that although the amended NSA procedures do not by their terms apply to information acquired before October 31, NSA would apply portions of the procedures to the past upstream collection, including certain limitations on the use or disclosure of such information.

That effort, according to Bates, did not begin until “late in 2011.”

But here’s the thing: the government would have “disclosed” this information to email providers if it had used any of the violative MCTs to target emails in their custody — the Section 702 targeting decisions Bates was explicitly concerned about.

So presumably, once Bates made it clear he considered 1809 violations real problems in November 2011, the government would have had to modify any certifications authorizing collection on email addresses identified through the violative upstream collection (regardless of source).

I don’t yet understand why, in adjusting to a series of modified certifications, the providers would incur millions of dollars of costs. But I think expunging poison fruit targeting orders from the certifications would have taken some time and multiple changed certifications.

Update: Footnote 24 in the October 3, 2011 opinion provides more clarity on whether PRISM collection includes MCTs; it doesn’t.

In addition to its upstream collection, NSA acquires discrete Internet communications from Internet service providers such as [redacted]. Aug. 16 Submission at 2; Aug. 30 Submission at 11; see also Sept. 7, 2011 Hearing Tr. at 75-77. NSA refers to this non-upstream collection as its “PRISM collection.” Aug. 30 Submission at 11. The Court understands that NSA does not acquire “Internet transactions” through its PRISM collection. See Aug Submission at 1.


Upstream Internet Collection and Minimization Procedures

[Image: PRISM slide]

As I noted in this post, the Guardian’s report on the aftermath of the October 3, 2011 FISA Court decision seemed to suggest that Google and Yahoo content was collected as upstream collection, not from their servers.

Changes made in the minimization procedures seem to support that.

In section 3(c), which covers Destruction of Raw Data, the old procedures treat all communications the same:

Communications and other information … will be reviewed for retention in accordance with the standards set forth in these procedures.

But the new minimization procedures have to break out that section into two categories to comply with the new restrictions imposed by the FISA Court. There’s the category of data that will be treated under the old rules:

Telephony communications, Internet communications acquired by or with the assistance of the Federal Bureau of Investigation from Internet Service Providers, and other discrete forms of information…

And then there’s the category that will be subjected to the new rules:

Internet transactions acquired through NSA’s upstream collection techniques …
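Schematically, the single retention rule became a branch keyed on how the data was acquired (my gloss, not the procedures’ text):

```python
# Schematic of the section 3(c) change, my gloss rather than the
# procedures' text: one retention standard became two.
def retention_standard(acquisition: str) -> str:
    if acquisition == "upstream Internet transaction":
        return "new rules: the post-October 3, 2011 restrictions"
    # Telephony, FBI-assisted ISP collection, other discrete forms
    return "old rules: the pre-existing retention standards"
```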

Now, this doesn’t confirm that Google and Yahoo are providing “upstream” data. But if they’re not, it means the only data they’re providing to the NSA comes through FBI requests (perhaps parallel to FBI’s Section 215 request for telephone metadata, which gets promptly delivered to the NSA; this could refer to the old Pen Register/Trap and Trace Internet collection, but October 31, 2011 is awfully late in 2011 to be eliminating that collection, and if that’s what it is, why is it still in the minimization procedures?). Except that all the discussion surrounding PRISM suggests data is turned over directly to the NSA, which would mean it is considered upstream collection.

One more note: the old procedures have a phrase in this section and in section 3(b)(1) suggesting NSA knew it was collecting US person data back in 2009, when the procedures were written.

The communications that may be retained include electronic communications acquired because of limitations on NSA’s ability to filter communications.

That sentence is removed from the new procedures, suggesting this “limitations on NSA’s ability to filter communications” collection is precisely the Internet transaction collection at issue. And the only reason they’d have needed to specifically allow themselves to retain it (since all foreign person data can be retained anyway) is if they knew it included US person data.

Update: Correction: The sentence above gets translated to, “The Internet transactions that may be retained include those that were acquired because of limitations on NSA’s ability to filter communications.” So it is in there.

But the November 30, 2011 FISC opinion (see footnote 6) makes it clear that this is, and was, US person data.

The Court understands this sentence to refer only to Internet transactions that contain wholly domestic communications but that are not recognized as such by NSA.

So if that language was in minimization procedures going back to at least 2009, doesn’t that mean the government knew it was collecting that US person data?

Update: Note that footnote 24 of the October 3, 2011 opinion seems to make it clear that the Internet collection is not upstream at all, and doesn’t include MCTs.

In addition to its upstream collection, NSA acquires discrete Internet communications from Internet service providers such as [redacted]. Aug. 16 Submission at 2; Aug. 30 Submission at 11; see also Sept. 7, 2011 Hearing Tr. at 75-77. NSA refers to this non-upstream collection as its “PRISM collection.” Aug. 30 Submission at 11. The Court understands that NSA does not acquire “Internet transactions” through its PRISM collection. See Aug Submission at 1.


More Contractor Problems — And FISC Disclosure Problems?

In the updated minimization procedures approved in 2011, the NSA added language making clear that the procedures applied to everyone doing analysis for NSA.

For the purposes of these procedures, the terms “National Security Agency” and “NSA personnel” refer to any employees of the National Security Agency/Central Security Service (“NSA/CSS” or “NSA”) and any other personnel engaged in Signals Intelligence (SIGINT) operations authorized pursuant to section 702 of the Act if such operations are executed under the direction, authority, or control of the Director, NSA/Chief, CSS (DIRNSA).

It told the FISA Court it needed this language to make clear that militarily-deployed NSA personnel also had to abide by the procedures.

The government has added language to Section 1 to make explicit that the procedures apply not only to NSA employees, but also to any other persons engaged in Section 702-related activities that are conducted under the direction, authority or control of the Director of the NSA. NSA Minimization Procedures at 1. According to the government, this new language is intended to clarify that Central Security Service personnel conducting signals intelligence operations authorized by Section 702 are bound by the procedures, even when they are deployed with a military unit and subject to the military chain of command.

But to me, both these passages rang alarm bells about contractors. Did they have to include this language, I wondered, because contractors had in the past claimed not to be bound by the same rules as NSA’s direct employees?

Lo and behold, the Bloomberg piece reporting that NSA’s IG undercounts deliberate violations by roughly 299 a year includes this:

The actions, said a second U.S. official briefed on them, were the work of overzealous NSA employees or contractors eager to prevent any encore to the Sept. 11, 2001, terrorist attacks.

It sure seems that at least some of the worst violations — the ones even NSA’s IG will call intentional — were committed by contractors. Which suggests I may be right that the new language was included to make clear the procedures apply to contractors.

If that’s the case, then why did NSA tell the FISA Court this new language was about militarily-deployed NSA employees, and not about contractors?

 


NSA’s Inspector General Appears to Be Disappearing 299 Deliberate Violations a Year

Bloomberg is getting a lot of attention for reporting the results of a still-classified (and unleaked) NSA Inspector General audit showing that NSA averages just one willful rule violation a year.

Some National Security Agency analysts deliberately ignored restrictions on their authority to spy on Americans multiple times in the past decade, contradicting Obama administration officials’ and lawmakers’ statements that no willful violations occurred.

[snip]

The incidents, chronicled in a new report by the NSA’s inspector general, provide more evidence that U.S. agencies sometimes have violated legal and administrative restrictions on domestic spying, and may add to the pressure to bolster laws that govern intelligence activities.

The inspector general documented an average of one case per year over 10 years of intentionally inappropriate actions by people with access to the NSA’s vast electronic surveillance systems, according to an official familiar with the findings. The incidents were minor, the official said, speaking on the condition of anonymity to discuss classified intelligence. [my emphasis]

Now, perhaps the IG is using the rule Barton Gellman laid out, under which an intentionally inappropriate action that serves “the mission” isn’t an intentionally inappropriate action.

If they are performing the mission that the NSA wants them to perform, and nevertheless overstep their legal authority, make unauthorized interceptions or searches or retentions or sharing of secret information, that is not abuse, that’s a mistake.

But this seems to be another example of NSA’s funny math.

Because the NSA’s own internal count of such violations suggests there would be closer to 300 such violations a year (counting just those deemed a lack of due diligence). The 772 violations for the S2 Directorate in the first quarter of 2011 represented 89% of all NSA’s violations that quarter; if their 68 due diligence violations represented 89% of all due diligence violations (S2’s rate for due diligence violations is lower than the two other categories broken out), you’d expect 76 each quarter, or just over 300 a year.
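Spelled out, using only the numbers in the paragraph above:

```python
# The back-of-the-envelope arithmetic from the paragraph above.
s2_due_diligence = 68   # S2's due diligence violations, Q1 2011
s2_share = 0.89         # S2's share of all NSA violations that quarter

per_quarter = s2_due_diligence / s2_share  # ~76 agency-wide per quarter
per_year = 4 * per_quarter                 # ~306, "just over 300 a year"

print(round(per_quarter), round(per_year))  # 76 306
# The headline's 299: roughly 300 a year, minus the single willful
# violation a year the IG acknowledges.
print(300 - 1)  # 299
```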

So whereas the NSA is telling itself that there are 300 examples a year of someone not following the rules — not because they don’t know them (those are training violations), not because they made a data entry error (those are human error), but for some other reason — it is telling Congress there is just one example a year.

Poof! Magic math.

Update: If Kimberly Dozier got it too, it’s an official leak.

They apparently don’t fire people who use all these spy tools to spy on their exes.

Two U.S. officials said one analyst was disciplined in years past for using NSA resources to track a former spouse. The officials spoke on condition of anonymity because they were not authorized to speak publicly.


How to Get the Government to Ease Up: Involve Scott Shane

This is fairly extraordinary. BuzzFeed reports that, in an effort to alleviate some of the pressure from the UK government, the Guardian is bringing in the NYT — but just one reporter from the NYT — to report on the Snowden stories.

“In a climate of intense pressure from the UK government, The Guardian decided to bring in a US partner to work on the GCHQ documents provided by Edward Snowden,” Guardian spokeswoman Jennifer Lindenauer said in an email. “We are continuing to work in partnership with the NYT and others to report these stories.”

That reporter is not James Risen — who of course broke the original NSA story with Eric Lichtblau. It is not Charlie Savage — who had an important story based on the Snowden leaks already.

It is Scott Shane.

The Times’s Charlie Savage and other reporters have chased the NSA story aggressively, despite Snowden’s choice to go to filmmaker Laura Poitras, the Guardian’s Glenn Greenwald, and Barton Gellman, who has written about the documents for the Washington Post. Snowden said he did not go to the Times because the paper bowed to Bush Administration demands to delay a story on warrantless wiretapping in the interest of national security; he was afraid, he said, the paper would do the same with his revelations.

Now, Times reporter Scott Shane is at work on a series of stories expected to be published next month jointly with the Guardian, a source familiar with the plans said. The source said the internal arrangement has also been the cause of some tension in the newsroom, as other national security reporters working on the NSA story — Savage and James Risen, among others — are not centrally involved in stories based on the Guardian’s documents.

Scott Shane has an increasingly consistent ability to tell grand tales that serve the interests of The Powers that Be. And somehow his stories about extremely sensitive subjects like drones don’t get chased for leaks.

Was the alleviation of pressure tied to Scott Shane in particular, a journalist who hasn’t followed this story as closely as some of his colleagues?


Why Would PRISM Providers Need to Pay Millions for New Certificates on Upstream Collection?

[Image: PRISM slide]

The Guardian has a story that rebuts the happy tales being told about quick compliance with the October 3, 2011 and subsequent FISA Court opinions. Rather than implementing a quick fix to the Constitutional violations John Bates identified, the government actually had to extend some of the certifications multiple times, resulting in millions of dollars of additional costs. It cites a newsletter detailing the extensions.

Last year’s problems resulted in multiple extensions in the Certifications’ expiration dates which cost millions of dollars for PRISM providers to implement each successive extension — costs covered by Special Source Operations.

The problem may have affected only Yahoo and Google, as an earlier newsletter — issued sometime between October 2 and October 6, 2011 — suggested they were the only ones that had not already been issued new (as opposed to extended) certificates. Moreover, the Guardian’s queries suggested that Yahoo did need an extension, Facebook didn’t, and Google (and Microsoft) didn’t want to talk about it.

A Yahoo spokesperson said: “Federal law requires the US government to reimburse providers for costs incurred to respond to compulsory legal process imposed by the government. We have requested reimbursement consistent with this law.”

Asked about the reimbursement of costs relating to compliance with Fisa court certifications, Facebook responded by saying it had “never received any compensation in connection with responding to a government data request”.

Google did not answer any of the specific questions put to it, and provided only a general statement denying it had joined Prism or any other surveillance program. It added: “We await the US government’s response to our petition to publish more national security request data, which will show that our compliance with American national security laws falls far short of the wild claims still being made in the press today.”

Microsoft declined to give a response on the record.

Here’s the larger question. PRISM is downstream collection: as the slide above makes clear, it is collection directly from a company’s servers. The problems addressed in the FISC opinion had to do with upstream collection.

We have always talked about upstream collection in terms of telecoms collecting data directly from switches.

But this all suggests that Google and Yahoo provide upstream data, as well.

I’ll have more to say about what this probably means in a follow-up. But for the moment, just consider that it suggests at least Google and Yahoo — both email providers — may be providing upstream data in addition to whatever downstream collection they turn over.

Update: See this post, in which I suggest that Google and Yahoo had problems not because of their own upstream collection (if either does any), but because certifications to them included targeting orders based on violative MCT collection that had to be purged from the system.

Update: Softened verb in last sentence — perhaps they aren’t. But I suspect they are.

Update: Footnote 24 makes a pretty clear distinction between the upstream and PRISM collection.

In addition to its upstream collection, NSA acquires discrete Internet communications from Internet service providers such as [redacted]. Aug. 16 Submission at 2; Aug. 30 Submission at 11; see also Sept. 7, 2011 Hearing Tr. at 75-77. NSA refers to this non-upstream collection as its “PRISM collection.” Aug. 30 Submission at 11. The Court understands that NSA does not acquire “Internet transactions” through its PRISM collection. See Aug Submission at 1.
