
The Unnamed Network Provider Exposing our Infrastructure

Today was Global Threat day, when James Clapper testifies before various committees in Congress and Ron Wyden asks uncomfortable questions (today, directed exclusively at John Brennan). I’ll have a few posts about the hearings (in Senate Armed Services and Senate Intelligence Committees) and Clapper’s testimony, the SASC version of which is here.

One interesting detail in Clapper’s testimony comes in the several-paragraph section on Infrastructure, within a larger section on “Protecting Information Resources.” Here’s how the testimony describes the Juniper hack.

A major US network equipment manufacturer acknowledged last December that someone repeatedly gained access to its network to change source code in order to make its products’ default encryption breakable. The intruders also introduced a default password to enable undetected access to some target networks worldwide.
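Neither change requires much code. Here is a minimal sketch, with invented names and values rather than Juniper’s actual ScreenOS code, of what the two modifications described above amount to:

```python
# Illustrative sketch only: invented names and values, not Juniper's
# actual ScreenOS code.
import hmac
import hashlib

# The kind of "default password" the testimony describes: one hardcoded
# value that authenticates as ANY user while normal logging stays intact.
BACKDOOR_PASSWORD = "hypothetical-magic-string"

def check_login(username: str, password: str, stored_hash: bytes) -> bool:
    """Ordinary credential check with a one-line backdoor inserted."""
    if hmac.compare_digest(password, BACKDOOR_PASSWORD):
        return True  # the inserted branch: bypasses real authentication
    candidate = hashlib.sha256(password.encode()).digest()
    return hmac.compare_digest(candidate, stored_hash)

# The other change, making "default encryption breakable," can be just as
# small: in a Dual_EC-style random number generator, whoever chooses the
# constant point Q can predict future output, so swapping one constant
# quietly breaks the VPN's crypto while every functional test still passes.
```

Both are tiny diffs against a huge codebase, which is part of what makes this class of tampering so hard to catch in code review.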

There’s no discussion of how many Federal agencies use Juniper’s VPN, nor of how this must have exposed US businesses (unless the NSA clued them into the problem). And definitely no discussion of the assumption that NSA initially asked for the back door that someone else subsequently exploited.

More importantly, there’s no discussion of the cost of this hack, which I find interesting given that it may be an own goal.

What We Know about the Section 215 Phone Dragnet and Location Data

Last month’s squabble between Marco Rubio and Ted Cruz about the USA Freedom Act led a number of USAF boosters to belatedly understand what I’ve been writing for years: that USAF expanded the universe of people whose records would be collected under the program, and would therefore expose more completely innocent people, along with more potential suspects, to the full analytical tradecraft of the NSA, indefinitely.

In an attempt to explain why that might be so, Julian Sanchez wrote this post, focusing on the limits on location data collection that restricted cell phone collection. Sanchez ignores two other likely factors — the probable inclusion of Internet phone calls and the ability to do certain kinds of connection chaining — that mark key new functionalities in the program, ones that would have posed difficulties prior to USAF. But he also misses a lot of the public facts about location collection and cell phones under the Section 215 dragnet. This post will lay those out.

The short version is this: the FISC appears to have imposed some limits on prospective cell location collection under Section 215 even as the phone dragnet moved over to it, and it was not until August 2011 that NSA started collecting cell phone records — stripped of location — from AT&T under Section 215 collection rules. The NSA was clearly getting “domestic” records from cell phones prior to that point, though it’s possible they weren’t coming from Section 215 data. Indeed, the only known “successes” of the phone dragnet — Basaaly Moalin and Adis Medunjanin — identified cell phones. It’s not clear whether those came from EO 12333, secondary database information that didn’t include location, or something else.

Here’s the more detailed explanation, along with a timeline of key dates:

There is significant circumstantial evidence that by February 17, 2006 — two months before the FISA Court approved the use of Section 215 of the PATRIOT Act to aspire to collect all Americans’ phone records — the FISA Court required briefing on the use of “hybrid” requests to get real-time location data from targets using a FISA Pen Register together with a Section 215 order. The move appears to have been a reaction to a series of magistrates’ rulings against a parallel practice in criminal cases. The briefing order came in advance of the 2006 PATRIOT Act reauthorization going into effect, which newly limited Section 215 requests to things that could be obtained with a grand jury subpoena. Because some courts had required more than a subpoena to obtain location, it appears, the FISC reviewed the practice — and, given the BR/PR numbers reported in IG Reports, ended it sometime before the end of 2006, though not immediately.

The FISC taking notice of criminal rulings and restricting FISC-authorized collection accordingly would be consistent with information provided in response to a January 2014 Ron Wyden query about what standards the FBI uses for obtaining location data under FISA. To get historic data (at least according to the letter), FBI used a 215 order at that point. But because some district courts (this was written in 2014, before some states and circuits had weighed in on prospective location collection, not to mention the 11th circuit ruling on historical location data under US v. Davis) require a warrant, “the FBI elects to seek prospective CSLI pursuant to a full content FISA order, thus matching the higher standard imposed in some U.S. districts.” In other words, as soon as some criminal courts started requiring a warrant, FISC apparently adopted that standard. If FISC continued to adopt criminal precedents, then at least after the first US v. Davis ruling, it would have required, and might still require, a warrant (that is, an individualized FISA order) even for historical cell location data (though Davis did not apply to Stingrays).

FISC doesn’t always adopt the criminal court standard; at least until 2009, and by all appearances still, for example, FISC permits the collection, then minimization, of Post Cut Through Dialed Digits collected using FISA Pen Registers, whereas in the criminal context FBI does not collect PCTDD. But the FISC does take notice of, and respond to, criminal court decisions — sometimes imposing a national security standard higher than what exists in some districts. So the developments affecting location collection in magistrate, district, and circuit courts would be one limit on the government’s ability to collect location under FISA.

That wouldn’t necessarily prevent NSA from collecting cell records using a Section 215 order, at least until the Davis decision. After all, does that collection count as historic (a collection of existing records each day) or prospective (approval to collect data going forward in 90-day increments)? Plus, given the PCTDD and some other later FISA decisions, it’s possible FISC would have permitted the government to collect but minimize location data. But the decisions in criminal courts likely gave FISC pause, especially considering the magnitude of the production.

Then there’s the chaos of the program up to 2009.

At least between January 2008 and March 2009, and to some degree for the entire period preceding the 2009 clean-up of the phone and Internet dragnets, the NSA was applying EO 12333 standards to FISC-authorized metadata collection. In January 2008, NSA co-mingled 215 and EO 12333 data in either a repository or interface, and when the shit started hitting the fan the next year, analysts were instructed to distinguish the two authorities by date (which would have been useless to do). Not long after this data was co-mingled in 2008, FISC first approved IMEI and IMSI as identifiers for use in Section 215 chaining. In other words, any restrictions on cell collection in this period may have been meaningless, because NSA wasn’t heeding FISC’s restrictions on PATRIOT authorized collection, nor could it distinguish between the data it got under EO 12333 and Section 215.

Few people seem to get this point, but at least during 2008, and probably during the entire period leading up to 2009, there was no appreciable analytical border between where the EO 12333 phone dragnet ended and the Section 215 one began.

There’s no unredacted evidence (aside from the IMEI/IMSI permission) the NSA was collecting cell phone records under Section 215 before the 2009 process, though in 2009, both Sprint and Verizon (even AT&T, though to a much lesser degree) had to separate out their entirely foreign collection from their domestic, meaning they had been turning over data subject to EO 12333 and Section 215 together for years. That’s also roughly the point when NSA moved toward XML coding of data on intake, clearly identifying where and under what authority it obtained the data. Thus, it’s only from that point forward that (at least according to what we know) the data collected under Section 215 would clearly have adhered to any restrictions imposed on location.
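To see why that intake tagging matters, here is a hedged sketch, using an invented schema rather than NSA’s actual XML format, of how marking each record with its collection authority makes authority-specific rules enforceable:

```python
# Invented example: once records carry their collection authority at intake,
# authority-specific rules can be applied reliably. Field names are
# hypothetical and do not reflect any actual NSA schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CallRecord:
    caller: str
    callee: str
    timestamp: str
    authority: str                    # e.g. "SECTION_215" or "EO_12333"
    cell_site: Optional[str] = None   # the location-revealing field

def apply_intake_rules(rec: CallRecord) -> CallRecord:
    # Before tagging, 215 and EO 12333 data were co-mingled, so a rule like
    # this could not be enforced; analysts were told to sort by date instead.
    if rec.authority == "SECTION_215":
        rec.cell_site = None  # honor FISC limits on cell location data
    return rec
```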

In 2010, the NSA first started experimenting with smaller collections of records including location data at a time when Verizon Wireless was named on primary orders. And we have two separate documents describing what NSA considered its first collection of cell data under Section 215 on August 29, 2011. But it did so only after AT&T had stripped the location data from the records.

It appears Verizon never did the same (indeed, Verizon objected to any request to do so in testimony leading up to USAF’s passage). The telecoms used different methods of delivering call records under the program. In fact, on August 2, 2012, NSA’s IG described the orders as requiring telecoms to produce “certain call detail records (CDRs) or telephony metadata,” which may differentiate records that some providers (which may just be AT&T) processed before turning them over. Also in 2009, part of Verizon ended its contract with the FBI to provide special compliance with NSLs. Both things may have affected Verizon’s ability or willingness to customize what it was delivering to NSA, as compared to AT&T.

All of which suggests that at least Verizon could not or chose not to do what AT&T did: strip location data from its call records. Section 215, before USAF, could only require providers to turn over records they already kept; it could not, as USAF may, require provision of records in the form the government demands. Additionally, under Section 215, providers did not get compensated after the first two dragnet orders.

All that said, the dragnet has identified cell phones! In fact, the only known “successes” under Section 215 — the discovery of Basaaly Moalin’s T-Mobile cell phone and the discovery of Adis Medunjanin’s unknown, but believed to be Verizon, cell phone — did just that, and they were cell phones from companies that didn’t turn over cell records. In addition, there’s another case, cited in a 2009 Robert Mueller declaration preceding the Medunjanin discovery, that found a US-based cell phone.

There are several possible explanations for that. The first is that these phones were identified based on calls from landlines and/or on backbone records (so the phone number would be identified, but not the cell information). But note that, in the Moalin case, there are no known landlines involved in the presumed chain from Ayro to Moalin.

Another possibility — a very real possibility with some of these — is that the underlying records weren’t collected under Section 215 at all, but were instead collected under EO 12333 (though Moalin’s phone was identified before Michael Mukasey signed off on procedures permitting chaining through US person records). That’s all the more likely given that all the known hits came before the point in 2009 when the FISC started requiring providers to separate out foreign (EO 12333) collection from domestic and international (Section 215) collection. In other words, the Section 215 phone dragnet may have been working swimmingly up until 2009 because NSA was breaking the rules, but as soon as it started abiding by the rules — and adhering to FISC’s increasingly strict limits on cell location data — it all of a sudden became virtually useless, given the likelihood that potential terrorism targets would come to use cell and/or Internet calls exclusively, bypassing landlines. Though as that happened, the rules on tracking US persons via records collected under EO 12333, including doing location analysis, grew far more permissive.

In any case, at least in recent years, it’s clear that by taking notice of criminal rulings and adjusting policy to match the districts, the FISC and FBI made it very difficult to collect prospective location records under FISA — and therefore, absent some means of forcing telecoms to strip their records before turning them over, to collect cell data.


Shorter Devin Nunes: There Are Privacy-Violating Covert Counter-Terrorism Programs We’re Hiding

I want to return to a detail I pointed out in the Intelligence Authorization yesterday: this language, which would affirmatively clarify that the Privacy and Civil Liberties Oversight Board does not get access to information on covert operations.

ACCESS.—Nothing in this section shall be construed to authorize the Board, or any agent thereof, to gain access to information regarding an activity covered by section 503(a) of the National Security Act of 1947 (50 U.S.C. 3093(a)).

Some or several intelligence agencies are demanding this, presumably, at a time when PCLOB is working on a review of two EO 12333 authorized counterterrorism programs conducted by CIA or NSA that affect US persons.

During the next stage of its inquiry, the Board will select two counterterrorism-related activities governed by E.O. 12333, and will then conduct focused, in-depth examinations of those activities. The Board plans to concentrate on activities of the CIA and NSA, and to select activities that involve one or more of the following: (1) bulk collection involving a significant chance of acquiring U.S. person information; (2) use of incidentally collected U.S. person information; (3) targeting of U.S. persons; and (4) collection that occurs within the United States or from U.S. companies. Both reviews will involve assessing how the need for the activity in question is balanced with the need to protect privacy and civil liberties. The reviews will result in written reports and, if appropriate, recommendations for the enhancement of civil liberties and privacy.

It may be that the IC demanded this out of some generalized fear, of the sort Rachel Brand raised when she objected to PCLOB’s plan to conduct this EO 12333 review (though none of what she says addresses the covert nature of any program, only their classification). Indeed, given that PCLOB planned to finish the review in question by the end of 2015, it is unlikely that the two programs PCLOB pursued were covert operations. Furthermore, there is nothing in Ron Wyden’s statement opposing this language (which I’ve replicated in full below) that indicates the kind of specific concern he has shown in the past with, for example, location data or secret law or the OLC opinion affecting cybersecurity. Indeed, he specifically says, “this Board’s oversight activities to date have not focused on covert action.”

So there’s nothing in the public record to make me believe PCLOB has already butted up against a covert operation.

That said, I have in recent weeks become increasingly certain there are programs being run under the guise of counterterrorism, off the official books (and/or that were, even after Stellar Wind was “shut down”), and probably in ways that affect the privacy of Americans, potentially a great many Americans.

I say that because there are places where the numbers in the public record don’t add up, where official sources are providing obviously bullshit explanations. I say that, too, because it is clear some places where you’d be able to manage such programs (via personnel labeled as “techs,” for example, and therefore not subject to the oversight of the publicly admitted programs) have been affirmatively preserved over the course of years. I say that because certain authorizations were pushed through with far too much urgency given their publicly described roll out over years. I also say that because it’s increasingly clear CIA, at least, views its surveillance mandate to extend to protecting itself, which in this era of inflamed counterintelligence concerns, might (and has in the past for DOD) extend to spying on its perceived enemies (indeed, one of the programs that I think might be such a covert action would be entirely about protecting the CIA).

I have a pretty good sense what at least a few of these programs are doing and where. I don’t know if they are formally covert operations or not — that’s a confusing question given how covert structure has increasingly been used to preserve deniability from US courts rather than foreign countries. But I do know that the IC’s demand that PCLOB be affirmatively disallowed access to such information suggests it knows such programs would not pass the muster of civil liberties review.

In any case, thanks to House Intelligence Chair Devin Nunes for making that so clear.


Wyden’s statement

This afternoon the House of Representatives passed a new version of the Intelligence Authorization bill for fiscal year 2016. I am concerned that section 305 of this bill would undermine independent oversight of US intelligence agencies, and if this language remains in the bill I will oppose any request to pass it by unanimous consent.

Section 305 would limit the authority of the watchdog body known as the Privacy and Civil Liberties Oversight Board. In my judgment, curtailing the authority of an independent oversight body like this Board would be a clearly unwise decision. Most Americans who I talk to want intelligence agencies to work to protect them from foreign threats, and they also want those agencies to be subject to strong, independent oversight. And this provision would undermine some of that oversight.

Section 305 states that the Privacy and Civil Liberties Board shall not have the authority to investigate any covert action program. This is problematic for two reasons. First, while this Board’s oversight activities to date have not focused on covert action, it is reasonably easy to envision a covert action program that could have a significant impact on Americans’ privacy and civil liberties – for example, if it included a significant surveillance component.

An even bigger concern is that the CIA in particular could attempt to take advantage of this language, and could refuse to cooperate with investigations of its surveillance activities by arguing that those activities were somehow connected to a covert action program. I recognize that this may not be the intent of this provision, but in my fifteen years on the Intelligence Committee I have repeatedly seen senior CIA officials go to striking lengths to resist external oversight of their activities. In my judgment Congress should be making it harder, not easier, for intelligence officials to stymie independent oversight.

For these reasons, it is my intention to object to any unanimous consent request to pass this bill in its current form. I look forward to working with my colleagues to modify or remove this provision.

Interesting Tidbits from the House Intelligence Authorization

The House version of next year’s Intelligence Authorization just passed with big numbers, 364-58.

Among the interesting details included in the unclassified version of the bill are the following:

Sections 303, 411: Permit the ICIG and the CIA IG to obtain information from state and local governments

The bill changes the language permitting the Intelligence Community Inspector General and the CIA IG to obtain information from any federal agency so that they may obtain it from federal, state, or local governments.

Which sort of suggests the ICIG and CIA IG are reviewing — and therefore the IC is sharing information with — state and local governments.

I have no big problem with this for ICIG. But doesn’t this suggest the CIA — a foreign intelligence agency — is doing things at the state level? That I do have a problem with.

Update: Note No One Special’s plausible explanation: that the IGs would be investigating misconduct like DWIs. That makes sense, especially given the heightened focus on Insider Threat Detection.

Section 305: Tells PCLOB to stay the fuck out of covert operations

This adds language to the Privacy and Civil Liberties Oversight Board authorization stating that, “Nothing in [it] shall be construed to authorize the Board, or any agent thereof, to gain access to information regarding an activity covered by” the covert operation section of the National Security Act.

OK then! I guess Congress has put PCLOB in its place!

Remember, PCLOB currently has a mandate that extends only to counterterrorism (though it will probably expand to cyber once the CISA-type bill is passed). It is currently investigating a couple of EO 12333 authorized activities that take place in some loopholed areas of concern. I’m guessing it bumped up against something Congress doesn’t want it to know about, and they’ve gone to the trouble of making that clear in the Intelligence Authorization.

As it happens, Ron Wyden is none too impressed with this section and has threatened to object to unanimous consent of the bill in the Senate over it. Here are his concerns.

Section 305 would limit the authority of the watchdog body known as the Privacy and Civil Liberties Oversight Board.  In my judgment, curtailing the authority of an independent oversight body like this Board would be a clearly unwise decision.  Most Americans who I talk to want intelligence agencies to work to protect them from foreign threats, and they also want those agencies to be subject to strong, independent oversight.  And this provision would undermine some of that oversight.

Section 305 states that the Privacy and Civil Liberties Board shall not have the authority to investigate any covert action program.  This is problematic for two reasons.  First, while this Board’s oversight activities to date have not focused on covert action, it is reasonably easy to envision a covert action program that could have a significant impact on Americans’ privacy and civil liberties – for example, if it included a significant surveillance component.

An even bigger concern is that the CIA in particular could attempt to take advantage of this language, and could refuse to cooperate with investigations of its surveillance activities by arguing that those activities were somehow connected to a covert action program.  I recognize that this may not be the intent of this provision, but in my fifteen years on the Intelligence Committee I have repeatedly seen senior CIA officials go to striking lengths to resist external oversight of their activities.  In my judgment Congress should be making it harder, not easier, for intelligence officials to stymie independent oversight.

Section 306: Requires ODNI to check for spooks sporting EFF stickers

The committee description of this section explains it will require DNI to do more checks on spooks (actually spooks and “sensitive” positions, which isn’t full clearance).

Section 306 directs the Director of National Intelligence (DNI) to develop and implement a plan for eliminating the backlog of overdue periodic investigations, and further requires the DNI to direct each agency to implement a program to provide enhanced security review to individuals determined eligible for access to classified information or eligible to hold a sensitive position.

These enhanced personnel security programs will integrate information relevant and appropriate for determining an individual’s suitability for access to classified information; be conducted at least 2 times every 5 years; and commence not later than 5 years after the date of enactment of the Fiscal Year 2016 Intelligence Authorization Act, or the elimination of the backlog of overdue periodic investigations, whichever occurs first.

Among the things ODNI will use to investigate its spooks are social media, commercial data sources, and credit reports. Among the things it is supposed to track is “change in ideology.” I’m guessing they’ll do special checks for EFF stickers and hoodies, which Snowden is known to have worn without much notice from NSA.

Section 307: Requires DNI to report if telecoms aren’t hoarding your call records

This adds language doing what some versions of USA Freedom tried to do: require DNI to report on which “electronic communications service providers” aren’t hoarding your call records for at least 18 months. He will have to do a report after 30 days listing all that don’t (bizarrely, the bill doesn’t specify what size company this covers, which given the extent of ECSPs in this country could be daunting), and also report to Congress within 15 days if any of them stop hoarding your records.

Section 313: Requires NIST to develop a measure of cyberdamage

For years, Keith Alexander has been permitted to run around claiming that cyber attacks have represented the greatest transfer of wealth ever (apparently he hasn’t heard of slavery or colonialism). This bill would require NIST to work with FBI and others to come up with a way to quantify the damage from cyberattacks.

Section 401: Requires congressional confirmation of the National Counterintelligence Executive

The National Counterintelligence Executive was pretty negligent in scoping out places like the OPM database that might be prime targets for China. I’m hoping that by requiring congressional confirmation, this position becomes more accountable and potentially more independent.

Section 701: Eliminates reporting that probably shouldn’t be eliminated

James Clapper hates reporting requirements, and with this bill he’d get rid of some more of them, some of which are innocuous.

But I am concerned that the bill would eliminate this report on what outside entities spooks are also working for.

(2) The Director of National Intelligence shall annually submit to the congressional intelligence committees a report describing all outside employment for officers and employees of elements of the intelligence community that was authorized by the head of an element of the intelligence community during the preceding calendar year. Such report shall be submitted each year on the date provided in section 3106 of this title.

We’ve just seen several conflict-of-interest situations at NSA, and eliminating this report would make it less likely those conflicts get identified.

The bill would also eliminate these reports.

REPORTS ON NUCLEAR ASPIRATIONS OF NON-STATE ENTITIES.—Section 1055 of the National Defense Authorization Act for Fiscal Year 2010 (50 U.S.C. 2371) is repealed.

REPORTS ON ESPIONAGE BY PEOPLE’S REPUBLIC OF CHINA.—Section 3151 of the National Defense Authorization Act for Fiscal Year 2000 (42 U.S.C. 7383e) is repealed.

Given that both of these issues are of grave concern right now, I do wonder why Clapper doesn’t want to report to Congress on them.

And, then there’s the elimination of this report.

§2659. Report on security vulnerabilities of national security laboratory computers

(a) Report required

Not later than March 1 of each year, the National Counterintelligence Policy Board shall prepare a report on the security vulnerabilities of the computers of the national security laboratories.

(b) Preparation of report

In preparing the report, the National Counterintelligence Policy Board shall establish a so-called “red team” of individuals to perform an operational evaluation of the security vulnerabilities of the computers of one or more national security laboratories, including by direct experimentation. Such individuals shall be selected by the National Counterintelligence Policy Board from among employees of the Department of Defense, the National Security Agency, the Central Intelligence Agency, the Federal Bureau of Investigation, and of other agencies, and may be detailed to the National Counterintelligence Policy Board from such agencies without reimbursement and without interruption or loss of civil service status or privilege.

Clapper’s been gunning to get rid of this one for at least 3 years, even as the hysteria about hacking has grown in each of those years. The Department of Energy as a whole, at least, is a weak spot in cybersecurity. Nevertheless, Congress is going to eliminate reporting on this.

Maybe the hacking threat isn’t as bad as Clapper says?

IRS’ Stingray Tracked 44 Cell Devices Over 4 Years But the Agency Needs Another

Back in 2010, Daniel Rigmaiden forced the government to reveal it had used a Stingray to bust him for tax fraud in 2008. Apparently, despite blowing the prosecution of Rigmaiden, the IRS liked what the Stingray did, because in 2011 it bought its own, as John Koskinen revealed in a response to a Ron Wyden question on the topic.

Koskinen reveals the IRS used its Stingray from 2011 or 2012 to the present in this way:

  • On 11 of its own grand jury investigations, largely focused on stolen ID refund fraud, in which it tracked 37 devices total.
  • On 1 DEA case, in which it tracked 1 device.
  • On 3 state murder and similar cases, in which it tracked 6 devices total.

In other words, over the course of its almost 4 year life, the Stingray has tracked just 44 devices.

That seems to suggest this tracking isn’t just a quick one-off; otherwise they wouldn’t need another device, which they’re currently in the process of getting.

Perhaps, however, this is a testament to the obsolescence of these devices. In his response to Wyden, Koskinen doesn’t mention the Stingray IRS bought in 2009, suggesting it may not be in use anymore.

The government is sure blowing through these expensive surveillance toys in quick succession.

Update: My apologies to Rigmaiden for getting his first name wrong and thanks to Chris Soghoian for spotting it.

How the Government Uses Location Data from Mobile Apps

[Screenshot: ACLU’s map of state-by-state location data standards, discussed below]

The other day I looked at an exchange between Ron Wyden and Jim Comey that took place in January 2014, as well as the response FBI gave Wyden afterwards. I want to return to the reason I was originally interested in the exchange: it reveals that FBI, in addition to obtaining cell location data directly from a phone company or a Stingray, will sometimes get location data from a mobile app provider.

I asked Magistrate Judge Stephen Smith from Houston whether he had seen any such requests — he’s one of a group of magistrates who have pushed for more transparency on these issues. He explained he had had several hybrid pen/trap/2703(d) requests for location and other data targeting WhatsApp accounts. And he had one fugitive probation violation case where the government asked for the location data of those in contact with the fugitive’s Snapchat account, based on the logic that he might be hiding out with one of the people who had interacted with him on Snapchat. The providers would basically be asked to turn over the cell site location information they had obtained from the users’ phones along with other metadata about those interactions. To be clear, this is not location data the app provider generates; it is the location data the phone company generates, which the app accesses in the normal course of operation.
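To picture what such a request sweeps up, here is a hypothetical sketch of the kind of server-side metadata an app provider might hold; all field names are invented, since the providers’ actual schemas aren’t public:

```python
# Hypothetical sketch of app-provider metadata responsive to a hybrid
# pen/trap/2703(d) request; field names are invented, not WhatsApp's or
# Snapchat's actual schema.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MessageEvent:
    sender_account: str               # app account ID, not a phone number
    recipient_account: str
    timestamp: str
    cell_site_info: Optional[str]     # CSLI the device got from the carrier

def responsive_records(log: List[MessageEvent], target: str) -> List[MessageEvent]:
    """Events where the target account appears on either side. Note this
    yields location metadata for the target AND everyone in contact with him."""
    return [e for e in log if target in (e.sender_account, e.recipient_account)]
```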

The point of getting location data like this is not to evade standards for a particular jurisdiction on CSLI. Smith explained, “The FBI apparently considers CSLI from smart phone apps the same as CSLI from the phone companies, so the same legal authorities apply to both, the only difference being that the ‘target device’ identifier is a WhatsApp/Snapchat account number instead of a phone number.” So in jurisdictions where you can get location data with an order, that’s what it takes; in jurisdictions where you need a probable cause warrant, that’s what it will take. The map above, which ACLU makes a great effort to keep up to date here, shows how jurisdictions differ on the standards for retrospective and prospective location information, which is what (as far as we know) will dictate what it would take to get, say, CSLI data tied to WhatsApp interactions.

Rather than serving as a way to get around legal standards, the reason to get CSLI from the app provider rather than the phone company that originally produces it is to get location data from both sides of a conversation, rather than just the target phone. That is, the app provides valuable context to the location data that you wouldn’t get just from the target’s cell location data.

The fact that the government is getting location data from mobile app providers — and the fact that they comply with the same standard for CSLI obtained from phones in any given jurisdiction — may help to explain a puzzle some have been pondering for the last week or so: why Facebook’s transparency report shows a big spike in wiretap warrants last year.

[T]he latest government requests report from Facebook revealed an unexpected and dramatic rise in real-time interceptions, or wiretaps. In the first six months of 2015, US law enforcement agencies sent Facebook 201 wiretap requests (referred to as “Title III” in the report) for 279 users or accounts. In all of 2014, on the other hand, Facebook only received 9 requests for 16 users or accounts.

Based on my understanding of what is required, this access of location data via WhatsApp should appear in several different categories of Facebook’s transparency report, including 2703(d), trap and trace, emergency request, and search warrant. That may include wiretap warrants, because this is, after all, prospective interception, and not just of the target, but also of the people with whom the target communicates. That may be why Facebook told Motherboard “we are not able to speculate about the types of legal process law enforcement chooses to serve,” because it really would vary from jurisdiction to jurisdiction and possibly even judge to judge.

In any case, we can be sure such requests are happening both on the criminal and the intelligence side, and perhaps most productively under PRISM (which could capture foreign to domestic communications at a much lower standard of review). Which, again, is why any legislation covering location data should cover the act of obtaining location data, whether via the phone company, a Stingray, or a mobile app provider.

It’s Harder for FBI to Get Location Data from Phone Companies Under FISA than Other Ways

I was looking for something else on Ron Wyden’s website yesterday and noticed this exchange between Wyden and Jim Comey from January 29, 2014 (see my transcription below). At first it seemed to be another of Wyden’s persistent questions about how the government collects location data — which we generally assume to be via telephone provider or Stingray — but then I realized he was asking something somewhat different. After asking about Cell Site Location Information from phone companies, Wyden then asked whether the FBI uses the same standard (an order, presumably a Pen Register) when collecting location from a smart phone app.

Oh yeah! The government can collect location information via apps (and thereby from Google, WhatsApp, or other providers) as well.

Here’s the FBI’s response, which hasn’t been published before.

The response is interesting for several reasons, some of which may explain why the government hasn’t been getting all the information from cell phones that it wanted under the Section 215 phone dragnet.

First, when the FBI is getting prospective CSLI, it gets a full FISA order, based on a showing of probable cause (it can get historical data using just an order). The response to Wyden notes that while some jurisdictions permit obtaining location data with just an order, because others require warrants, “the FBI elects to seek prospective CSLI pursuant to a full content FISA order, thus matching the higher standard imposed in some U.S. districts.”

Some of this FISA practice was set in 2006, in response to some magistrates’ rulings that you needed more than an order to get location data, though there are obviously more recent precedents that are stricter about needing a warrant.

This means it is actually harder right now to get prospective CSLI under FISA than it is under Title III in some states. (The letter also notes sometimes the FBI “will use criminal legal authorities in national security investigations,” which probably means FBI will do so in those states with a lower standard).

The FBI’s answer about smart phone apps was far squirrelier. It did say that when obtaining information from the phone itself, it gets a full-content FISA order, absent any exception to the Fourth Amendment (such as the border exception, which is one of many reasons FBI loves to search phones at the border and therefore hates Apple’s encryption). Note this March 6, 2014 response came before the June 24, 2014 Riley v. CA decision that required a warrant to search a cell phone, which suggests FISA was on a higher standard there, too, until SCOTUS caught up.

But as to getting information from smartphone apps itself, here’s what FBI answered.

Which legal authority we would use is very much dependent upon the type of information we are seeking and how we intend to obtain that information. Questions considered include whether or not the information sought would target an individual in an area in which that person has a reasonable expectation of privacy, what type of data we intend to obtain (GPS or other similarly precise location information), and how we intend to obtain the data (via a request for records from the service provider or from the mobile device itself).

In other words, after having thought about how to answer Wyden for five weeks rather than the one they had promised, they didn’t entirely answer the question, which was what it would take for the FBI to get information from apps, rather than cell phone providers, though I think that may be the same standard as CSLI from a cell phone company.

But this seems to say that, in the FISA context, it may well be easier — and require a lower standard of evidence — for the FBI to get location data from a Stingray.

This explains why Wyden’s location bill — which he was pushing just the other day, after the Supreme Court refused to take Quartavious Davis’ appeal — talks about location collection generally, rather than using (for example) a Stingray.


Wyden: I’d like to ask you about the government’s authority to track individuals using things like cell site location information and smart phone applications. Last fall the NSA Director testified that “we–the NSA–identify a number we can give that to the FBI. When they get their probable cause then they can get the locational information they need.”

I’ve been asking the NSA to publicly clarify these remarks but it hasn’t happened yet. So, is the FBI required to have probable cause in order to acquire Americans’ cell site location information for intelligence purposes?

Comey: I don’t believe so Senator. We — in almost all circumstances — we have to obtain a court order but the showing is “a reasonable basis to believe it’s relevant to the investigation.”

Wyden: So, you don’t have to show probable cause. You have cited another standard. Is that standard different if the government is collecting the location information from a smart phone app rather than a cell phone tower?

Comey: I don’t think I know, I probably ought to ask someone who’s a little smarter what the standard is that governs those. I don’t know the answer sitting here.

Wyden: My time is up. Can I have an answer to that within a week?

Comey: You sure can.

CISA Overwhelmingly Passes, 74-21

Update: Thought I’d put together a list of Senators people should thank for voting against CISA.

GOP: Crapo, Daines, Heller, Lee, Risch, and Sullivan. (Paul voted against cloture but did not vote today.)

Dems: Baldwin, Booker, Brown, Cardin, Coons, Franken, Leahy, Markey, Menendez, Merkley, Sanders, Tester, Udall, Warren, Wyden


Just now, the Senate voted to pass the Cyber Information Sharing Act by a vote of 74 to 21. While 7 more people voted against the bill than had voted against cloture last week (Update: the new votes were Cardin and Tester, Crapo, Daines, Heller, Lee, Risch, and Sullivan, with Paul not voting), this is still a resounding vote for a bill that will authorize domestic spying with no court review in this country.

The amendment voting process was interesting in its own right. Most appallingly, just after Patrick Leahy cast his 15,000th vote on another amendment — which led to a break to talk about what a wonderful person he is, as well as a speech from him about how the Senate is the conscience of the country — Leahy’s colleagues voted 59 to 37 against his amendment that would have stopped the creation of a new FOIA exemption for CISA. So right after honoring Leahy, his colleagues kicked one of his key issues, FOIA, in the ass.

More telling, though, were the votes on the Wyden and Heller amendments, the first two that came up today.

Wyden’s amendment would have required more stringent scrubbing of personal data before sharing it with the federal government. The amendment failed by a vote of 41-55 — still a big margin against, but enough support to sustain a filibuster. Particularly given that Harry Reid switched votes at the last minute, I believe that vote was designed to show enough support for a better bill to strengthen the hand of those pushing for that in conference (the House bills are better on this point). The amendment had the support of a number of Republicans — Crapo, Daines, Gardner, Heller, Lee, Murkowski, and Sullivan — some of whom would vote against passage. Most of the Democrats who voted against Wyden’s amendment — Carper, Feinstein, Heitkamp, Kaine, King, Manchin, McCaskill, Mikulski, Nelson, Warner, Whitehouse — consistently voted against any amendment that would improve the bill (and Whitehouse even voted for Tom Cotton’s bad amendment).

The vote on Heller’s amendment looked almost nothing like Wyden’s. Sure, the amendment would have changed just two words in the bill, requiring the government to meet a higher standard for information it shared internally. But it got a very different crowd supporting it, with a range of authoritarian Republicans — Barrasso, Cassidy, Enzi, Ernst, and Hoeven — voting in favor. That made the vote on the bill much closer. So Reid, along with at least 7 other Democrats who voted for Wyden’s amendment, including Brown, Klobuchar, Murphy, Schatz, Schumer, Shaheen, and Stabenow, voted against Heller’s weaker amendment. While some of these Democrats — Klobuchar, Schumer, and probably Shaheen and Stabenow — are affirmatively pro-unconstitutional spying anyway, the swing, especially from Sherrod Brown, who voted against the bill as a whole, makes it clear that these were opportunistic votes to achieve an outcome. Heller’s amendment fell just short, 47-49, and would have passed had some of those Dems voted in favor (the GOP Presidential candidates were not present, but that probably would have been at best a wash and possibly a one vote net against, since Cruz voted for cloture last week). Ultimately, I think Reid and these other Dems are moving to try to deliver something closer to what the White House wants, which is still unconstitutional domestic spying.

Richard Burr seemed certain that this will go to conference, which means people like him, DiFi, and Tom Carper will try to make this worse as people from the House point out that there are far more people who oppose this kind of unfettered spying in the House. We shall see.

For now, however, the Senate has embraced a truly awful bill.

Update: all amendment roll calls (yeas-nays-not voting)

Wyden: 41-55-4

Heller: 47-49-4

Leahy: 37-59-4

Franken: 35-60-5

Coons: 41-54-5

Cotton amendment: 22-73-5

Final passage: 74-21-5

Richard Burr Wants to Prevent Congress from Learning if CISA Is a Domestic Spying Bill

As I noted in my argument that CISA is designed to do what NSA and FBI wanted an upstream cybersecurity certificate to do, but couldn’t get the FISA Court to approve, there’s almost no independent oversight of the new scheme. There are just IG reports — mostly assessing the efficacy of the information sharing and the protection of classified information shared with the private sector — and a PCLOB review. As I noted, history shows that even when both are well-intentioned and diligent, that doesn’t ensure they can demand fixes to abuses.

So I’m interested in what Richard Burr and Dianne Feinstein did with Jon Tester’s attempt to improve the oversight mandated in the bill.

The bill mandates three different kinds of biennial reports on the program: detailed IG Reports from all agencies to Congress, which will be unclassified with a classified appendix; a less detailed PCLOB report that will be unclassified with a classified appendix; and a less detailed unclassified IG summary of the first two. Note, this scheme already means that House members will have to go out of their way and ask nicely to get the classified appendices, because those are routinely shared only with the Intelligence Committee.

Tester had proposed adding a series of transparency measures to the first, more detailed IG Reports to obtain more information about the program. Last week, Burr and DiFi rolled some transparency procedures loosely resembling Tester’s into the Manager’s amendment — adding transparency to the base bill, but ensuring Tester’s stronger measures could not get a vote. I’ve placed the three versions of transparency provisions below, with italicized annotations, to show the original language, Tester’s proposed changes, and what Burr and DiFi adopted instead.

Comparing them reveals Burr and DiFi’s priorities — and what they want to hide about the implementation of the bill, even from Congress.

Prevent Congress from learning how often CISA data is used for law enforcement

Tester proposed a measure that would require reporting on how often CISA data gets used for law enforcement. There were two important aspects to his proposal: it required reporting not just on how often CISA data was used to prosecute someone, but also how often it was used to investigate them. That would require FBI to track lead sourcing in a way it currently refuses to. It would also create a record of investigative source that — in the unlikely event that a defendant actually got a judge to support demands for discovery on such things — would make it very difficult to use parallel construction to hide CISA-sourced data.

In addition, Tester would have required some granularity to the reporting, splitting out fraud, espionage, and trade secrets from terrorism (see clauses VII and VIII). Effectively, this would have required FBI to report how often it uses data obtained pursuant to an anti-hacking law to prosecute crimes that involve the Internet that aren’t hacking; it would have required some measure of how much this is really about bypassing Title III warrant requirements.

Burr and DiFi replaced that with a count of how many prosecutions derived from CISA data. Not only does this not distinguish between hacking crimes (what this bill is supposed to be about) and crimes that use the Internet (what it is probably about), but it also would invite FBI to simply disappear this number, from both Congress and defendants, by using parallel construction to hide the CISA source of this data.

Prevent Congress from learning how often CISA sharing falls short of the current NSA minimization standard

Tester also asked for reporting (see clause V) on how often personal information or information identifying a specific person was shared when it was not “necessary to describe or mitigate a cybersecurity threat or security vulnerability.” That “necessary to describe or mitigate” language is quite close to the standard NSA currently has to meet before it can share US person identities (the NSA can share that data if it’s necessary to understand the intelligence; though Tester’s amendment would apply to all people, not just US persons).

But Tester’s standard is different than the standard of sharing adopted by CISA. CISA only requires agencies to strip personal data if it is “not directly related to a cybersecurity threat.” Of course, any data collected along with a cybersecurity threat — even victim data, including the data a hacker was trying to steal — is “related to” that threat.

Burr and DiFi changed Tester’s amendment by first adopting a form of a Wyden amendment requiring notice to people whose data got shared in ways not permitted by the bill (which implicitly adopts that “related to” standard), and then requiring reporting on how many people got notices. But a notice will only go out if the government affirmatively learns that data which wasn’t related to a threat got shared anyway. Those notices are almost never going to happen. So the number will be close to zero, instead of the tens of thousands, at least, that would have shown up under Tester’s measure.

So in adopting this change, Burr and DiFi are hiding the fact that under CISA, US person data will get shared far more promiscuously than it would under the current NSA regime.

Prevent Congress from learning how well the privacy strips — at both private sector and government — are working

Tester also would have required the government to report how much personal data got stripped by DHS (see clause IV). This would have measured how often private companies were handing over data that had personal data that probably should have been stripped. Combined with Tester’s proposed measure of how often data gets shared that’s not necessary to understanding the indicator, it would have shown, at each stage of the data sharing, how much personal data was getting shared.

Burr and DiFi stripped that entirely.

Prevent Congress from learning how often “defensive measures” cause damage

Tester would also have required reporting on how often defensive measures (the bill’s euphemism for countermeasures) cause known harm (see clause VI). This would have alerted Congress if one of the foreseeable harms from this bill — that “defensive measures” will cause damage to the Internet infrastructure or other companies — had taken place.

Burr and DiFi stripped that really critical measure.

Prevent Congress from learning whether companies are bypassing the preferred sharing method

Finally, Tester would have required reporting on how many indicators came in through DHS (clause I), how many came in through civilian agencies like FBI (clause II), and how many came in through military agencies, aka NSA (clause III). That would have provided a measure of how much data was getting shared in ways that might bypass what few privacy and oversight mechanisms this bill has.

Burr and DiFi replaced that with a measure solely of how many indicators get shared through DHS, which effectively sanctions alternative sharing.

That Burr and DiFi watered down Tester’s measures so much makes two things clear. First, they don’t want to count some of the things that will be most important to count to see whether corporations and agencies are abusing this bill. They don’t want to count measures that will reveal if this bill does harm.

Most importantly, though, they want to keep this information from Congress. This information would almost certainly not show up to us in unclassified form; it would just be shared with some members of Congress (and on the House side, just be shared with the Intelligence Committee unless someone asks nicely for it).

But Richard Burr and Dianne Feinstein want to ensure that Congress doesn’t get that information. Which would suggest they know the information would reveal things Congress might not approve of.


Sheldon Whitehouse’s Horrible CFAA Amendment Gets Pulled — But Will Be Back in Conference

As I noted, Ron Wyden objected to unanimous consent on CISA yesterday because Sheldon Whitehouse’s crappy amendment, which makes the horrible CFAA worse, was going to get a vote. Yesterday, it got amended, but as CDT analyzed, it remains problematic and overbroad.

This afternoon, Whitehouse took to the Senate floor to complain mightily that his amendment had been pulled — presumably it was pulled to get Wyden to withdraw his objections. Whitehouse complained as if this were the first time amendments had not gotten a vote, though that happens all the time with amendments that support civil liberties. He raged about the Masters of the Universe who had pulled his amendment, and suggested a pro-botnet conference had forced the amendment to be pulled, rather than people who have very sound reasons to believe the amendment was badly drafted and dangerously expanded DOJ’s authority.

For all Whitehouse’s complaining, though, it’s likely the amendment is not dead. Tom Carper, who as Ranking Member of the Senate Homeland Security Committee would almost certainly be included in any conference on the bill, rose just after Whitehouse. He said if the provision ends up in the bill, “we will conference, I’m sure, with the House and we will have an opportunity to revisit this, so I just hope you’ll stay in touch with those of us who might be fortunate enough to be a conferee.”