Ron Wyden


How the Government Uses Location Data from Mobile Apps

[Screenshot: ACLU map of state-by-state legal standards for law enforcement access to cell phone location data]

The other day I looked at an exchange between Ron Wyden and Jim Comey that took place in January 2014, as well as the response FBI gave Wyden afterwards. I want to return to the reason I was originally interested in the exchange: because it reveals that FBI, in addition to obtaining cell location data directly from a phone company or a Stingray, will sometimes get location data from a mobile app provider.

I asked Magistrate Judge Stephen Smith from Houston whether he had seen any such requests — he’s one of a group of magistrates who have pushed for more transparency on these issues. He explained he had had several hybrid pen/trap/2703(d) requests for location and other data targeting WhatsApp accounts. And he had one fugitive probation violation case where the government asked for the location data of those in contact with the fugitive’s Snapchat account, based on the logic that he might be hiding out with one of the people who had interacted with him on Snapchat. The providers would basically be asked to turn over the cell site location information they had obtained from the users’ phones along with other metadata about those interactions. To be clear, this is not location data the app provider generates; it is the location data the phone company generates, which the app accesses in the normal course of operation.
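To make concrete what investigators can do with records like these, here is a toy sketch of the logic in that Snapchat fugitive case — checking which of a target’s contacts were served by the same cell tower at around the same time. Everything here is hypothetical (the record layout, the tower coordinates, the account names); real CSLI analysis works from carrier tower databases, but the co-location idea is the same.

```python
from collections import namedtuple

# Hypothetical CSLI record: the app provider's log of which cell tower
# served the user's phone at the time of each interaction.
Record = namedtuple("Record", ["account", "timestamp", "cell_id"])

# Hypothetical tower registry mapping cell IDs to approximate coordinates.
TOWERS = {
    "310-410-0042-7": (29.76, -95.36),  # made-up Houston-area tower
    "310-410-0042-9": (29.74, -95.41),
}

def co_located(target_records, contact_records, window_secs=3600):
    """Return (account, timestamp, cell_id) tuples where a contact's phone
    hit the same tower as the target's within the time window."""
    hits = []
    for t in target_records:
        for c in contact_records:
            if (c.cell_id == t.cell_id
                    and abs(c.timestamp - t.timestamp) <= window_secs):
                hits.append((c.account, c.timestamp, c.cell_id))
    return hits

# Toy data: the fugitive and one Snapchat contact hit the same tower.
fugitive = [Record("fugitive", 1_447_000_000, "310-410-0042-7")]
contacts = [Record("friend_a", 1_447_001_200, "310-410-0042-7"),
            Record("friend_b", 1_447_000_500, "310-410-0042-9")]

for account, ts, cell in co_located(fugitive, contacts):
    lat, lon = TOWERS[cell]
    print(f"{account} near ({lat}, {lon}) at {ts}")
```

Tower-level co-location is coarse — a single tower can cover miles — but as a lead generator it narrows a list of contacts down to the few worth knocking on doors for.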

The point of getting location data like this is not to evade a particular jurisdiction’s standards on CSLI. Smith explained, “The FBI apparently considers CSLI from smart phone apps the same as CSLI from the phone companies, so the same legal authorities apply to both, the only difference being that the ‘target device’ identifier is a WhatsApp/Snapchat account number instead of a phone number.” So in jurisdictions where you can get location data with an order, that’s what it takes; in jurisdictions where you need a probable cause warrant, that’s what it will take. The map above, which ACLU makes a great effort to keep up to date here, shows how jurisdictions differ on the standards for retrospective and prospective location information, which is what (as far as we know) will dictate what it would take to get, say, CSLI data tied to WhatsApp interactions.

Rather than serving as a way to get around legal standards, the reason to get CSLI from the app provider rather than the phone company that originally produces it is to get location data from both sides of a conversation, rather than just the target phone. That is, the app provides valuable context to the location data that you wouldn’t get just from the target’s cell location data.

The fact that the government is getting location data from mobile app providers — and the fact that they comply with the same standard for CSLI obtained from phones in any given jurisdiction — may help to explain a puzzle some have been pondering for the last week or so: why Facebook’s transparency report shows a big spike in wiretap warrants last year.

[T]he latest government requests report from Facebook revealed an unexpected and dramatic rise in real-time interceptions, or wiretaps. In the first six months of 2015, US law enforcement agencies sent Facebook 201 wiretap requests (referred to as “Title III” in the report) for 279 users or accounts. In all of 2014, on the other hand, Facebook only received 9 requests for 16 users or accounts.

Based on my understanding of what is required, this access of location data via WhatsApp should appear in several different categories of Facebook’s transparency report, including 2703(d), trap and trace, emergency request, and search warrant. That may include wiretap warrants, because this is, after all, prospective interception, and not just of the target, but also of the people with whom the target communicates. That may be why Facebook told Motherboard “we are not able to speculate about the types of legal process law enforcement chooses to serve,” because it really would vary from jurisdiction to jurisdiction and possibly even judge to judge.

In any case, we can be sure such requests are happening both on the criminal and the intelligence side, and perhaps most productively under PRISM (which could capture foreign to domestic communications at a much lower standard of review). Which, again, is why any legislation covering location data should cover the act of obtaining location data, whether via the phone company, a Stingray, or a mobile app provider.

It’s Harder for FBI to Get Location Data from Phone Companies Under FISA than Other Ways

I was looking for something else on Ron Wyden’s website yesterday and noticed this exchange between Wyden and Jim Comey from January 29, 2014 (see my transcription below). At first it seemed to be another of Wyden’s persistent questions about how the government collects location data — which we generally assume to be via telephone provider or Stingray — but then I realized he was asking something somewhat different. After asking about Cell Site Location Information from phone companies, Wyden then asked whether the FBI uses the same standard (an order, presumably a Pen Register) when collecting location from a smart phone app.

Oh yeah! The government can collect location information via apps (and thereby from Google, WhatsApp, or other providers) as well.

Here’s the FBI’s response, which hasn’t been published before.

The response is interesting for several reasons, some of which may explain why the government hasn’t been getting all the information from cell phones that it wanted under the Section 215 phone dragnet.

First, when the FBI is getting prospective CSLI, it gets a full FISA order, based on a showing of probable cause (it can get historical data using just an order). The response to Wyden notes that while some jurisdictions permit obtaining location data with just an order, because others require warrants, “the FBI elects to seek prospective CSLI pursuant to a full content FISA order, thus matching the higher standard imposed in some U.S. districts.”

Some of this FISA practice was discussed in 2006, in response to some magistrates’ rulings that you needed more than an order to get location data, though there are obviously more recent precedents that are stricter about needing a warrant.

This means it is actually harder right now to get prospective CSLI under FISA than it is under Title III in some states. (The letter also notes that sometimes the FBI “will use criminal legal authorities in national security investigations,” which probably means FBI will do so in those states with a lower standard.)

The FBI’s answer about smart phone apps was far squirrelier. It did say that when obtaining information from the phone itself, it gets a full-content FISA order, absent any exception to the Fourth Amendment (such as the border exception, which is one of many reasons FBI loves to search phones at the border and therefore hates Apple’s encryption). Note that this March 6, 2014 response came before the June 24, 2014 Riley v. California decision that required a warrant to search a cell phone, which means FISA practice was held to a higher standard there, too, until SCOTUS caught up.

But as to getting information from smartphone apps themselves, here’s what FBI answered.

Which legal authority we would use is very much dependent upon the type of information we are seeking and how we intend to obtain that information. Questions considered include whether or not the information sought would target an individual in an area in which that person has a reasonable expectation of privacy, what type of data we intend to obtain (GPS or other similarly precise location information), and how we intend to obtain the data (via a request for records from the service provider or from the mobile device itself).

In other words, after having thought about how to answer Wyden for five weeks rather than the one week they had promised, they didn’t entirely answer the question, which was what it would take for the FBI to get information from apps, rather than cell phone providers — though I think that may be the same standard as for CSLI from a cell phone company.

But this seems to say that, in the FISA context, it may well be easier — and require a lower standard of evidence — for the FBI to get location data from a Stingray.

This explains why Wyden’s location bill — which he was pushing just the other day, after the Supreme Court refused to take Quartavious Davis’ appeal — talks about location collection generally, rather than collection via (for example) a Stingray.

Wyden: I’d like to ask you about the government’s authority to track individuals using things like cell site location information and smart phone applications. Last fall the NSA Director testified that “we–the NSA–identify a number we can give that to the FBI. When they get their probable cause then they can get the locational information they need.”

I’ve been asking the NSA to publicly clarify these remarks but it hasn’t happened yet. So, is the FBI required to have probable cause in order to acquire Americans’ cell site location information for intelligence purposes?

Comey: I don’t believe so Senator. We — in almost all circumstances — we have to obtain a court order but the showing is “a reasonable basis to believe it’s relevant to the investigation.”

Wyden: So, you don’t have to show probable cause. You have cited another standard. Is that standard different if the government is collecting the location information from a smart phone app rather than a cell phone tower?

Comey: I don’t think I know, I probably ought to ask someone who’s a little smarter what the standard is that governs those. I don’t know the answer sitting here.

Wyden: My time is up. Can I have an answer to that within a week?

Comey: You sure can.

CISA Overwhelmingly Passes, 74-21

Update: Thought I’d put together a list of Senators people should thank for voting against CISA.

GOP: Crapo, Daines, Heller, Lee, Risch, and Sullivan. (Paul voted against cloture but did not vote today.)

Dems: Baldwin, Booker, Brown, Cardin, Coons, Franken, Leahy, Markey, Menendez, Merkley, Sanders, Tester, Udall, Warren, Wyden

Just now, the Senate voted to pass the Cybersecurity Information Sharing Act by a vote of 74 to 21. While 7 more people voted against the bill than had voted against cloture last week (Update: the new votes were Cardin and Tester, Crapo, Daines, Heller, Lee, Risch, and Sullivan, with Paul not voting), this is still a resounding vote for a bill that will authorize domestic spying with no court review in this country.

The amendment voting process was interesting in its own right. Most appallingly, just after Patrick Leahy cast his 15,000th vote on another amendment — which led to a break to talk about what a wonderful person he is, as well as a speech from him about how the Senate is the conscience of the country — Leahy’s colleagues voted 59 to 37 against his amendment that would have stopped the creation of a new FOIA exemption for CISA. So right after honoring Leahy, his colleagues kicked one of his key issues, FOIA, in the ass.

More telling, though, were the votes on the Wyden and Heller amendments, the first two that came up today.

Wyden’s amendment would have required more stringent scrubbing of personal data before sharing it with the federal government. The amendment failed by a vote of 41-55 — still a big margin of defeat, but enough support to sustain a filibuster. Particularly given that Harry Reid switched votes at the last minute, I believe that vote was designed to show enough support for a better bill to strengthen the hand of those pushing for that in conference (the House bills are better on this point). The amendment had the support of a number of Republicans — Crapo, Daines, Gardner, Heller, Lee, Murkowski, and Sullivan — some of whom would vote against passage. Most of the Democrats who voted against Wyden’s amendment — Carper, Feinstein, Heitkamp, Kaine, King, Manchin, McCaskill, Mikulski, Nelson, Warner, Whitehouse — consistently voted against any amendment that would improve the bill (and Whitehouse even voted for Tom Cotton’s bad amendment).

The vote on Heller’s amendment looked almost nothing like Wyden’s. Sure, the amendment would have changed just two words in the bill, requiring the government to have a higher standard for information it shared internally. But it got a very different crowd supporting it, with a range of authoritarian Republicans — Barrasso, Cassidy, Enzi, Ernst, and Hoeven — voting in favor. That made the vote much closer. So Reid, along with at least 7 other Democrats who had voted for Wyden’s amendment — Brown, Klobuchar, Murphy, Schatz, Schumer, Shaheen, and Stabenow — voted against Heller’s weaker amendment. While some of these Democrats — Klobuchar, Schumer, and probably Shaheen and Stabenow — are affirmatively pro-unconstitutional spying anyway, the swing, especially from Sherrod Brown, who voted against the bill as a whole, makes it clear that these were opportunistic votes to achieve an outcome. Heller’s amendment fell just short, 47-49, and would have passed had some of those Dems voted in favor (the GOP Presidential candidates were not present, but that probably would have been at best a wash and possibly a one vote net against, since Cruz voted for cloture last week). Ultimately, I think Reid and these other Dems are moving to try to deliver something closer to what the White House wants, which is still unconstitutional domestic spying.

Richard Burr seemed certain that this will go to conference, which means people like him, DiFi, and Tom Carper will try to make this worse as people from the House point out that there are far more people who oppose this kind of unfettered spying in the House. We shall see.

For now, however, the Senate has embraced a truly awful bill.

Update: all amendment roll calls (yea-nay-not voting)

Wyden: 41-55-4

Heller: 47-49-4

Leahy: 37-59-4

Franken: 35-60-5

Coons: 41-54-5

Cotton amendment: 22-73-5

Final passage: 74-21-5

Richard Burr Wants to Prevent Congress from Learning if CISA Is a Domestic Spying Bill

As I noted in my argument that CISA is designed to do what NSA and FBI wanted an upstream cybersecurity certificate to do, but couldn’t get the FISA Court to approve, there’s almost no independent oversight of the new scheme. There are just IG reports — mostly assessing the efficacy of the information sharing and the protection of classified information shared with the private sector — and a PCLOB review. As I noted, history shows that even when both are well-intentioned and diligent, that doesn’t ensure they can demand fixes to abuses.

So I’m interested in what Richard Burr and Dianne Feinstein did with Jon Tester’s attempt to improve the oversight mandated in the bill.

The bill mandates three different kinds of biennial reports on the program: detailed IG Reports from all agencies to Congress, which will be unclassified with a classified appendix, a less detailed PCLOB report that will be unclassified with a classified appendix, and a less detailed unclassified IG summary of the first two. Note, this scheme already means that House members will have to go out of their way and ask nicely to get the classified appendices, because those are routinely shared only with the Intelligence Committee.

Tester had proposed adding a series of transparency measures to the first, more detailed IG Reports to obtain more information about the program. Last week, Burr and DiFi rolled some transparency procedures loosely resembling Tester’s into the Manager’s amendment — adding transparency to the base bill, but ensuring Tester’s stronger measures could not get a vote. I’ve placed the three versions of transparency provisions below, with italicized annotations, to show the original language, Tester’s proposed changes, and what Burr and DiFi adopted instead.

Comparing them reveals Burr and DiFi’s priorities — and what they want to hide about the implementation of the bill, even from Congress.

Prevent Congress from learning how often CISA data is used for law enforcement

Tester proposed a measure that would require reporting on how often CISA data gets used for law enforcement. There were two important aspects to his proposal: it required reporting not just on how often CISA data was used to prosecute someone, but also how often it was used to investigate them. That would require FBI to track lead sourcing in a way they currently refuse to. It would also create a record of investigative source that — in the unlikely event that a defendant actually got a judge to support demands for discovery on such things — would make it very difficult to use parallel construction to hide CISA-sourced data.

In addition, Tester would have required some granularity to the reporting, splitting out fraud, espionage, and trade secrets from terrorism (see clauses VII and VIII). Effectively, this would have required FBI to report how often it uses data obtained pursuant to an anti-hacking law to prosecute crimes that involve the Internet but aren’t hacking; it would have required some measure of how much this is really about bypassing Title III warrant requirements.

Burr and DiFi replaced that with a count of how many prosecutions derived from CISA data. Not only does this not distinguish between hacking crimes (what this bill is supposed to be about) and crimes that use the Internet (what it is probably about), but it also would invite FBI to simply disappear this number, from both Congress and defendants, by using parallel construction to hide the CISA source of this data.

Prevent Congress from learning how often CISA sharing falls short of the current NSA minimization standard

Tester also asked for reporting (see clause V) on how often personal information or information identifying a specific person was shared when it was not “necessary to describe or mitigate a cybersecurity threat or security vulnerability.” The “necessary to describe or mitigate” is quite close to the standard NSA currently has to meet before it can share US person identities (the NSA can share that data if it’s necessary to understand the intelligence; though Tester’s amendment would apply to all people, not just US persons).

But Tester’s standard is different than the standard of sharing adopted by CISA. CISA only requires agencies to strip personal data if it is “not directly related to a cybersecurity threat.” Of course, any data collected along with a cybersecurity threat — even victim data, including the data a hacker was trying to steal — is “related to” that threat.

Burr and DiFi changed Tester’s amendment by first adopting a form of a Wyden amendment requiring notice to people whose data got shared in ways not permitted by the bill (which implicitly adopts that “related to” standard), and then requiring reporting on how many people got notices — which will only happen if the government affirmatively learns that data wasn’t related to a threat but got shared anyway. Those notices are almost never going to happen. So the number will be close to zero, instead of the tens of thousands, at least, that would have shown under Tester’s measure.

So in adopting this change, Burr and DiFi are hiding the fact that under CISA, US person data will get shared far more promiscuously than it would under the current NSA regime.

Prevent Congress from learning how well the privacy strips — at both private sector and government — are working

Tester also would have required the government to report how much personal data got stripped by DHS (see clause IV). This would have measured how often private companies were handing over data that had personal data that probably should have been stripped. Combined with Tester’s proposed measure of how often data gets shared that’s not necessary to understanding the indicator, it would have shown, at each stage of the data sharing, how much personal data was getting shared.

Burr and DiFi stripped that entirely.

Prevent Congress from learning how often “defensive measures” cause damage

Tester would also have required reporting on how often defensive measures (the bill’s euphemism for countermeasures) cause known harm (see clause VI). This would have alerted Congress if one of the foreseeable harms from this bill — that “defensive measures” will cause damage to the Internet infrastructure or other companies — had taken place.

Burr and DiFi stripped that really critical measure.

Prevent Congress from learning whether companies are bypassing the preferred sharing method

Finally, Tester would have required reporting on how many indicators came in through DHS (clause I), how many came in through civilian agencies like FBI (clause II), and how many came in through military agencies, aka NSA (clause III). That would have provided a measure of how much data was getting shared in ways that might bypass what few privacy and oversight mechanisms this bill has.

Burr and DiFi replaced that with a measure solely of how many indicators get shared through DHS, which effectively sanctions alternative sharing.

That Burr and DiFi watered down Tester’s measures so much makes two things clear. First, they don’t want to count some of the things that will be most important to count to see whether corporations and agencies are abusing this bill. They don’t want to count measures that will reveal if this bill does harm.

Most importantly, though, they want to keep this information from Congress. This information would almost certainly not show up to us in unclassified form; it would just be shared with some members of Congress (and on the House side, just be shared with the Intelligence Committee unless someone asks nicely for it).

But Richard Burr and Dianne Feinstein want to ensure that Congress doesn’t get that information. Which would suggest they know the information would reveal things Congress might not approve of.


Sheldon Whitehouse’s Horrible CFAA Amendment Gets Pulled — But Will Be Back in Conference

As I noted yesterday, Ron Wyden objected to unanimous consent on CISA because Sheldon Whitehouse’s crappy amendment, which makes the horrible CFAA worse, was going to get a vote. Yesterday, it got amended, but as CDT analyzed, it remains problematic and overbroad.

This afternoon, Whitehouse took to the Senate floor to complain mightily that his amendment had been pulled — presumably it was pulled to get Wyden to withdraw his objections. Whitehouse complained as if this were the first time amendments had not gotten a vote, though that happens all the time with amendments that support civil liberties. He raged about the Masters of the Universe who had pulled his amendment, and suggested a pro-botnet conference had forced the amendment to be pulled, rather than people who have very sound reasons to believe the amendment was badly drafted and dangerously expanded DOJ’s authority.

For all Whitehouse’s complaining, though, it’s likely the amendment is not dead. Tom Carper, who as Ranking Member of the Senate Homeland Security Committee would almost certainly be included in any conference on the bill, rose just after Whitehouse. He said if the provision ends up in the bill, “we will conference, I’m sure, with the House and we will have an opportunity to revisit this, so I just hope you’ll stay in touch with those of us who might be fortunate enough to be a conferee.”

CISA Moves: A Summary

This afternoon, Aaron Richard Burr moved the Cybersecurity Information Sharing Act forward by introducing a manager’s amendment that has limited privacy tweaks (permitting a scrub at DHS and limiting the use of CISA information to cyber crimes — though the permitted uses still include preventing threats to property), with a bunch of bigger privacy fix amendments, plus a Tom Cotton one and a horrible Sheldon Whitehouse one, called as non-germane amendments requiring 60 votes.

Other than that, Burr, Dianne Feinstein, and Ron Wyden spoke on the bill.

Burr did some significant goalpost moving. Whereas in the past he had suggested that CISA might have prevented the Office of Personnel Management hack, today he suggested CISA would limit how much data got stolen in a series of hacks. His claim is still false (in almost all the hacks he discussed, the attack vector was already known, but knowing it did nothing to prevent the continued hack).

Burr also likened this bill to a neighborhood watch, where everyone in the neighborhood looks out for the entire neighborhood. He neglected to mention that that neighborhood watch would also include the nosy granny type who reports every brown person in the neighborhood, and features self-defense just like George Zimmerman’s neighborhood watch concept does. Worse, Burr suggested that those not participating in his neighborhood watch had no protection, effectively suggesting that some of the companies best at securing themselves — like Google — were not protecting customers. Burr even suggested he didn’t know anything about the companies that oppose the bill, which is funny, because Twitter opposes the bill, and Burr has a Twitter account.

Feinstein was worse. She mentioned the OPM hack and then really suggested that a series of other hacks — including both the Sony hack and the DDoS attacks on online banking sites, which stole no data! — were worse than the OPM hack.

Yes, the Vice Chair of SSCI really did say that the OPM hack was less serious than a bunch of other hacks that didn’t affect the national security of this country. Which, if I were one of the 21 million people whose security clearance data had been compromised, would make me very very furious.

DiFi also used language that made it clear she doesn’t really understand how the information sharing portal works. She said something like, “Once cyber information enters the portal it will move at machine speed to other federal agencies,” as if a conveyor belt will carry information from DHS to FBI.
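For what it’s worth, the bill does contemplate automated, real-time dissemination of indicators from the DHS portal to other agencies — so here is a minimal sketch of what that relay step might look like, with the indicator loosely modeled on the machine-readable STIX format used in government and industry threat sharing. The endpoints, field values, and overall plumbing are hypothetical stand-ins, not the actual portal.

```python
import json
import urllib.request

# A toy threat indicator, loosely modeled on STIX. All values are made up;
# 203.0.113.0/24 is a reserved documentation range.
indicator = {
    "type": "indicator",
    "id": "indicator--0001",
    "pattern": "[ipv4-addr:value = '203.0.113.42']",
    "labels": ["malicious-activity"],
    "description": "C2 server seen in phishing campaign (hypothetical)",
}

# Hypothetical downstream endpoints standing in for other federal agencies.
DOWNSTREAM = [
    "https://portal.example.gov/fbi/indicators",
    "https://portal.example.gov/nsa/indicators",
]

def relay(indicator, endpoints):
    """POST the indicator to every downstream endpoint. 'Machine speed'
    means receipt and dissemination are one automated step; whatever
    scrub happens has to happen inside this hop."""
    body = json.dumps(indicator).encode()
    for url in endpoints:
        req = urllib.request.Request(
            url, data=body, headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)

# relay(indicator, DOWNSTREAM)  # would fire the POSTs to the toy endpoints
```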

Wyden mostly pointed out that this bill doesn’t protect privacy. But he did call out Burr on his goalpost moving on whether the bill would prevent (his old claim) or just limit the damage of (his new one) attacks that it wouldn’t affect at all.

Wyden did, however, object to unanimous consent because Whitehouse’s crappy amendment was being given a vote, which led Burr to complain that Wyden wasn’t going to hold this up.

Finally, Burr came back on the floor, not only to badmouth companies that oppose this bill again (and insist it was voluntary so they shouldn’t care) but also to do what I thought even he wouldn’t do: suggest we need to pass CISA because a 13-year-old stoner hacked the CIA Director.

BREAKING: OPM and DOD (Claim They) Don’t Think Fingerprint Databases Are All That Useful

In the most negative news dump released behind the cover of Pope Francis’ skirts, the Office of Personnel Management just announced that, rather than the 1.1 million people previously reported, 5.6 million people have had their fingerprints stolen from OPM’s databases.

Aside from the big numbers involved, there are several interesting aspects of this announcement.

First, it seems OPM had an archive of records on 4.5 million people, including fingerprint data, that it hadn’t previously realized was there.

As part of the government’s ongoing work to notify individuals affected by the theft of background investigation records, the Office of Personnel Management and the Department of Defense have been analyzing impacted data to verify its quality and completeness. During that process, OPM and DoD identified archived records containing additional fingerprint data not previously analyzed.

If, as it appears, this means OPM had databases of key counterintelligence data lying around it wasn’t aware of (and therefore wasn’t using), it suggests Ron Wyden’s concern that the government is retaining data unnecessarily is absolutely correct.

Rather bizarrely, upon learning that someone found and went through archived databases to obtain more fingerprint data, “federal experts” claim that “as of now, the ability to misuse fingerprint data is limited.”

As EFF just revealed, since February the FBI has been busy adding fingerprint data it gets when it does background checks on job applicants into its Next Generation Identification database.

Being a job seeker isn’t a crime. But the FBI has made a big change in how it deals with fingerprints that might make it seem that way. For the first time, fingerprints and biographical information sent to the FBI for a background check will be stored and searched right along with fingerprints taken for criminal purposes.

The change, which the FBI revealed quietly in a February 2015 Privacy Impact Assessment (PIA), means that if you ever have your fingerprints taken for licensing or for a background check, they will most likely end up living indefinitely in the FBI’s NGI database. They’ll be searched thousands of times a day by law enforcement agencies across the country—even if your prints didn’t match any criminal records when they were first submitted to the system.

This is the first time the FBI has allowed routine criminal searches of its civil fingerprint data. Although employers and certifying agencies have submitted prints to the FBI for decades, the FBI says it rarely retained these non-criminal prints. And even when it did retain prints in the past, they “were not readily accessible or searchable.” Now, not only will these prints—and the biographical data included with them—be available to any law enforcement agent who wants to look for them, they will be searched as a matter of course along with all prints collected for a clearly criminal purpose (like upon arrest or at time of booking).
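To illustrate mechanically what “searched as a matter of course” means, here is a toy sketch of a combined-gallery search — one probe print matched against civil and criminal records in a single pass. The matcher, threshold, and records are hypothetical placeholders; NGI’s actual matching algorithms are not public.

```python
# Toy combined-gallery fingerprint search. A "template" here is a stand-in
# for a real minutiae template; similarity() is a placeholder matcher.

CIVIL = [  # prints retained from background checks (job seekers, licensees)
    {"name": "applicant_1", "template": {12, 40, 77, 91}},
]
CRIMINAL = [  # prints taken at arrest or booking
    {"name": "arrestee_1", "template": {12, 40, 63, 91}},
]

def similarity(a, b):
    """Placeholder matcher: Jaccard overlap of toy minutiae sets."""
    return len(a & b) / len(a | b)

def search(probe, threshold=0.5):
    """Search the probe against BOTH galleries in one pass — the change
    EFF describes: civil prints are no longer segregated from criminal."""
    hits = []
    for record in CIVIL + CRIMINAL:
        score = similarity(probe, record["template"])
        if score >= threshold:
            hits.append((record["name"], round(score, 2)))
    return hits

# A latent print from a crime scene now hits the job applicant too.
print(search({12, 40, 77, 88}))  # -> [('applicant_1', 0.6)]
```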

In its PIA explaining the move, FBI boasts that this will serve as “an ‘ongoing’ background check that permits employers, licensors, and other authorized entities to learn of criminal conduct by a trusted individual.” To suggest that a massive database of fingerprints can provide the FBI real-time updates on certain behaviors, but pretend it wouldn’t serve a similar purpose to the Chinese, defies logic. Heck, why is OPM keeping fingerprint information if it can’t be used? And of course, all that assumes none of the 5.6 million people affected has a fingerprint-authenticating iPhone.

Of course this can be used, otherwise the Chinese wouldn’t have gone out of their way to get it!

But OPM’s claim that the Chinese just went out of their way to get that fingerprint data for no good reason provides the agency with a way to delay notification while FBI, DHS, DOD and “other members of the Intelligence Community” come up with ways to limit the damage of this.

If, in the future, new means are developed to misuse the fingerprint data, the government will provide additional information to individuals whose fingerprints may have been stolen in this breach.

After which OPM spends two paragraphs talking about the identity protection that those whose identities have been stolen will get, as if that mitigates a huge counterintelligence problem.

It sure sounds like OPM is stalling on informing the people who’ve been exposed about how badly they’ve been exposed, under the incredible claim that databases of fingerprints aren’t all that useful.

National Counterintelligence Director Evanina about OPM Breach: “Not My Job”

I’ve been tracking Ron Wyden’s efforts to learn whether the National Counterintelligence and Security Center had anticipated how much of a counterintelligence bonanza the Office of Personnel Management’s databases would be. Wyden sent National Counterintelligence Executive William Evanina a set of questions last month.

  1. Did the NCSC identify OPM’s security clearance database as a counterintelligence vulnerability prior to these security incidents?
  2. Did the NCSC provide OPM with any recommendations to secure this information?
  3. At least one official has said that the background investigation information compromised in the second OPM hack included information on individuals as far back as 1985. Has the NCSC evaluated whether the retention requirements for background investigation information should be reduced to mitigate the vulnerability of maintaining personal information for a significant period of time? If not, please explain why existing retention periods are necessary?

Evanina just responded. His answer to the first two questions was basically, “Not my job.”

In response to the first two questions, under the statutory structure established by the Federal Information Security Management Act of 2002 (FISMA), as amended, executive branch oversight of agency information security policies and practices rests with the Office of Management and Budget (OMB) and the Department of Homeland Security (DHS). For agencies with Inspectors General (IG) appointed under the Inspector General Act of 1978 (OPM is one of those agencies), independent annual evaluations of each agency’s adherence to the instructions of OMB and DHS are carried out by the agency’s IG or an independent external auditor chosen by the agency’s IG. These responsibilities are discussed in detail in OMB’s most recent annual report to Congress on FISMA implementation. The statutory authorities of the National Counterintelligence Executive, which is part of the NCSC, do not include either identifying information technology (IT) vulnerabilities to agencies or providing recommendations on how to secure their IT systems.

Of course, this doesn’t really answer the question, which is whether Evanina — or the NCSC generally — had identified OPM’s database full of clearance information as a critical CI asset. Steven Aftergood has argued it should have been, according to the Office of the Director of National Intelligence’s definition, if not its bureaucratic limits. Did the multiple IG reports showing OPM was vulnerable, going back to 2009 and continuing until this year, register on NCSC’s radar?

I’m guessing, given Evanina’s silence on that issue, the answer is no.

No, the folks in charge of CI didn’t notice that this database of millions of clearance holders’ records might be a juicy intelligence target. Not his job to notice.

Evanina’s response to the third question — whether the government really had to keep records going back to Reagan’s second term — was no more satisfying.

[T]he timelines for retention of personnel security files were established by the National Archives General Records Schedule 18, Item 22 (September 2014). While it is possible that we may incur certain vulnerabilities with the retention of background investigation information over a significant period of time, its retention has value for personnel security purposes. The ability to assess the “whole person” over a long period of time enables security clearance adjudicators to identify and address any issues (personnel security or counterintelligence-related) that may exist or may arise.

In other words, just one paragraph after having said it’s not his job to worry about the CI implications of keeping 21 million clearance holders’ records in a poorly secured database, the Counterintelligence Executive said the government needed to keep those records (because the government passed a policy deciding they’d keep those just a year ago) for counterintelligence purposes.

In a statement on the response, Wyden, like me, reads it as Evanina insisting this key CI role is not his job. To which Wyden adds that putting more data in the hands of these insecure agencies under CISA would only exacerbate this problem.

The OPM breach had a huge counterintelligence impact and the only response by the nation’s top counterintelligence officials is to say that it wasn’t their job. This is a bureaucratic response to a massive counter-intelligence failure and unworthy of individuals who are being trusted to defend America. While the National Counterintelligence and Security Center shouldn’t need to advise agencies on how to improve their IT security, it must identify vulnerabilities so that the relevant agencies can take the necessary steps to secure their data.

The Senate is now trying to respond to the OPM hack by passing a bill that would lead to more personal information being shared with these agencies. The way to improve cybersecurity is to ensure that network owners take responsibility for plugging security holes, not encourage the sharing of personal information with agencies that can’t protect it adequately.

Somehow, the government kept a database full of some of its most important secrets on an insecure server, and the guy in charge of counterintelligence can only respond that we had to do that to serve counterintelligence purposes.

Admiral Mike Rogers Virtually Confirms OPM Was Not on Counterintelligence Radar

For some time, those following the OPM hack have been asking where the intelligence community’s counterintelligence folks were. Were they aware of what a CI bonanza the database would present for foreign governments?

Lawfare’s Ben Wittes has been asking it for a while. Ron Wyden got more specific in a letter to the head of the National Counterintelligence and Security Center last month.

  1. Did the NCSC identify OPM’s security clearance database as a counterintelligence vulnerability prior to these security incidents?
  2. Did the NCSC provide OPM with any recommendations to secure this information?
  3. At least one official has said that the background investigation information compromised in the second OPM hack included information on individuals as far back as 1985. Has the NCSC evaluated whether the retention requirements for background investigation information should be reduced to mitigate the vulnerability of maintaining personal information for a significant period of time? If not, please explain why existing retention periods are necessary?

And Steven Aftergood, analyzing a 2013 Intelligence Community Directive released recently, noted that the OPM database should have been considered a critical counterintelligence asset.

A critical asset is “Any asset (person, group, relationship, instrument, installation, process, or supply at the disposition of an organization for use in an operational or support role) whose loss or compromise would have a negative impact on the capability of a department or agency to carry out its mission; or may have a negative impact on the ability of another U.S. Government department or agency to conduct its mission; or could result in substantial economic loss; or which may have a negative impact on the national security of the U.S.”

By any reasonable definition, the Office of Personnel Management database of security clearance background investigations for federal employees and contractors that was recently compromised by a foreign adversary would appear to qualify as a “critical asset.” But since OPM is not a member or an element of the Intelligence Community, it appears to fall outside the scope of this directive.

But in a private event at the Wilson Center last night, NSA Director Mike Rogers described NSA being brought in to help OPM — but only after OPM had identified the hack.

After the intrusion, “as we started more broadly to realize the implications of OPM, to be quite honest, we were starting to work with OPM about how could we apply DOD capability, if that is what you require,” Rogers said at an invitation-only Wilson Center event, referring to his role leading CYBERCOM.

NSA, meanwhile, provided “a significant amount of people and expertise to OPM to try to help them identify what had happened, how it happened and how we should structure the network for the future,” Rogers added.

That “as we started more broadly to realize the implications of OPM” is the real tell, though. It sure sounds like the Chinese were better able to understand the value of a database containing the security clearance portfolios on many government personnel than our own counterintelligence people were.


The Questions the NCSC Doesn’t Want to Answer

A few days ago the WaPo published a story on the OPM hack, focusing (as some earlier commentary already has) on the possibility China will alter intelligence records as part of a way to infiltrate agents or increase distrust.

It’s notable because it relies on the Director of the National Counterintelligence and Security Center, Bill Evanina. The article first presents his comments about that nightmare scenario — altered records.

“The breach itself is issue A,” said William “Bill” Evanina, director of the federal National Counterintelligence and Security Center. But what the thieves do with the information is another question.

“Certainly we are concerned about the destruction of data versus the theft of data,” he said. “It’s a different type of bad situation.” Destroyed or altered records would make a security clearance hard to keep or get.

And only then relays Evanina’s concerns about the more general counterintelligence concerns raised by the heist, that China will use the data to target people for recruitment. Evanina explains he’s more worried about those without extensive operational security training than those overseas who have that experience.

While dangers from the breach for intelligence community workers posted abroad have “the highest risk equation,” Evanina said “they also have the best training to prevent nefarious activity against them. It’s the individuals who don’t have that solid background and training that we’re most concerned with, initially, to provide them with awareness training of what can happen from a foreign intelligence service to them and what to look out for.”

Using stolen personal information to compromise intelligence community members is always a worry.

“That’s a concern we take seriously,” he said.

Curiously, given his concern about those individuals without a solid CI background, Evanina provides no hint of an answer to the questions posed to him in a Ron Wyden letter last week.

  1. Did the NCSC identify OPM’s security clearance database as a counterintelligence vulnerability prior to these security incidents?
  2. Did the NCSC provide OPM with any recommendations to secure this information?
  3. At least one official has said that the background investigation information compromised in the second OPM hack included information on individuals as far back as 1985. Has the NCSC evaluated whether the retention requirements for background investigation information should be reduced to mitigate the vulnerability of maintaining personal information for a significant period of time? If not, please explain why existing retention periods are necessary?

Evanina has asserted he’s particularly worried about the kind of people who would have clearance but not be in one of the better protected (CIA) databases. But was he particularly worried about those people — and therefore OPM’s databases — before the hack?
