Richard Burr Wants to Prevent Congress from Learning if CISA Is a Domestic Spying Bill

As I noted in my argument that CISA is designed to do what NSA and FBI wanted an upstream cybersecurity certificate to do, but couldn’t get FISA to approve, there’s almost no independent oversight of the new scheme. There are just IG reports — mostly assessing the efficacy of the information sharing and the protection of classified information shared with the private sector — and a PCLOB review. As I noted, history shows that even when both are well-intentioned and diligent, that doesn’t ensure they can demand fixes to abuses.

So I’m interested in what Richard Burr and Dianne Feinstein did with Jon Tester’s attempt to improve the oversight mandated in the bill.

The bill mandates three different kinds of biennial reports on the program: detailed IG Reports from all agencies to Congress, which will be unclassified with a classified appendix, a less detailed PCLOB report that will be unclassified with a classified appendix, and a less detailed unclassified IG summary of the first two. Note, this scheme already means that House members will have to go out of their way and ask nicely to get the classified appendices, because those are routinely shared only with the Intelligence Committee.

Tester had proposed adding a series of transparency measures to the first, more detailed IG Reports to obtain more information about the program. Last week, Burr and DiFi rolled some transparency procedures loosely resembling Tester’s into the Manager’s amendment — adding transparency to the base bill, but ensuring Tester’s stronger measures could not get a vote. I’ve placed the three versions of transparency provisions below, with italicized annotations, to show the original language, Tester’s proposed changes, and what Burr and DiFi adopted instead.

Comparing them reveals Burr and DiFi’s priorities — and what they want to hide about the implementation of the bill, even from Congress.

Prevent Congress from learning how often CISA data is used for law enforcement

Tester proposed a measure that would require reporting on how often CISA data gets used for law enforcement. There were two important aspects to his proposal: it required reporting not just on how often CISA data was used to prosecute someone, but also how often it was used to investigate them. That would require FBI to track lead sourcing in a way they currently refuse to. It would also create a record of investigative source that — in the unlikely event that a defendant actually got a judge to support demands for discovery on such things — would make it very difficult to use parallel construction to hide CISA-sourced data.

In addition, Tester would have required some granularity to the reporting, splitting out fraud, espionage, and trade secrets from terrorism (see clauses VII and VIII). Effectively, this would have required FBI to report how often it uses data obtained pursuant to an anti-hacking law to prosecute crimes that involve the Internet that aren’t hacking; it would have required some measure of how much this is really about bypassing Title III warrant requirements.

Burr and DiFi replaced that with a count of how many prosecutions derived from CISA data. Not only does this not distinguish between hacking crimes (what this bill is supposed to be about) and crimes that use the Internet (what it is probably about), but it also would invite FBI to simply disappear this number, from both Congress and defendants, by using parallel construction to hide the CISA source of this data.

Prevent Congress from learning how often CISA sharing falls short of the current NSA minimization standard

Tester also asked for reporting (see clause V) on how often personal information or information identifying a specific person was shared when it was not “necessary to describe or mitigate a cybersecurity threat or security vulnerability.” The “necessary to describe or mitigate” is quite close to the standard NSA currently has to meet before it can share US person identities (the NSA can share that data if it’s necessary to understand the intelligence; though Tester’s amendment would apply to all people, not just US persons).

But Tester’s standard is different than the standard of sharing adopted by CISA. CISA requires agencies to strip personal data only if it is “not directly related to a cybersecurity threat.” Of course, any data collected along with a cybersecurity threat — even victim data, including the data a hacker was trying to steal — is “related to” that threat.

Burr and DiFi changed Tester’s amendment by first adopting a form of a Wyden amendment requiring notice to people whose data got shared in ways not permitted by the bill (which implicitly adopts that “related to” standard), and then requiring reporting on how many people got notices. Those notices will only be sent if the government affirmatively learns that data wasn’t related to a threat but got shared anyway, and they are almost never going to happen. So the number will be close to zero, instead of the probably tens of thousands, at least, that would have shown up under Tester’s measure.

So in adopting this change, Burr and DiFi are hiding the fact that under CISA, US person data will get shared far more promiscuously than it would under the current NSA regime.

Prevent Congress from learning how well the privacy strips — at both private sector and government — are working

Tester also would have required the government to report how much personal data got stripped by DHS (see clause IV). This would have measured how often private companies were handing over data that had personal data that probably should have been stripped. Combined with Tester’s proposed measure of how often data gets shared that’s not necessary to understanding the indicator, it would have shown at each stage of the data sharing how much personal data was getting shared.

Burr and DiFi stripped that entirely.

Prevent Congress from learning how often “defensive measures” cause damage

Tester would also have required reporting on how often defensive measures (the bill’s euphemism for countermeasures) cause known harm (see clause VI). This would have alerted Congress if one of the foreseeable harms from this bill — that “defensive measures” will cause damage to the Internet infrastructure or other companies — had taken place.

Burr and DiFi stripped that really critical measure.

Prevent Congress from learning whether companies are bypassing the preferred sharing method

Finally, Tester would have required reporting on how many indicators came in through DHS (clause I), how many came in through civilian agencies like FBI (clause II), and how many came in through military agencies, aka NSA (clause III). That would have provided a measure of how much data was getting shared in ways that might bypass what few privacy and oversight mechanisms this bill has.

Burr and DiFi replaced that with a measure solely of how many indicators get shared through DHS, which effectively sanctions alternative sharing.

That Burr and DiFi watered down Tester’s measures so much makes two things clear. First, they don’t want to count some of the things that will be most important to count to see whether corporations and agencies are abusing this bill. They don’t want to count measures that will reveal if this bill does harm.

Most importantly, though, they want to keep this information from Congress. This information would almost certainly not show up to us in unclassified form; it would just be shared with some members of Congress (and on the House side, just with the Intelligence Committee, unless someone asks nicely for it).

But Richard Burr and Dianne Feinstein want to ensure that Congress doesn’t get that information. Which would suggest they know the information would reveal things Congress might not approve of.

Sheldon Whitehouse’s Horrible CFAA Amendment Gets Pulled — But Will Be Back in Conference

As I noted yesterday, Ron Wyden objected to unanimous consent on CISA yesterday because Sheldon Whitehouse’s crappy amendment, which makes the horrible CFAA worse, was going to get a vote. Yesterday, it got amended, but as CDT analyzed, it remains problematic and overbroad.

This afternoon, Whitehouse took to the Senate floor to complain mightily that his amendment had been pulled — presumably it was pulled to get Wyden to withdraw his objections. Whitehouse complained as if this were the first time amendments had not gotten a vote, though that happens all the time with amendments that support civil liberties. He raged about the Masters of the Universe who had pulled his amendment, and suggested a pro-botnet conference had forced the amendment to be pulled, rather than people who have very sound reasons to believe the amendment was badly drafted and dangerously expanded DOJ’s authority.

For all Whitehouse’s complaining, though, it’s likely the amendment is not dead. Tom Carper, who as Ranking Member of the Senate Homeland Security Committee would almost certainly be included in any conference on the bill, rose just after Whitehouse. He said if the provision ends up in the bill, “we will conference, I’m sure, with the House and we will have an opportunity to revisit this, so I just hope you’ll stay in touch with those of us who might be fortunate enough to be a conferee.”

CISA Moves: A Summary

This afternoon, Richard Burr moved the Cybersecurity Information Sharing Act forward by introducing a manager’s amendment that has limited privacy tweaks (permitting a scrub at DHS and limiting the use of CISA information to cyber crimes, which nevertheless include preventing threats to property), along with a bunch of bigger privacy fix amendments, plus a Tom Cotton amendment and a horrible Sheldon Whitehouse one, called up as non-germane amendments requiring 60 votes.

Other than that, Burr, Dianne Feinstein, and Ron Wyden spoke on the bill.

Burr did some significant goalpost moving. Whereas in the past, he had suggested that CISA might have prevented the Office of Personnel Management hack, today he suggested CISA would limit how much data got stolen in a series of hacks. His claim is still false (in almost all the hacks he discussed, the attack vector was already known, but knowing it did nothing to prevent the continued hack).

Burr also likened this bill to a neighborhood watch, where everyone in the neighborhood looks out for the entire neighborhood. He neglected to mention that that neighborhood watch would also include that nosy granny type who reports every brown person in the neighborhood, and features self-defense just like George Zimmerman’s neighborhood watch concept does. Worse, Burr suggested that those not participating in his neighborhood watch had no protection, effectively suggesting that some of the companies best at securing themselves — like Google — were not protecting customers. Burr even suggested he didn’t know anything about the companies that oppose the bill, which is funny, because Twitter opposes the bill, and Burr has a Twitter account.

Feinstein was worse. She mentioned the OPM hack and then really suggested that a series of other hacks — including both the Sony hack and the DDoS attacks on online banking sites that stole no data! — were worse than the OPM hack.

Yes, the Vice Chair of SSCI really did say that the OPM hack was less serious than a bunch of other hacks that didn’t affect the national security of this country. Which, if I were one of the 21 million people whose security clearance data had been compromised, would make me very very furious.

DiFi also used language that made it clear she doesn’t really understand how the information sharing portal works. She said something like, “Once cyber information enters the portal it will move at machine speed to other federal agencies,” as if a conveyor belt will carry information from DHS to FBI.

Wyden mostly pointed out that this bill doesn’t protect privacy. But he did call out Burr on his goalpost moving on whether the bill would prevent (his old claim) or just limit the damage of (his new one) attacks that it wouldn’t affect at all.

Wyden did, however, object to unanimous consent because Whitehouse’s crappy amendment was being given a vote, which led Burr to complain that Wyden wasn’t going to hold this up.

Finally, Burr came back on the floor, not only to bad mouth companies that oppose this bill again (and insist it was voluntary so they shouldn’t care) but also to do what I thought even he wouldn’t do: suggest we need to pass CISA because a 13 year old stoner hacked the CIA Director.

BREAKING: OPM and DOD (Claim They) Don’t Think Fingerprint Databases Are All That Useful

In the most negative news dump released behind the cover of Pope Francis’ skirts, the Office of Personnel Management just announced that, contrary to previous reports that 1.1 million people had had their fingerprints stolen from OPM’s databases, 5.6 million have.

Aside from the big numbers involved, there are several interesting aspects of this announcement.

First, it seems OPM had an archive of records on 4.5 million people, including fingerprint data, that it hadn’t realized was there at first.

As part of the government’s ongoing work to notify individuals affected by the theft of background investigation records, the Office of Personnel Management and the Department of Defense have been analyzing impacted data to verify its quality and completeness. During that process, OPM and DoD identified archived records containing additional fingerprint data not previously analyzed.

If, as it appears, this means OPM had databases of key counterintelligence data lying around that it wasn’t aware of (and therefore wasn’t using), it suggests Ron Wyden’s concern that the government is retaining data unnecessarily is absolutely correct.

Rather bizarrely, upon learning that someone found and went through archived databases to obtain more fingerprint data, “federal experts” claim that “as of now, the ability to misuse fingerprint data is limited.”

As EFF just revealed, since February the FBI has been busy adding fingerprint data it gets when it does background checks on job applicants into its Next Generation Identification database.

Being a job seeker isn’t a crime. But the FBI has made a big change in how it deals with fingerprints that might make it seem that way. For the first time, fingerprints and biographical information sent to the FBI for a background check will be stored and searched right along with fingerprints taken for criminal purposes.

The change, which the FBI revealed quietly in a February 2015 Privacy Impact Assessment (PIA), means that if you ever have your fingerprints taken for licensing or for a background check, they will most likely end up living indefinitely in the FBI’s NGI database. They’ll be searched thousands of times a day by law enforcement agencies across the country—even if your prints didn’t match any criminal records when they were first submitted to the system.

This is the first time the FBI has allowed routine criminal searches of its civil fingerprint data. Although employers and certifying agencies have submitted prints to the FBI for decades, the FBI says it rarely retained these non-criminal prints. And even when it did retain prints in the past, they “were not readily accessible or searchable.” Now, not only will these prints—and the biographical data included with them—be available to any law enforcement agent who wants to look for them, they will be searched as a matter of course along with all prints collected for a clearly criminal purpose (like upon arrest or at time of booking).

In its PIA explaining the move, FBI boasts that this will serve as “an ‘ongoing’ background check that permits employers, licensors, and other authorized entities to learn of criminal conduct by a trusted individual.” To suggest that a massive database of fingerprints can provide the FBI real-time updates on certain behaviors, but pretend it wouldn’t serve a similar purpose to the Chinese, defies logic. Heck, why is OPM keeping fingerprint information if it can’t be used? And of course, all that assumes none of the 5.6 million people affected has a fingerprint-authenticating iPhone.

Of course this can be used, otherwise the Chinese wouldn’t have gone out of their way to get it!

But OPM’s claim that the Chinese just went out of their way to get that fingerprint data for no good reason provides the agency with a way to delay notification while FBI, DHS, DOD and “other members of the Intelligence Community” come up with ways to limit the damage of this.

If, in the future, new means are developed to misuse the fingerprint data, the government will provide additional information to individuals whose fingerprints may have been stolen in this breach.

After which OPM spends two paragraphs talking about the identity protection those whose identities have been stolen will get, as if that mitigates a huge counterintelligence problem.

It sure sounds like OPM is stalling on informing the people who’ve been exposed about how badly they’ve been exposed, under the incredible claim that databases of fingerprints aren’t all that useful.

National Counterintelligence Director Evanina about OPM Breach: “Not My Job”

I’ve been tracking Ron Wyden’s efforts to learn whether the National Counterintelligence and Security Center had anticipated how much of a counterintelligence bonanza the Office of Personnel Management’s databases would be. Wyden sent National Counterintelligence Executive William Evanina a set of questions last month.

  1. Did the NCSC identify OPM’s security clearance database as a counterintelligence vulnerability prior to these security incidents?
  2. Did the NCSC provide OPM with any recommendations to secure this information?
  3. At least one official has said that the background investigation information compromised in the second OPM hack included information on individuals as far back as 1985. Has the NCSC evaluated whether the retention requirements for background investigation information should be reduced to mitigate the vulnerability of maintaining personal information for a significant period of time? If not, please explain why existing retention periods are necessary?

Evanina just responded. His answer to the first two questions was basically, “Not my job.”

In response to the first two questions, under the statutory structure established by the Federal Information Security Management Act of 2002 (FISMA), as amended, executive branch oversight of agency information security policies and practices rests with the Office of Management and Budget (OMB) and the Department of Homeland Security (DHS). For agencies with Inspectors General (IG) appointed under the Inspector General Act of 1978 (OPM is one of those agencies), independent annual evaluations of each agency’s adherence to the instructions of OMB and DHS are carried out by the agency’s IG or an independent external auditor chosen by the agency’s IG. These responsibilities are discussed in detail in OMB’s most recent annual report to Congress on FISMA implementation. The statutory authorities of the National Counterintelligence Executive, which is part of the NCSC, do not include either identifying information technology (IT) vulnerabilities to agencies or providing recommendations on how to secure their IT systems.

Of course, this doesn’t really answer the question, which is whether Evanina — or the NCSC generally — had identified OPM’s database full of clearance information as a critical CI asset. Steven Aftergood has argued it should have been, according to the Office of the Director of National Intelligence’s definition if not its bureaucratic limits. Did the multiple IG reports showing OPM was vulnerable, going back to 2009 and continuing until this year, register on NCSC’s radar?

I’m guessing, given Evanina’s silence on that issue, the answer is no.

No, the folks in charge of CI didn’t notice that this database of millions of clearance holders’ records might be a juicy intelligence target. Not his job to notice.

Evanina’s response to the third question — whether the government really had to keep records going back to Reagan’s second term — was no more satisfying.

[T]he timelines for retention of personnel security files were established by the National Archives General Records Schedule 18, Item 22 (September 2014). While it is possible that we may incur certain vulnerabilities with the retention of background investigation information over a significant period of time, its retention has value for personnel security purposes. The ability to assess the “whole person” over a long period of time enables security clearance adjudicators to identify and address any issues (personnel security or counterintelligence-related) that may exist or may arise.

In other words, just one paragraph after having said it’s not his job to worry about the CI implications of keeping 21 million clearance holders’ records in a poorly secured database, the Counterintelligence Executive said the government needed to keep those records (because the government passed a policy deciding they’d keep those just a year ago) for counterintelligence purposes.

In a statement on the response, Wyden, like me, reads it as Evanina insisting this key CI role is not his job. To which Wyden adds, putting more data in the hands of these insecure agencies under CISA would only exacerbate this problem.

The OPM breach had a huge counterintelligence impact and the only response by the nation’s top counterintelligence officials is to say that it wasn’t their job. This is a bureaucratic response to a massive counter-intelligence failure and unworthy of individuals who are being trusted to defend America. While the National Counterintelligence and Security Center shouldn’t need to advise agencies on how to improve their IT security, it must identify vulnerabilities so that the relevant agencies can take the necessary steps to secure their data.

The Senate is now trying to respond to the OPM hack by passing a bill that would lead to more personal information being shared with these agencies. The way to improve cybersecurity is to ensure that network owners take responsibility for plugging security holes, not encourage the sharing of personal information with agencies that can’t protect it adequately.

Somehow, the government kept a database full of some of its most important secrets on an insecure server, and the guy in charge of counterintelligence can only respond that we had to do that to serve counterintelligence purposes.

Admiral Mike Rogers Virtually Confirms OPM Was Not on Counterintelligence Radar

For some time, those following the OPM hack have been asking where the intelligence community’s counterintelligence folks were. Were they aware of what a CI bonanza the database would present for foreign governments?

Lawfare’s Ben Wittes has been asking it for a while. Ron Wyden got more specific in a letter to the head of the National Counterintelligence and Security Center last month.

  1. Did the NCSC identify OPM’s security clearance database as a counterintelligence vulnerability prior to these security incidents?
  2. Did the NCSC provide OPM with any recommendations to secure this information?
  3. At least one official has said that the background investigation information compromised in the second OPM hack included information on individuals as far back as 1985. Has the NCSC evaluated whether the retention requirements for background investigation information should be reduced to mitigate the vulnerability of maintaining personal information for a significant period of time? If not, please explain why existing retention periods are necessary?

And Steven Aftergood, analyzing a 2013 Intelligence Community Directive released recently, noted that the OPM database should have been considered a critical counterintelligence asset.

A critical asset is “Any asset (person, group, relationship, instrument, installation, process, or supply at the disposition of an organization for use in an operational or support role) whose loss or compromise would have a negative impact on the capability of a department or agency to carry out its mission; or may have a negative impact on the ability of another U.S. Government department or agency to conduct its mission; or could result in substantial economic loss; or which may have a negative impact on the national security of the U.S.”

By any reasonable definition, the Office of Personnel Management database of security clearance background investigations for federal employees and contractors that was recently compromised by a foreign adversary would appear to qualify as a “critical asset.” But since OPM is not a member or an element of the Intelligence Community, it appears to fall outside the scope of this directive.

But in a private event at the Wilson Center last night, NSA Director Mike Rogers described NSA being brought in to help OPM — but only after OPM had identified the hack.

After the intrusion, “as we started more broadly to realize the implications of OPM, to be quite honest, we were starting to work with OPM about how could we apply DOD capability, if that is what you require,” Rogers said at an invitation-only Wilson Center event, referring to his role leading CYBERCOM.

NSA, meanwhile, provided “a significant amount of people and expertise to OPM to try to help them identify what had happened, how it happened and how we should structure the network for the future,” Rogers added.

That “as we started more broadly to realize the implications of OPM” is the real tell, though. It sure sounds like the Chinese were better able to understand the value of a database containing the security clearance portfolios of many government personnel than our own counterintelligence people were.

Oops.

The Questions the NCSC Doesn’t Want to Answer

A few days ago the WaPo published a story on the OPM hack, focusing (as some earlier commentary already has) on the possibility China will alter intelligence records as part of a way to infiltrate agents or increase distrust.

It’s notable because it relies on the Director of the National Counterintelligence and Security Center, Bill Evanina. The article first presents his comments about that nightmare scenario — altered records.

“The breach itself is issue A,” said William “Bill” Evanina, director of the federal National Counterintelligence and Security Center. But what the thieves do with the information is another question.

“Certainly we are concerned about the destruction of data versus the theft of data,” he said. “It’s a different type of bad situation.” Destroyed or altered records would make a security clearance hard to keep or get.

And only then relays Evanina’s concerns about the more general counterintelligence concerns raised by the heist, that China will use the data to target people for recruitment. Evanina explains he’s more worried about those without extensive operational security training than those overseas who have that experience.

While dangers from the breach for intelligence community workers posted abroad have “the highest risk equation,” Evanina said “they also have the best training to prevent nefarious activity against them. It’s the individuals who don’t have that solid background and training that we’re most concerned with, initially, to provide them with awareness training of what can happen from a foreign intelligence service to them and what to look out for.”

Using stolen personal information to compromise intelligence community members is always a worry.

“That’s a concern we take seriously,” he said.

Curiously, given his concern about those individuals without a solid CI background, Evanina provides no hint of an answer to the questions posed to him in a Ron Wyden letter last week.

  1. Did the NCSC identify OPM’s security clearance database as a counterintelligence vulnerability prior to these security incidents?
  2. Did the NCSC provide OPM with any recommendations to secure this information?
  3. At least one official has said that the background investigation information compromised in the second OPM hack included information on individuals as far back as 1985. Has the NCSC evaluated whether the retention requirements for background investigation information should be reduced to mitigate the vulnerability of maintaining personal information for a significant period of time? If not, please explain why existing retention periods are necessary?

Evanina has asserted he’s particularly worried about the kind of people who would have clearance but not be in one of the better protected (CIA) databases. But was he particularly worried about those people — and therefore OPM’s databases — before the hack?

John Brennan Admits to Lying about Working with Human Rights Abusers

Back in May, I noted that in addition to an unclassified request that John Brennan correct his lies about CIA hacking the Senate Intelligence Committee torture investigators, Ron Wyden, Martin Heinrich, and Mazie Hirono also asked Brennan to correct a lie he told in March.

Additionally, we are attaching a separate classified letter regarding inaccurate public statements that you made on another topic in March 2015. We ask that you correct the public record regarding these statements immediately.

I suggested that Brennan probably lied in response to a request about working with human rights violators at a public speech at the Council on Foreign Relations.

QUESTION: I’m going to try to stand up. Sarah Leah Whitson, Human Rights Watch. Two days ago, ABC News ran some video and images of psychopathic murderers, thugs in the Iraqi security forces, carrying out beheadings, executions of children, executions of civilians. Human Rights Watch has documented Iraqi militias carrying out ISIS-like atrocities, executions of hundreds of captives and so forth.

And some of the allies in the anti-ISIS coalition are themselves carrying out ISIS-like atrocities, like beheadings in Saudi Arabia, violent attacks on journalists in Saudi Arabia—how do you think Iraqi Sunni civilians should distinguish between the good guys and the bad guys in this circumstance?

BRENNAN: It’s tough sorting out good guys and bad guys in a lot of these areas, it is. And human rights abuses, whether they take place on the part of ISIL or of militias or individuals who are working as part of formal security services, needs to be exposed, needs to be stopped.

And in an area like Iraq and Syria, there has been some horrific, horrific human rights abuses. And this is something that I think we need to be able to address. And when we see it, we do bring it to the attention of authorities. And we will not work with entities that are engaged in such activities.

(I even noted in real time he was refusing to respond to the part of the question about the Saudis.)

Brennan has now responded (Ali Watkins first reported on the letter on Friday). As part of his response, he admits that, contrary to his claim at CFR that “we will not work with entities that are engaged” in human rights abuses, in fact the CIA does — “because of critical intelligence those services provide.”

I understand your concerns about my brief, extemporaneous remarks. While we neither condone nor participate in activities that violate human rights standards, we do maintain cooperative liaison relationships with a variety of intelligence and security services around the world, some of whose constituent entities have engaged in human rights abuses. We strive to identify and, where possible, avoid working with individuals whom we believe to be responsible for such abuses. In some cases, we have decided to continue those relationships, despite unacceptable behavior, because of the critical intelligence those services provide, including information that allows us to disrupt terrorist plotting against the United States.

Mind you, his letter implies that his response pertains only to “Iraqi security forces,” and not — as was part of the original question, but not one Brennan even acknowledged in his response — our allies the head-chopping Saudis.

I would suggest, however, that the Saudis are far better described as a service that provides us critical intelligence, “including information that allows us to disrupt terrorist plotting against the US.” I’d frankly be shocked if Iraqi security forces even have that capability, not to mention that the terrorists in Iraq are pretty focused on setting up their caliphate in Iraq right now, not attacking the US.

So kudos to John Brennan for owning up to the general lie, that “we will not work with entities that are engaged in” human rights abuses, even if in owning up to it, the old Riyadh Station Chief is still protecting his buddies the Sauds.

Baby steps, I guess.


Feinstein Wants to Introduce Reporting Mandate Jim Comey Says We Don’t Need

I’ll have a piece in Salon shortly about the two hearings on whether FBI should be able to mandate back doors (they call them front doors because that fools some Senators about the security problems with that) in software.

One thing not in there, however, has to do with a bill the Senate Intelligence Committee is considering that would require Facebook, Twitter, and other social media companies to report terrorist content to authorities. ABC News, quoting Richard Clarke (who hasn't had an official role in government for some years but is on ABC's payroll), reported that the social media companies were not reporting terrorist content.

In the middle of the SSCI hearing on this topic, Dianne Feinstein asked Jim Comey whether social media companies were reporting such content. Comey said they are (he did say they’ve gotten far better of late). Feinstein asked whether there ought to be a law anyway, to mandate behavior the companies are already doing. Comey suggested it wasn’t necessary. Feinstein said maybe they should mandate it anyway, like they do for child porn.

All of which made it clear that such a law is unnecessary, even before you get into the severe problems with the law (such as defining who is a terrorist and what counts as terrorist content).

SSCI will probably pass it anyway, because that's how they respond to threats of late: by passing legislation that won't address them.

Note, Feinstein also got visibly and audibly and persistently pissed at Ron Wyden for accurately describing what Deputy Attorney General Sally Yates had said she wanted in an earlier hearing: for providers to have keys that the FBI could use. Feinstein seems to believe good PR will eliminate all the technical problems with a back door plan, perhaps because then she won’t be held responsible for making us less secure as a result.

Update: The measure is here, in the Intelligence Authorization.

Update: Title changed for accuracy.


The Timing of the Contemplated Upstream Cyber-Grab

There’s an aspect missing thus far from the discussion of NSA’s possible bid for a cyber certification under Section 702 for primary use in the collection of attack signatures that could not be attributed to a foreign government.

The timing.

The discussion of creating a new Section 702 certificate came in the aftermath of the 6-month back and forth between DOJ and the FISA Court over NSA having collected US person data as part of its upstream collection (for more detail than appears in the timeline below, see this post). During that process, John Bates ruled parts of the program — what he deemed the intentional collection of US person data within the US — to be unconstitutional. That part of his opinion is worth citing at length, because of the way Bates argues that the inability to detach entirely domestic communications that are part of a transaction does not mean that those domestic communications were “incidentally” collected. Rather, they were “intentionally” collected.

Specifically, the government argues that NSA is not “intentionally” acquiring wholly domestic communications because the government does not intend to acquire transactions containing communications that are wholly domestic and has implemented technical means to prevent the acquisition of such transactions. See June 28 Submission at 12. This argument fails for several reasons.

NSA targets a person under Section 702 certifications by acquiring communications to, from, or about a selector used by that person. Therefore, to the extent NSA's upstream collection devices acquire an Internet transaction containing a single, discrete communication that is to, from, or about a tasked selector, it can hardly be said that NSA's acquisition is "unintentional." In fact, the government has argued, and the Court has accepted, that the government intentionally acquires communications to and from a target, even when NSA reasonably — albeit mistakenly — believes that the target is located outside the United States. See Docket No. [redacted]

[snip]

The fact that NSA’s technical measures cannot prevent NSA from acquiring transactions containing wholly domestic communications under certain circumstances does not render NSA’s acquisition of those transactions “unintentional.”

[snip]

[T]here is nothing in the record to suggest that NSA’s technical means are malfunctioning or otherwise failing to operate as designed. Indeed, the government readily concedes that NSA will acquire a wholly domestic “about” communication if the transaction containing the communication is routed through an international Internet link being monitored by NSA or is routed through a foreign server.

[snip]

By expanding its Section 702 acquisitions to include the acquisition of Internet transactions through its upstream collection, NSA has, as a practical matter, circumvented the spirit of Section 1881a(b)(4) and (d)(1) with regard to that collection. (44-45, 48)

There are a number of ways to imagine that victim-related data and communications obtained with an attack signature might be considered “intentional” rather than “incidental,” especially given the Snowden document acknowledging that so much victim data gets collected it should be segregated from regular collection. Add to that the far greater likelihood that the NSA will unknowingly target domestic hackers — because so much of hacking involves obscuring attribution — and the likelihood upstream collection targeting hackers would “intentionally” collect domestic data is quite high.

Plus, there’s nothing in the 2011 documents released indicating the FISC knew upstream collection included cyber signatures — and related victim data — in spite of the fact that “current Certifications already allow for the tasking of these cyber signatures.” No unredacted section discussed the collection of US person data tied to the pursuit of cyberattackers that appears to have been ongoing by that point.

Similarly, the white paper officially informing Congress about 702 didn't mention cyber signatures either. Nor is there anything public to suggest it did so after the Senate rejected a cybersecurity bill in August 2012 — a bill that would have authorized less involvement of NSA in cybersecurity than appears to have already been going on.

With all that in mind, consider the discussions reflected in the documents released last week. The entire discussion of using FBI's stated needs as backup to apply for a cyber certificate came at the same time as NSA was trying to decide what to do with the data it had illegally collected. Before getting that certificate, DOJ approved the collection of cyber signatures under other certificates. It seems likely that this collection would have violated the spirit of the ruling from just the prior year.

And NSA’s assistance to FBI may have violated the prior year’s orders in another way. SSO contemplated delivering all this data directly to FBI.

[Screenshot of SSO document]

Yet one of the restrictions imposed on upstream collection — voluntarily offered up by DOJ — was that no raw data from NSA’s upstream collection go to FBI (or CIA). If there was uncertainty where FBI’s targeting ended and NSA’s began, this would create a violation of prior orders.

Meanwhile, the reauthorization process had already started, and as part of that (though curiously timed to coincide with the release of DOJ's white paper on 702 collection) Ron Wyden and Mark Udall were trying to force NSA to figure out how much US person data it was collecting. Not only did the various Inspectors General refuse to count that data (which, under the logic of Bates' opinion finding that illegally collected data was only illegal if the government knew it was US person data, would have made the data illegal), but the Senate Intelligence Committee refused to consider reconstituting its Technical Advisory Committee, which might have been better able to assess whether NSA's claims were correct.

Sometime in that period, just as Wyden was trying to call attention to the fact that NSA was collecting US person data via its upstream collection, NSA alerted the Intelligence Committees to further “overcollection” under upstream collection.

2012 Upstream Notice

As I suggested here, the length of the redaction and the mention of "other authorities" may reflect the involvement of another agency like FBI. One possibility arises from the discussions of SSO helping FBI conduct this surveillance, which describe FBI collecting on cyber signatures using both PRTT and (presumably) traditional FISA (note, I find it interesting though not conclusive that there is no mention of Section 215 being used to collect cybersecurity data): the initial efforts to go after these signatures may in some way have resulted in overcollection. If FISC interpreted victim-related data to be overcollection — as would be unsurprising under Bates' 2011 upstream opinion — then it would explain the notice to Congress.

One more point. In this post, I noted that USA F-ReDux authorized FISC to let the government use data it had illegally collected but which FISC had authorized by imposing additional minimization procedures. It's just a wildarseguess, but I find it plausible that this 2012 overcollection involved cyber signatures (because we know NSA was collecting them and there is reason to believe that collection violated Bates' 2011 opinion), that any victim data now gets treated under minimization procedures, and therefore that any illegal data from 2012 may now, as of last week, be used.

All of which is to say that the revelation of NSA and FBI’s use of upstream collection to target hackers involves far more legal issues than commentary on the issue has made out. And these legal issues may well have been more appropriate for the government to reveal before passage of USA F-ReDux.

Update, 11/6: Some dates added from this opinion.
