Working Thread, Cybersecurity Act

As I’ve been reporting, Paul Ryan added a version of the Cybersecurity Information Sharing Act to the omnibus. It starts on page 1728. This will be my working thread.

(1745) They’ve changed what gets stripped from “person” to “individual,” thereby no longer requiring that corporate names get stripped.

(1747) The bill takes out CISA’s requirement of getting authorization before using an indicator for law enforcement.

(1753) The section ensuring there are audit capabilities (but not that they’re used) drops this language, which was in CISA:

C) consistent with this title, any other applicable provisions of law, and the fair information practice principles set forth in appendix A of the document entitled “National Strategy for Trusted Identities in Cyberspace” and published by the President in April, 2011, govern the retention, use, and dissemination by the Federal Government of cyber threat indicators shared with the Federal Government under this title, including the extent, if any, to which such cyber threat indicators may be used by the Federal Government; and

(1754) This section replaced an “or” in CISA with the underlined “and,” which I think sharply constrains the list of stuff that shouldn’t be shared. (It also replaces “person” with “individual” as consistent with other changes.)

(i) Identification of types of information that would qualify as a cyber threat indicator under this title that would be unlikely to include information that—

(I) is not directly related to a cybersecurity threat; and

(II) is personal information of a specific individual or information that identifies a specific individual.

(1755) OmniCISA requires the AG to make both the interim and final privacy guidelines public; CISA made only the interim ones public.

jointly issue and make publicly available final guidelines

(1760) The clause noting that other info sharing is still permissible adds the underlined language.

(i) reporting of known or suspected criminal activity, by a non-Federal entity to any other non-Federal entity or a Federal entity, including cyber threat indicators or defensive measures shared with a Federal entity in furtherance of opening a Federal law enforcement investigation;

(1761-2) The bill basically gives DHS 90 days (60, really) to set up its portal before the President can declare the need to set up a competing one. This also involves slightly different timing on notice to Congress of whether DHS manages to pull it together in 90 days.

IN GENERAL.—At any time after certification is submitted under subparagraph (A), the President may designate an appropriate Federal entity, other than the Department of Defense (including the National Security Agency), to develop and implement a capability and process as described in paragraph (1) in addition to the capability and process developed under such paragraph by the Secretary of Homeland Security, if, not fewer than 30 days before making such designation, the President submits to Congress a certification and explanation that—

(I) such designation is necessary to ensure that full, effective, and secure operation of a capability and process for the Federal Government to receive from any non-Federal entity cyber threat indicators or defensive measures under this title;

(1766) OmniCISA is slightly better on threat-of-death sharing, as the threat must be specific.

(iii) the purpose of responding to, or otherwise preventing or mitigating, a specific threat of death, a specific threat of serious bodily harm, or a specific threat of serious economic harm, including a terrorist act or a use of a weapon of mass destruction;

(1768-9) Wow. The regulatory exception is even bigger than it was under CISA. Here’s what CISA said (underline added in both):

(i) IN GENERAL.—Except as provided in clause (ii), cyber threat indicators and defensive measures provided to the Federal Government under this title shall not be directly used by any Federal, State, tribal, or local government to regulate, including an enforcement action, the lawful activities of any entity, including activities relating to monitoring, operating defensive measures, or sharing cyber threat indicators.

And here’s what OmniCISA says:

(i) IN GENERAL.—Except as provided in clause (ii), cyber threat indicators and defensive measures provided to the Federal Government under this title shall not be used by any Federal, State, tribal, or local government to regulate, including an enforcement action, the lawful activities of any non-Federal entity or any activities taken by a non-Federal entity pursuant to mandatory standards, including activities relating to monitoring, operating defensive measures, or sharing cyber threat indicators.

(1771) The Rule of Construction is more permissive in OmniCISA, too. Compare CISA:

(c) Construction.—Nothing in this section shall be construed—

(1) to require dismissal of a cause of action against an entity that has engaged in gross negligence or willful misconduct in the course of conducting activities authorized by this title; or

With OmniCISA:

CONSTRUCTION.—Nothing in this title shall be construed—

(1) to create—

(A) a duty to share a cyber threat indicator or defensive measure; or

(B) a duty to warn or act based on the receipt of a cyber threat indicator or defensive measure; or

Whereas CISA still permitted the government to pursue a company for gross negligence, OmniCISA instead makes clear that companies can ignore cyber information the government shares with them.

(1771) I’m going to circle back and compare the various oversight reporting from all four bills in more detail. But the big takeaway is that they’ve stripped a PCLOB review from all three of the underlying bills.

(1782) I’m not sure what this new language does. A lawyer who works in this area thinks it protects Brady obligations. I hope he’s right and it’s not, instead, a way to erode limits on the use of shared indicators for prosecution.

(n) CRIMINAL PROSECUTION.—Nothing in this title shall be construed to prevent the disclosure of a cyber threat indicator or defensive measure shared under this title in a case of criminal prosecution, when an applicable provision of Federal, State, tribal, or local law requires disclosure in such case.

(1783) In a (long-overdue) report on how to deal with hacking, OmniCISA takes out a report on this topic that was to be done specifically for the Foreign Relations Committee, suggesting this information will remain classified and potentially unavailable to the committees. I guess they have to hide Israel’s spying.

(2) A list and an assessment of the countries and nonstate actors that are the primary threats of carrying out a cybersecurity threat, including a cyber attack, theft, or data breach, against the United States and which threaten the United States national security, economy, and intellectual property.

(1785) This is the sunset language. It doesn’t seem to sunset anything.

(a) IN GENERAL.—Except as provided in subsection (b), this title and the amendments made by this title shall be effective during the period beginning on the date of the enactment of this Act and ending on September 30, 2025.


“Encryption” Is Just Intel Code for “Failure to Achieve Omniscience”

After receiving a briefing on the San Bernardino attack, Richard Burr went out and made two contradictory claims. First, Burr — and/or other sources for The Hill — said that there was no evidence that Tashfeen Malik and Syed Rizwan Farook used encryption.

Lawmakers on Thursday said there was no evidence yet that the two suspected shooters used encryption to hide from authorities in the lead-up to last week’s San Bernardino, Calif., terror attack that killed 14 people.

“We don’t know whether it played a part in this attack,” Senate Intelligence Committee Chairman Richard Burr (R-N.C.) told reporters following a closed-door briefing with federal officials on the shootings.

That’s consistent with what we know so far. After all, a husband and wife wouldn’t need to encrypt — or have a way of encrypting — their communications with each other, as those would be mostly face-to-face. The fact that they tried to destroy their devices (and apparently got rid of a still-undiscovered hard drive) suggests they weren’t protecting that data via encryption, but rather via physical destruction. That doesn’t rule out using both, but the FBI would presumably know if the devices they’ve reconstructed were encrypted.
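How would they know? Encrypted data is statistically distinctive: it looks like uniformly random bytes. Here’s a minimal sketch of the kind of check any forensic examiner could run — my illustration, not a description of the FBI’s actual tools — in which well-encrypted data shows Shannon entropy near the 8-bits-per-byte maximum, while ordinary files score much lower (compressed files are the main false positive).

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(data).values())

def looks_encrypted(data: bytes, threshold: float = 7.9) -> bool:
    # Ciphertext is nearly indistinguishable from random bytes, so entropy
    # close to the maximum is a red flag; plaintext and most file formats
    # score far lower. (Compressed data also scores high -- this is a
    # heuristic, not proof.)
    return shannon_entropy(data) >= threshold

print(shannon_entropy(b"hello world" * 100))  # low: repetitive plaintext
print(shannon_entropy(os.urandom(4096)))      # ~8.0: random, like ciphertext
```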

So it makes sense that the San Bernardino attackers did not use encryption.

But then later in the same discussion with reporters, Burr suggested Malik and Farook must have used encryption because the IC didn’t know about their attack.

Burr suggested it might have even played a role in the accused San Bernardino shooters — Tashfeen Malik and Syed Rizwan Farook — going unnoticed for years, despite the FBI saying they had been radicalized for some time.

“Any time you glean less information at the beginning, clearly encryption probably played a role in it,” he said. “And there were a lot of conversations that went on between these two individuals before [Malik] came to the United States that you would love to have some insight to other than after an attack took place.”

This is a remarkable comment!

After all, the FBI and NSA don’t even read all the conversations they can legally collect from foreigners — which Malik, before she came to the United States, would still legally have been. Indeed, if these conversations were in Arabic or Urdu, the IC would only have had them translated if there were some reason to find them interesting. And even in spite of the pair’s early shooting training, it’s not apparent they had extensive conversations, particularly not online, to guide that training.

Those details would make it likely that the IC would have had no reason to be interested. To say nothing of the fact that ultimately “radicalization” is a state of mind, and thus far, NSA doesn’t have a way to decrypt thoughts.

But this is the second attack in a row, after Paris, where Burr and others have suggested that their lack of foreknowledge of the attack makes it probable the planners used encryption. Burr doesn’t even seem to be considering that a number of other things — good operational security, language barriers, and metadata failures — might lead the IC to miss warning signs, even assuming they’re collecting everything (there should have been no legal limits on their ability to collect on Malik).

We’re not having a debate about encryption anymore. We’re debating making the Internet less secure to excuse the IC’s less-than-perfect omniscience.


Dianne Feinstein’s Encrypted Playstation Nightmare

I’ve complained about Dianne Feinstein’s inconsistency on cybersecurity, specifically as it relates to Sony, before. The week before the attack on Paris, cybersecurity was the biggest threat, according to her. And Sony was one of the top targets, both of criminal identity theft and — if you believe the Administration — nation-states like North Korea. If you believe that, you believe that Sony should have the ability to use encryption to protect its business and users. But, in the wake of Paris and Belgian Interior Minister Jan Jambon’s claim that terrorists are using Playstations, Feinstein changed her tune, arguing serial hacking target Sony should not be able to encrypt its systems to protect users.

Her concerns took a bizarre new twist in an FBI oversight hearing today. Now, she’s concerned that if a predator decides to target her grandkids while they’re playing on a Playstation, that will be encrypted.

I have concern about a Playstation which my grandchildren might use and a predator getting on the other end, talking to them, and it’s all encrypted.

Someone needs to explain to DiFi that her grandkids are probably at greater risk from predators hacking Sony to get personal information about them, and then using it to abduct them or whatever.

Sony’s the perfect example of how security hawks like Feinstein need to choose: either her grandkids face risks because Sony doesn’t encrypt its systems, or they do because it does.

The former risk is likely the much greater risk.


In One of His First Major Legislative Acts, Paul Ryan Trying to Deputize Comcast to Narc You Out to the Feds

As the Hill reports, Speaker Paul Ryan is preparing to add a worsened version of the Cybersecurity Information Sharing Act to the omnibus budget bill, bypassing the jurisdictional interests of Homeland Security Chair Mike McCaul in order to push through the most privacy-invasive version of the bill.

But several people tracking the negotiations believe McCaul is under significant pressure from House Speaker Paul Ryan (R-Wis.) and other congressional leaders to not oppose the compromise text.

They said lawmakers are aiming to vote on the final cyber bill as part of an omnibus budget deal that is expected before the end of the year.

As I laid out in October, it appears CISA — even in the form that got voted out of the Senate — would serve as a domestic “upstream” spying authority, providing the government a way to spy domestically without a warrant.

CISA permits the telecoms to do the kinds of scans they currently do for foreign intelligence purposes for cybersecurity purposes in ways that (unlike the upstream 702 usage we know about) would not be required to have a foreign nexus. CISA permits the people currently scanning the backbone to continue to do so, only it can be turned over to and used by the government without consideration of whether the signature has a foreign tie or not. Unlike FISA, CISA permits the government to collect entirely domestic data.
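To picture what those scans involve, here’s a minimal sketch of signature matching against traffic. The indicators are invented, and real upstream systems work on raw packets at line rate, but the logic is the same: flag any traffic containing a listed indicator, foreign nexus or not.

```python
# Hypothetical "cyber threat indicators" -- a known command-and-control
# domain and the leading bytes of a malware sample. Invented examples.
INDICATORS = [
    b"evil-c2.example.com",
    b"\x4d\x5a\x90\x00\x03",
]

def scan_payload(payload: bytes) -> list:
    """Return every indicator that appears in this traffic payload."""
    return [sig for sig in INDICATORS if sig in payload]

# Under CISA, a hit like this could be turned over to the government
# whether or not the traffic has any foreign tie.
packet = b"GET /beacon HTTP/1.1\r\nHost: evil-c2.example.com\r\n\r\n"
print(scan_payload(packet))
```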

We recently got an idea of how this might work. Comcast is basically hacking its own users to find out if they’re downloading copyrighted material.

[Comcast] has been accused of tapping into unencrypted browser sessions and displaying warnings that accuse the user of infringing copyrighted material — such as sharing movies or downloading from a file-sharing site.

That could put users at risk, says the developer who discovered it.

Jarred Sumner, a San Francisco, Calif.-based developer who published the alert banner’s code on his GitHub page, told ZDNet in an email that this could cause major privacy problems.

Sumner explained that Comcast injects the code into a user’s browser as they are browsing the web, performing a so-called “man-in-the-middle” attack. (Comcast has been known to alert users when they have surpassed their data caps.) This means Comcast intercepts the traffic between a user’s computer and their servers, instead of installing software on the user’s computer.

[snip]

“This probably means that Comcast is using [deep packet inspection] on subscriber’s internet and/or proxying subscriber internet when they want to send messages to subscribers,” he said. “That would let Comcast modify unencrypted traffic in both directions.”

In other words, Comcast is already doing the same kind of deep packet inspection of its users’ unencrypted activity as the telecoms use in upstream collection for the NSA. Under CISA, they’d be permitted — and Comcast sure seems willing — to do such searches for the Feds.
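For the curious, here’s a minimal sketch of the injection technique Sumner describes: an in-path proxy that rewrites unencrypted HTTP responses to splice a banner into the page. This is my illustration of the generic technique, not Comcast’s actual system; the upstream address and banner markup are invented. None of it works once the session is wrapped in TLS.

```python
# Sketch of "man-in-the-middle" banner injection on plaintext HTTP.
import socket
import socketserver

UPSTREAM = ("example.com", 80)  # stands in for whatever server the user asked for
BANNER = b'<div style="position:fixed;top:0">[Injected copyright alert]</div>'

class InjectingProxy(socketserver.StreamRequestHandler):
    def handle(self):
        request = self.rfile.read1(65536)  # the client's plaintext request
        # Ask the server to close after responding, so we can read to EOF.
        request = request.replace(b"\r\n\r\n", b"\r\nConnection: close\r\n\r\n", 1)
        with socket.create_connection(UPSTREAM) as up:
            up.sendall(request)
            response = b""
            while chunk := up.recv(65536):
                response += chunk
        # Because nothing is encrypted, the middlebox can parse and rewrite
        # the page at will. (A real system would also fix Content-Length.)
        self.wfile.write(response.replace(b"</body>", BANNER + b"</body>"))

if __name__ == "__main__":
    # Point an HTTP client at localhost:8080 to see the rewrite.
    with socketserver.ThreadingTCPServer(("", 8080), InjectingProxy) as srv:
        srv.serve_forever()
```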

Some methods of downloading copyrighted content might already be considered a cyberthreat indicator that Comcast could report directly to the Federal government (and possibly, under this latest version, directly to the FBI). And there are reports that the new version will adopt an expanded list of crimes, to include the Computer Fraud and Abuse Act.

In other words, it’s really easy to see how under this version of CISA, the government would ask Comcast to hack you to find out if you’re doing one of the long list of things considered hacking — a CFAA violation — by the Feds.

How’s that for Paul Ryan’s idea of conservatism, putting the government right inside your Internet router as one of his first major legislative acts?


Internet of Things: Now, with ‘Breachable’ Kids Connect and ‘Hackable’ Barbie

[graphic: Hello Barbie via Mattel’s website]

The Internet of Things (IoT) already includes refrigerators, televisions, slow cookers, automobiles, you name it. Most of these items have already experienced security problems, whether personal information leaks, or manipulative hacking.

Now the IoT includes toys — and wow, what a surprise! They’re riddled with privacy and security problems, too.

Like VTech’s privacy breach, which exposed data — including facial photos and chat logs — for more than 6 million children and parents through its Kids Connect technology. The company’s privacy policy (last archived copy) indicated communications would be encrypted, but the encryption proved whisper thin.
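“Whisper thin” is apt: analyses of the breach reported password protection amounting to a bare, unsalted MD5 hash. A minimal sketch of why that’s barely protection at all (the password and hashes here are invented):

```python
import hashlib
import os

# An unsalted fast hash can be reversed by simply hashing guesses;
# precomputed tables make this near-instant for common passwords.
leaked_hash = hashlib.md5(b"princess123").hexdigest()  # as found in a dump

for guess in ["123456", "password", "qwerty", "princess123"]:
    if hashlib.md5(guess.encode()).hexdigest() == leaked_hash:
        print("cracked:", guess)

# The baseline a careful vendor would use instead: a salted, deliberately
# slow key-derivation function.
salt = os.urandom(16)
strong_hash = hashlib.pbkdf2_hmac("sha256", b"princess123", salt, 600_000)
```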

Or Mattel’s Hello Barbie, whose Wi-Fi enabled communications are at risk of hacking and unauthorized surveillance. The flaws include the doll’s willingness to connect to any Wi-Fi network named “Barbie” — it was absolutely brain-dead easy to spoof one and begin snooping on anything the doll could “hear.”
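A minimal sketch of the reported flaw’s logic — my paraphrase, not Mattel’s firmware: when a network’s name is the only credential, anyone who broadcasts that name wins.

```python
# Paraphrase of the reported trust model: join any network whose name
# looks like "Barbie." The name alone acts as the credential.
def doll_should_join(ssid: str) -> bool:
    return "barbie" in ssid.lower()

# Anyone can advertise that name. On Linux, a rogue hotspot is one line:
#   nmcli device wifi hotspot ssid "Barbie" password "12345678"
# Once the doll associates, its traffic transits the attacker's access
# point, which can capture anything the doll "hears" or uploads.

for ssid in ["HomeWiFi", "Barbie", "FREE-Barbie-WIFI"]:
    print(ssid, "->", "JOIN" if doll_should_join(ssid) else "ignore")
```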

It’s amazing these manufacturers ever thought these toys were appropriate for the marketplace, given their target audience. In VTech’s case, it appears to be nearly all ages (its Android app on Google Play is unrated), and in the case of Mattel’s Hello Barbie, it’s primarily girls ages 6-15.

These devices are especially iffy since they tippy-toe along the edge of the Children’s Online Privacy Protection Act of 1998 (a.k.a. COPPA, 15 U.S.C. 6501–6505).

Parents share much of the blame, too. Most have no clue how federal law covers children’s internet use under COPPA, or what the Children’s Internet Protection Act (a.k.a. CIPA, 47 CFR 54.520) requires. Nor do the parents who buy these devices appear to grasp this basic fact: any network-mediated or Wi-Fi toy, apart from the obvious cellphone/tablet/PC, is at implicit risk of leaking personal data or being hacked. How are devices that risk exposing children’s data, including their activities and location, age-appropriate toys?

This piece at Computerworld has a few helpful suggestions. In my opinion, the IoT doesn’t belong in your kids’ toybox until your kids are old enough to understand and manage personal digital information security to use the internet safely.

Frankly, many parents aren’t ready for safe internet use.


Dianne Feinstein Inadvertently Calls to Expose America’s Critical Infrastructure to Hackers

For days now, surveillance hawks have been complaining that terrorists probably used encryption in their attack on Paris last Friday. That, in spite of the news that authorities used a phone one of the attackers threw in a trash can to identify a hideout in St. Denis (this phone in fact might have been encrypted and brute force decrypted, but given the absence of such a claim and the quick turnaround on it, most people have assumed both it and the pre-attack chats on it were not encrypted).

I suspect we’ll learn attackers did use encryption (and a great deal of operational security that has nothing to do with encryption) at some point in planning their attack — though the entire network appears to have been visible through metadata and other intelligence. Thus far, however, there’s only one way we know of that the terrorists used encryption leading up to the attack: when one of them paid for things like a hotel online, the processing of his credit card (which was in his own name) presumably took place over HTTPS (hat tip to William Ockham for first making that observation). So if we’re going to blindly demand we prohibit the encryption the attackers used, we’re going to commit ourselves to far far more hacking of online financial transactions.

I’m more interested in the concerns about terrorists’ claimed use of PlayStation 4. Three days before the attack, Belgium’s Interior Minister said all countries were having problems with PlayStation 4s, which led to a frenzy mistakenly claiming the Paris terrorists had used one (there’s far more reason to believe they used Telegram).

One of those alternatives was highlighted on Nov. 11, when Belgium’s federal home affairs minister, Jan Jambon, said that a PlayStation 4 (PS4) console could be used by ISIS to communicate with their operatives abroad.

“PlayStation 4 is even more difficult to keep track of than WhatsApp,” said Jambon, referencing to the secure messaging platform.

Earlier this year, Reuters reported that a 14-year-old boy from Austria was sentenced to a two-year jail term after he downloaded instructions on bomb-building onto his Playstation games console, and was in contact with ISIS.

It remains unclear, however, how ISIS would have used PS4s, though options range from the relatively direct methods of sending messages to players or voice-chatting, to more elaborate methods cooked up by those who play games regularly. Players, for instance, can use their weapons during a game to send a spray of bullets onto a wall, spelling out whole sentences to each other.

This has DiFi complaining that Playstation is encrypted.

Even Playstation is encrypted. It’s very hard to get the data you need because it’s encrypted

Thus far, it’s not actually clear most communications on Playstation are encrypted (though players may be able to pass encrypted objects about); most people I’ve asked think the communications are not encrypted, though Sony isn’t telling. What is likely is that there’s no easy way to collect metadata tracking the communications within games, which would make it hard to tell whether or not some parts of the communications data are encrypted.

But at least one kind of data on Playstations — probably two — is encrypted: Credit cards and (probably) user data. That’s because 4 years ago, Playstation got badly hacked.

“The entire credit card table was encrypted and we have no evidence that credit card data was taken,” said Sony.

This is the slimmest amount of good news for PlayStation Network users, but it alone raises very serious concerns, since Sony has yet to provide any details on what sort of encryption has been used to protect that credit card information.

As a result, PlayStation Network users have absolutely no idea how safe their credit card information may be.

But the bad news keeps rolling in:

“The personal data table, which is a separate data set, was not encrypted,” Sony notes, “but was, of course, behind a very sophisticated security system that was breached in a malicious attack.”

A very sophisticated security system that ultimately failed, making it useless.

Why Sony failed to encrypt user account data is a question that security experts have already begun to ask. Along with politicians both in the United States and abroad.

Chances are Sony’s not going to have an answer that’s going to please anyone.
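What the encrypted/unencrypted distinction means in practice is worth seeing concretely. Here’s a hedged sketch of per-table encryption — the schema is invented, not Sony’s, and it uses the third-party cryptography package: an attacker who dumps both tables gets ciphertext from one and everything from the other.

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in production: kept in an HSM, never beside the data
f = Fernet(key)

# Encrypted table: the breach yields only opaque tokens.
credit_cards = {"user1": f.encrypt(b"4111 1111 1111 1111")}

# Unencrypted table: the breach yields everything, immediately.
personal_data = {"user1": {"name": "A. Gamer", "dob": "2001-05-04",
                           "email": "a@example.com"}}

print(credit_cards["user1"])   # useless without the key
print(personal_data["user1"])  # the whole record
```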

That was one in a series of really embarrassing hacks, and I assume Sony has locked things down more since. Three years after that Playstation hack, of course, Sony’s movie studio would be declared critical infrastructure after it, too, got hacked.

Here’s the thing: Sony is the kind of serially negligent company we need to have embracing good security if the US is going to keep itself secure. We should be saying, “Encrypt away, Sony! Please keep yourself safe, because hackers love to hack you and they’ve had spectacular success doing so! Jolly good!”

But we can’t, at the same time, be complaining that Sony offers some level of encryption as if that makes the company a material supporter of terrorism. Sony is a perfect example of how you can’t have it both ways, secure against hackers but not against wiretappers.

Amid the uproar about terrorists maybe using encryption, the ways they may have — to secure online financial transactions and game player data — should be a warning about condemning encryption broadly.

Because next week, when hackers attack us, we’ll be wishing our companies had better encryption to keep us safe.


DOJ Still Gets a Failing Grade on Strong Authentication

In DOJ’s Inspector General’s annual report on challenges facing the department, Michael Horowitz revealed how well DOJ is complying with the Office of Management and Budget’s directive — issued in the wake of the OPM hack — that agencies improve their own cybersecurity, including by adopting strong authentication for both privileged and unprivileged users.

DOJ’s still getting a failing grade on that front — just 64% of users are in compliance with the requirement that they use strong authentication.

Following OMB’s directive, the White House reported that federal civilian agencies increased their use of strong authentication (such as smartcards) for privileged and unprivileged users from 42 percent to 72 percent. The Justice Department, however, had among the worst overall compliance records for the percentage of employees using smartcards during the third quarter of FY 2015 – though it has since made significant improvements, increasing to 64 percent of privileged and unprivileged users in compliance by the fourth quarter. Given both the very sensitive nature of the information that it controls, and its role at the forefront of the effort to combat cyber threats, the Department must continue to make progress to be a leader in these critical areas.

Ho hum. These are only the databases protecting FBI’s investigations into mobs, terrorists, and hackers. No reason to keep those safe.

In any case, it may be too late, as the Crackas with Attitude already broke into the portal for some of those databases.

Ah well, we’ll just dump more information into those databases under CISA and see if that prevents hackers.


Government (and Its Expensive Contractors) Really Need to Secure Their Data Collections

Given two recent high profile hacks, the government needs to either do a better job of securing its data collection and sharing process, or presume people will get hurt because of it.

After the hackers Crackas With Attitude hacked John Brennan, they went on to hack FBI Deputy Director Mark Giuliano, as well as a law enforcement portal run by the FBI. The hack of the latter hasn’t gotten as much attention — thus far, WikiLeaks has not claimed to have the data — but upon closer examination, it appears the data obtained might provide clues and contact information about people working undercover for the FBI.

Then, the hackers showed Wired’s Kim Zetter what the portal they had accessed included. Here’s a partial list:

Enterprise File Transfer Service—a web interface to securely share and transmit files.

Cyber Shield Alliance—an FBI Cybersecurity partnership initiative “developed by Law Enforcement for Law Enforcement to proactively defend and counter cyber threats against LE networks and critical technologies,” the portal reads. “The FBI stewards an array of cybersecurity resources and intelligence, much of which is now accessible to LEA’s through the Cyber Shield Alliance.”

IC3—“a vehicle to receive, develop, and refer criminal complaints regarding the rapidly expanding arena of cyber crime.”

Intelink—a “secure portal for integrated intelligence dissemination and collaboration efforts”

National Gang Intelligence Center—a “multi-agency effort that integrates gang information from local, state, and federal law enforcement entities to serve as a centralized intelligence resource for gang information and analytical support.”

RISSNET—which provides “timely access to a variety of law enforcement sensitive, officer safety, and public safety resources”

Malware Investigator—an automated tool that “analyzes suspected malware samples and quickly returns technical information about the samples to its users so they can understand the samples’ functionality.”

eGuardian—a “system that allows Law Enforcement, Law Enforcement support and force protection personnel the ability to report, track and share threats, events and suspicious activities with a potential nexus to terrorism, cyber or other criminal activity.”

While the hackers haven’t said whether they’ve gotten into these information sharing sites, they clearly got as far as the portal to the tools that let investigators share information on large networked investigations, targeting things like gangs, other organized crime, terrorists, and hackers. If hackers were to access those information sharing networks, they might be able not only to monitor investigations into such networked crime groups, but also (using credentials they already hacked) to make false entries. And all that’s before CISA vastly expands this info sharing.

Meanwhile, the Intercept reported receiving 2.5 years of recorded phone calls — amounting to 70 million recorded calls — from one of the nation’s largest jail phone providers, Securus. Its report focuses on proving that Securus was not defeat-listing calls to attorneys — that is, not exempting them from recording — meaning it has breached attorney-client privilege. As Scott Greenfield notes, that’s horrible but not at all surprising.
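For the unfamiliar, a “defeat list” is the set of numbers — attorneys’ above all — that the recording platform must never capture. A minimal sketch of the gate that evidently wasn’t working (the numbers are hypothetical):

```python
# How a prison phone platform is supposed to gate recording: numbers on
# the defeat list (attorneys, clergy, etc.) are never taped.
DEFEAT_LIST = {"+15551234567"}  # registered attorney lines (hypothetical)

def should_record(dialed_number: str) -> bool:
    return dialed_number not in DEFEAT_LIST

for number in ["+15551234567", "+15559876543"]:
    print(number, "->", "record" if should_record(number) else "DO NOT record")

# The privileged-call breach is the failure mode where attorney numbers
# never make it onto the list (or the check is skipped), so privileged
# calls land in the recorded archive with everything else.
```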

But on top of that, the Intercept’s source reportedly obtained these recorded calls by hacking Securus. While we don’t have details of how that happened, it does mean all those calls were accessible to be stolen. If the Intercept’s civil liberties-motivated hacker could obtain the calls, so can a hacker employed by organized crime.

The Intercept notes that even calls to prosecutors were online (which might include discussions from informants). But it would seem just calls to friends and associates would prove of interest to certain criminal organizations, especially if they could pinpoint the calls (which is, after all, the point). As Greenfield notes, defendants don’t usually listen to their lawyers’ warnings — or those of the signs by the phones saying all calls will be recorded — and so they say stupid stuff to everyone.

So we tell our clients that they cannot talk about anything on the phone. We tell our clients, “all calls are recorded, including this one.”  So don’t say anything on the phone that you don’t want your prosecutor to hear.

Some listen to our advice. Most don’t. They just can’t stop themselves from talking.  And if it’s not about talking to us, it’s about talking to their spouses, their friends, their co-conspirators. And they say the most remarkable things, in the sense of “remarkable” meaning “really damaging.”  Lawyers only know the stupid stuff they say to us. We learn the stupid stuff they say to others at trial. Fun times.

Again, such calls might be of acute interest to rival gangs (for example) or co-conspirators who have figured out someone has flipped.

It’s bad enough the government left OPM’s databases insecure, and with them sensitive data on 21 million clearance holders.

But it looks like key law enforcement data collections are not much more secure.


Defining Stingray Emergencies … or Not

A couple of weeks ago, ACLU NoCal released more documents on the use of Stingrays. While much of the attention focused on the admission that innocent people get sucked up in Stingray usage, I was at least as interested in the definition of an emergency during which a Stingray could be used with retroactive authorization:
[Screenshot of the emergency definition from the released documents]

I was interested both in the invocation of organized crime (which would implicate drug dealing) and in the suggestion the government would get a Stingray to pursue a hacker under the CFAA. Equally curiously, the definition here leaves out part of the definition of “protected computer” under CFAA — the part covering computers used in interstate communication.

(2) the term “protected computer” means a computer—
(A) exclusively for the use of a financial institution or the United States Government, or, in the case of a computer not exclusively for such use, used by or for a financial institution or the United States Government and the conduct constituting the offense affects that use by or for the financial institution or the Government; or
(B) which is used in or affecting interstate or foreign commerce or communication, including a computer located outside the United States that is used in a manner that affects interstate or foreign commerce or communication of the United States;

Does the existing definition of an emergency describe how DOJ has most often used Stingrays to pursue CFAA violations (which, of course, as far as we know, have never been noticed to defendants)?

Now compare the definition Jason Chaffetz used in his Stingray Privacy Act, a worthwhile bill limiting the use of Stingrays, though this emergency section is the one I and others have the most concerns about. Chaffetz doesn’t have anything that explicitly invokes the CFAA definition, and he collapses the “threat to national security” and, potentially, the CFAA exception into “conspiratorial activities threatening the national security interest.”

(A) such governmental entity reasonably determines an emergency exists that—

(i) involves—

(I) immediate danger of death or serious physical injury to any person;

(II) conspiratorial activities threatening the national security interest; or

(III) conspiratorial activities characteristic of organized crime;

Presumably, requiring conspiratorial activities threatening the national security interest might raise the bar for — but would still permit — the use of Stingrays against low-level terrorism wannabes. Likewise, while it would likely permit the use of Stingrays against hackers (who are generally treated as counterintelligence threats among NatSec investigators), it might require some conspiracy between hackers.

All that said, there’s a whole lot of flux in what even someone like Chaffetz, who is often decent on civil liberties, considers a national security threat.

And, of course, in the FISA context, the notion of what might be regarded as an immediate danger of physical injury continues to grow.

These definitions are both far too broad, and far too vague.


It’s Not Just the FISA Court, It’s the Game of Surveillance Whack-a-Mole

In response to this post from Chelsea Manning, the other day I did the first in what seems to have become a series of posts arguing that we should eliminate the FISA Court, but that the question is not simple. In that post, I laid out the tools the FISC has used, with varying degrees of success, in reining in Executive branch spying, especially in times of abuse.

In this post, I want to lay out how reining in surveillance isn’t just about whether the secret approval of warrants and orders would be better done by the FISC or a district court. It’s about whack-a-mole.

That’s because, right now, there are four ways the government gives itself legal cover for expansive surveillance:

  • FISC, increasingly including programs
  • EO 12333, including SPCMA
  • Magistrate warrants and orders without proper briefing
  • Administrative orders and/or voluntary cooperation

FISA Court

The government uses the FISA court to get individualized orders for surveillance in this country and, to a less clear extent, surveillance of Americans overseas. That’s the old-fashioned stuff that could be done by a district court. But it’s also one point where egregious source information — be it a foreign partner using dubious spying techniques, or, as John Brennan admitted in his confirmation hearing, torture — gets hidden. No defendant has ever been able to challenge the basis for the FISA warrant used against them, which is clearly not what Congress said it intended in passing FISA. But given that’s the case, it means a lot of prosecutions that might not pass constitutional muster, because of that egregious source information, get a virgin rebirth in the FISC.

In addition, starting in 2004, the government began using the FISA Court to coerce corporations to continue domestic collection programs they had previously conducted voluntarily. As I noted, while I think the FISC’s oversight of these programs has been mixed, the FISC has forced the government to hew closer to (though not fully within) the law.

EO 12333, including SPCMA

The executive branch considers FISA just a subset of EO 12333, the Reagan Executive Order governing the intelligence community — a carve-out of collection requiring more stringent rules. At times, the Intelligence Community has operated as if EO 12333 is the only set of rules it needs to follow — and it has even secretly rewritten the order at least once to change the rules. The government will always assert the right to conduct spying under EO 12333 if it has a technical means to bypass that carve-out. That’s what the Bush Administration claimed Stellar Wind operated under. And at precisely the time the FISC was imposing limits on the Internet dragnet, the Executive Branch was authorizing analysis of Americans’ Internet metadata collected overseas under SPCMA.

EO 12333 derived data does get used against defendants in the US, though it appears to be laundered through the FISC and/or parallel constructed, so defendants never get the opportunity to challenge this collection.

Magistrate warrants and orders

Even when the government goes to a Title III court — usually a magistrate judge — to get an order or warrant for surveillance, that surveillance often escapes real scrutiny. We’ve seen this happen with Stingrays and other location collection, as well as FBI hacking; in those cases, the government often didn’t fully brief magistrates about what they were approving, so the judges didn’t consider the constitutional implications of it. There are exceptions, however (James Orenstein, the judge letting Apple challenge the use of an All Writs Act order to force it to unlock a phone, is a notable one), and that has provided periodic checks on collection that should require more scrutiny, as well as public notice of those methods. That’s how, a decade after magistrates first started to question the collection of location data using orders, we’re finally getting circuit courts to review the issue. Significantly, these more exotic spying techniques are often repurposed foreign intelligence methods, meaning you’ll have magistrates and other Title III judges weighing in on surveillance techniques being used in parallel programs under FISA. At least in the case of Internet data, that may even result in a higher standard of scrutiny and minimization being applied to the FISA collection than to the criminal investigation collection.

Administrative orders and/or voluntary cooperation

Up until 2006, telecoms willingly turned over metadata on Americans’ calls to the government under Stellar Wind. Under Hemisphere, AT&T provides the government call record information — including the results of location-based analysis, on all the calls that used its networks, not just those of AT&T customers — sometimes without an order. For months after Congress started finding a way to rein in the NSA phone dragnet with the USA Freedom Act, the DEA continued to operate its own dragnet of international calls that ran entirely on administrative orders. Under CISA, the government will obtain and disseminate information on cybersecurity threats that it wouldn’t be able to get under upstream 702 collection; no judge will review that collection. Until 2009, the government was using NSLs to get all the information an ISP had on a user or website, including traffic information. AT&T still provides enhanced information, including the call records of friends and family co-subscribers and (less often than in the past) communities of interest.

These six examples make it clear that, even with Americans, even entirely within the US, the government conducts a lot of spying via administrative orders and/or voluntary cooperation. It’s not clear this surveillance has had any but internal agency oversight, and what is known about these programs (the onsite collaboration that was probably one precursor to Hemisphere, the early NSL usage) makes it clear there have been significant abuses. Moreover, a number of these programs represent individual collection (the times when FBI used an NSL to get something the FISC had repeatedly refused to authorize under a Section 215 order) or programmatic collection (CISA, I suspect) that couldn’t be approved under the auspices of the FISC.

All of which is to say the question of how to bring better oversight to expansive surveillance is not limited to the shortcomings of the FISC. It also must contend with the way the government tends to move collection programs when one method proves less than optimal. Where technologically possible, it has moved spying offshore and conducted it under EO 12333. Where it could pay or otherwise bribe and legally shield providers, it moved to voluntary collection. Where it needed to use traditional courts, it often just obfuscated about what it was doing. The primary limits here are not legal, except insofar as legal niceties and the very remote possibility of transparency raise corporate partners’ concerns.

We need to fix or eliminate the FISC. But we need to do so while staying ahead of the game of whack-a-mole.
