Legal Analysis of OmniCISA Reinforces Cause for Concern

Among all the commentaries about CISA published before its passage, only one I know of (aside from my non-lawyer take here) dealt with what the bill did legally: this Jennifer Granick post explaining how OmniCISA will “stake out a category of ISP monitoring that the FCC and FTC can’t touch, regardless of its privacy impact on Americans,” thereby undercutting recent efforts to increase online privacy.

Since the bill passed into law, however, two lawyers have written really helpful detailed posts on what it does: Fourth Amendment scholar Orin Kerr and former NSA lawyer Susan Hennessey.

As Kerr explains, existing law had permitted Internet operators to surveil their own networks for narrowly tailored upkeep and intrusion purposes. OmniCISA broadened that to permit a provider to monitor (or have a third party monitor) both the network and traffic for a cybersecurity purpose.

[T]he right to monitor appears to extend to “cybersecurity purposes” generally, not just for the protection of the network operator’s own interests.  And relatedly, the right to monitor includes scanning and acquiring data that is merely transiting the system, which means that the network operator can monitor (or have someone else monitor) for cybersecurity purposes even if the operator isn’t worried about his own part of the network being the victim. Note the difference between this and the provider exception. The provider exception is about protecting the provider’s own network. If I’m reading the language here correctly, this is a broader legal privilege to monitor for cybersecurity threats.

It also permits such monitoring for insider threats.

[T]he Cyber Act may give network operators broad monitoring powers on their own networks to catch not only hackers but also insiders trying to take information from the network.

This accords with Hennessey’s take (and of course, having recently worked at NSA, she knows what they were trying to do). Importantly, she claims providers need to surveil content to take “responsible cybersecurity measures.”

Effective cybersecurity includes network monitoring, scanning, and deep-packet inspection—and yes, that includes contents of communications—in order to detect malicious activity.

Although Hennessey explicitly responded to Granick’s post, and Granick linked a letter from security experts describing the limits of what is really necessary for monitoring networks, Hennessey doesn’t engage on those terms to explain why corporations need to spy on their customers’ content to take responsible cybersecurity measures. It may be as simple as needing to search the contents of packets for known hackers’ signatures, or it may relate to surveilling IP theft, or it may extend to reading the content of emails; those are fairly different degrees of electronic surveillance, all of which might be permitted by this law. But credit Hennessey for making clear what CISA boosters in Congress tried so assiduously to hide: this is about warrantless surveillance of content.
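To make the narrowest of those degrees concrete, here is a minimal, hypothetical sketch of what signature-based content scanning involves. The byte patterns and the sample payload are invented for illustration; real intrusion-detection systems (Snort, Suricata, and the like) are far more elaborate, but the basic operation, reading packet contents and matching them against known-bad patterns, is the same.

```python
# Hypothetical signature-based payload scan (illustrative only).
# The "signatures" below are invented stand-ins for the kinds of byte
# patterns an IDS associates with known malware or exploit traffic.

KNOWN_SIGNATURES = {
    b"\x4d\x5a\x90\x00": "executable header appearing mid-stream",
    b"cmd.exe /c powershell -enc": "encoded PowerShell one-liner",
}

def scan_payload(payload: bytes) -> list[str]:
    """Return descriptions of any known-bad patterns found in a packet payload."""
    return [name for pattern, name in KNOWN_SIGNATURES.items() if pattern in payload]

if __name__ == "__main__":
    sample = (b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
              b"cmd.exe /c powershell -enc aGVsbG8=")
    print(scan_payload(sample))  # ['encoded PowerShell one-liner']
```

Even this narrowest version requires reading the contents of traffic as it passes, which is why the distinctions among those degrees of surveillance matter so much.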

Hennessey lays out why corporations need a new law to permit them to spy on their users’ content, suggesting they used to rely on user agreements to obtain permission, but pointing to several recent court decisions that found user agreements did not amount to implied consent for such monitoring.

If either party to a communication consents to its interception, there is no violation under ECPA, “unless such communication is intercepted for the purpose of committing any criminal or tortious act.” 18 USC 2511(2)(d). Consent may be express or implied but, in essence, authorized users must be made aware of and manifest agreement to the interception.

At first glance, obtaining effective consent from authorized users presents a simple and attractive avenue for companies and cyber security providers to conduct monitoring without violating ECPA. User agreements can incorporate notification that communications may be monitored for purposes of network security. However, the ambiguities of ECPA have resulted in real and perceived limitations on the ability to obtain legally-effective consent.

Rapidly evolving case law generates significant uncertainty regarding the scope of consent as it relates to electronic communications monitoring conducted by service providers. In Campbell v. Facebook, a court for the Northern District of California denied Facebook’s motion to dismiss charges under ECPA, rejecting the claim that Facebook had obtained user consent. Despite lengthy user agreements included in Facebook’s “Statement of Rights and Responsibilities” and “Data Use Policy,” the court determined that consent obtained “with respect to the processing and sending of messages does not necessarily constitute consent to … the scanning of message content for use in targeted advertising.” Likewise in In re Google Inc. Gmail Litigation, the same district determined that Google did not obtain adequate consent for the scanning of emails, though in that case, Google’s conduct fell within the “ordinary course of business” definition and thus did not constitute interception for the purposes of ECPA.

Here, and in other instances, courts have determined that companies which are highly sophisticated actors in the field have failed to meet the bar for effective consent despite good faith efforts to comply.

Hennessey’s focus on cases affecting Facebook and, especially, Google provides a pretty clear idea of why those and other tech companies were pretending to oppose CISA without effectively doing so (Google’s Eric Schmidt had said such a law was necessary, but he wasn’t sure whether this law was what was needed).

Hennessey goes on to extend these concerns to third-party permission (that is, contractors who might monitor another company’s network, which Kerr also noted). Perhaps most telling is her discussion of those who don’t count as electronic communications service providers.

Importantly, a large number of private entities require network security monitoring but are not themselves electronic communication service providers. For those entities that do qualify as service providers, it is not unlawful to monitor communications while engaged in activity that is a “necessary incident to” the provision of service or in order to protect the “rights or property” of the provider. But this exception is narrowly construed. In general, it permits providers the right “to intercept and monitor [communications] placed over their facilities in order to combat fraud and theft of service.” U.S. v. Villanueva, 32 F. Supp. 2d 635, 639 (S.D.N.Y. 1998). In practice, the exception does not allow for unlimited or widespread monitoring nor does it, standing alone, expressly permit the provision of data collected under this authority to the government or third parties.

Note how she assumes non-ECSPs would need to conduct “unlimited” monitoring and sharing with the government and third parties. That goes far beyond her claims about “responsible cybersecurity measures,” without any discussion of how such unlimited monitoring protects privacy (which is her larger claim).

Curiously, Hennessey entirely ignores what Kerr examines (and finds less dangerous than tech companies’ statements indicated): counter–er, um, defensive measures, which tech companies had worried would damage their infrastructure. As I noted, Richard Burr went out of his way to prevent Congress from getting reporting on whether that happened, which suggests it’s a real concern. Hennessey also ignores something that totally undermines her claim this is about “responsible cybersecurity measures” — the regulatory immunity that guts the tools the federal government currently uses to require corporations to take such measures. She also doesn’t explain why OmniCISA couldn’t have been done with the same kind of protections envisioned for “domestic security” surveillance under Keith and FISA, which is clearly what CISA is: notably, court review (I have suggested it is likely that FISC refused to permit this kind of surveillance).

I am grateful for Hennessey’s candor in laying out the details that a functional democracy would have laid out before eliminating the warrant requirement for some kinds of domestic wiretapping.

But it’s also worth noting that, even if you concede corporations should be permitted such unfettered monitoring of their customers, and even if you assume the related info-sharing is anywhere near the most urgent thing we can do to prevent network intrusions, OmniCISA does far more than what Hennessey lays out as necessary, and much of that excess is designed to shield all this spying, and the corporations that take part in it, from real review.

Hennessey ends her post by suggesting those of us who are concerned about OmniCISA’s broad language are ignoring limitations within it.

Despite vague allegations from critics that “cybersecurity purpose” could be read to be all-encompassing, the various definitions and limitations within the act work to create a limited set of permissible activities.

But even if that were true, it’d be meaningless given a set-up that would subject this surveillance only to Inspectors General whose past very diligent efforts to fix abuses have failed. Not even Congress will get key information — such as how often this surveillance leads to a criminal investigation or how many times “defensive measures” break the Internet — it needs to enforce what few limitations there are in this scheme.

All of which is to say that people with far more expertise than I have are reviewing this law, and their reviews only serve to confirm my earlier concerns.

The Heroic IRS Agent Story Should Raise More Questions about Silk Road Investigation

“In these technical investigations, people think they are too good to do the stupid old-school stuff. But I’m like, ‘Well, that stuff still works.’ ”

The NYT got this and many other direct quotes from IRS agent Gary Alford for a complimentary profile of him that ran on Christmas day. According to the story, Alford IDed Ross Ulbricht as a possible suspect for the Dread Pirate Roberts — the operator of the Dark Web site Silk Road — in early June 2013, but it took until September for Alford to get the prosecutor and DEA and FBI Agents working the case to listen to him. The profile claims Alford’s tip was “crucial,” though a typo suggests NYT editors couldn’t decide whether it was the crucial tip or just crucial.

In his case, though, the information he had was the crucial [sic] to solving one of the most vexing criminal cases of the last few years.

On its face, the story (and Alford’s quote) suggests the FBI is so entranced with its hacking ability that it has neglected very, very basic investigative approaches like Google searches. Indeed, if the story is true, it serves as proof that encryption and anonymity don’t thwart FBI investigations as much as Jim Comey would like us to believe when he argues the Bureau needs to back door all our communications.

But I don’t think the story tells the complete truth about the Silk Road investigation. I say that, first of all, because of the timing of Alford’s efforts to get others to further investigate Ulbricht. As noted, the story describes Alford IDing Ulbricht as a potential suspect in early June 2013, after which he put Ulbricht’s name in a DEA database of potential suspects, which presumably should have alerted anyone else on the team that US citizen Ross Ulbricht was a potential suspect in the investigation.

Mr. Alford’s preferred tool was Google. He used the advanced search option to look for material posted within specific date ranges. That brought him, during the last weekend of May 2013, to a chat room posting made just before Silk Road had gone online, in early 2011, by someone with the screen name “altoid.”

“Has anyone seen Silk Road yet?” altoid asked. “It’s kind of like an anonymous Amazon.com.”

The early date of the posting suggested that altoid might have inside knowledge about Silk Road.

During the first weekend of June 2013, Mr. Alford went through everything altoid had written, the online equivalent of sifting through trash cans near the scene of a crime. Mr. Alford eventually turned up a message that altoid had apparently deleted — but that had been preserved in the response of another user.

In that post, altoid asked for some programming help and gave his email address: rossulbricht@gmail.com. Doing a Google search for Ross Ulbricht, Mr. Alford found a young man from Texas who, just like Dread Pirate Roberts, admired the free-market economist Ludwig von Mises and the libertarian politician Ron Paul — the first of many striking parallels Mr. Alford discovered that weekend.

When Mr. Alford took his findings to his supervisors and failed to generate any interest, he initially assumed that other agents had already found Mr. Ulbricht and ruled him out.

But he continued accumulating evidence, which emboldened Mr. Alford to put Mr. Ulbricht’s name on the D.E.A. database of potential suspects, next to the aliases altoid and Dread Pirate Roberts.

At the same time, though, Mr. Alford realized that he was not being told by the prosecutors about other significant developments in the case — a reminder, to Mr. Alford, of the lower status that the I.R.S. had in the eyes of other agencies. And when Mr. Alford tried to get more resources to track down Mr. Ulbricht, he wasn’t able to get the surveillance and the subpoenas he wanted.

Alford went to the FBI and DOJ with Ulbricht’s ID in June 2013, but FBI and DOJ refused to issue even subpoenas, much less surveil Ulbricht.

But over the subsequent months, Alford continued to investigate. In “early September” he had a colleague do another search on Ulbricht, which revealed he had been interviewed by Homeland Security in July 2013 for obtaining fake IDs.

In early September, he asked a colleague to run another background check on Mr. Ulbricht, in case he had missed something.

The colleague typed in the name and immediately looked up from her computer: “Hey, there is a case on this guy from July.”

Agents with Homeland Security had seized a package with nine fake IDs at the Canadian border, addressed to Mr. Ulbricht’s apartment in San Francisco. When the agents visited the apartment in mid-July, Mr. Ulbricht answered the door, and the agents identified him as the face on the IDs, without having any idea of his potential links to Silk Road.

When Alford told prosecutor Serrin Turner of the connection (again, this is September 2013), the AUSA finally did his own search in yet another database, the story claims, only to discover Ulbricht lived in the immediate vicinity of where Dread Pirate Roberts was accessing Silk Road. And that led the Feds to bust Ulbricht.

I find the story — the claim that without Alford’s Google searches, FBI did not and would not have IDed Ulbricht — suspect for two reasons.

First, early June is the date that FBI Agent Christopher Tarbell’s declaration showed (but did not claim) FBI first hacked Silk Road. That early June date was itself suspect because Tarbell’s declaration really showed data from as early as February 2013 (which is, incidentally, when Alford was first assigned to the team). In other words, while it still seems likely FBI was always lying about when it hacked into Silk Road, the coincidence between when Alford says he went to DOJ and the FBI with Ulbricht’s ID and when the evidence they were willing to share with the defense claimed to have first gotten a lead on Silk Road is of interest. All the more so given that the FBI claimed it could legally hack the server because it did not yet know the server was run by an American, and so it treated the Iceland-based server as a foreigner for surveillance purposes.

One thing that means is that DOJ may not have wanted to file paperwork to surveil Ulbricht because admitting they had probable cause to suspect an American was running Silk Road would make their hack illegal (and/or would have required FBI to start treating Ulbricht as the primary target of the investigation; it seems FBI may have been trying to do something else with this investigation). By delaying the time when DOJ took notice of the fact that Silk Road was run by an American, they could continue to squat on Silk Road without explaining to a judge what they were doing there.

The other reason I find this so interesting is because several of the actions to which corrupt DEA agent Carl Force pled guilty — selling fake IDs and providing inside information — took place between June and September 2013, during the precise period when everyone was ignoring Alford’s evidence and the fact that he had entered Ulbricht’s name as a possible alias for the Dread Pirate Roberts into a DEA database. Of particular note, Force’s guilty plea only admitted to selling the fake IDs for 400 bitcoin, and provided comparatively few details about that action, but the original complaint against Force explained he had sold the IDs for 800 bitcoin but refunded Ulbricht 400 bitcoin because “the deal for the fraudulent identification documents allegedly fell through” [emphasis mine].

Were those fake IDs that Force sold Ulbricht the ones seized by Homeland Security and investigated in July 2013? Did the complaint say the deal “allegedly” fell through because it didn’t so much fall through as get thwarted? Did something — perhaps actions by Force — prevent other team members from tying that seizure to Ulbricht? Or did everyone know about it, but pretend not to, until Alford made them pay attention (perhaps with a communications trail that other Feds couldn’t suppress)? Was the ID sale part of the investigation, meant to ID Ulbricht’s identity and location, but Force covered it up?

In other words, given the record of Force’s actions, it seems more likely that at least some people on the investigative team already knew what Alford found in a Google search, but for both investigative (the illegal hack that FBI might have wanted to extend for other investigative reasons) and criminal (the money Force was making) reasons, no one wanted to admit that fact.

Now, I’m not questioning the truth of what Alford told the NYT. But even his story (which is corroborated by people “briefed on the investigation,” but only one person who actually attended any of the meetings for it; most of those people are silent about Alford’s claims) suggests there may be other explanations why no one acted on his tip, particularly given the fact that he appears to have been unable to do database searches himself and that they refused to do further investigation into Ulbricht. (I also wonder whether Alford’s role explains why the government had the IRS in San Francisco investigate Force and corrupt Secret Service Agent Shaun Bridges, rather than New York, where agents would have known these details.)

Indeed, I actually think this complimentary profile might have been a way for Alford to expose further cover-ups in the Silk Road investigation without seeming to do so for any but self-interested reasons. Bridges was sentenced on December 7. Ulbricht was originally supposed to have submitted his opening appellate brief — focusing on Fourth Amendment issues that may be implicated by these details — on December 11, but on December 2, the court extended that deadline until January 12.

I don’t know whether Ulbricht’s defense learned these details. I’m admittedly not familiar enough with the public record to know, though given the emphasis on Tarbell’s declaration as the explanation for how they discovered Ulbricht, and the NYT’s assertion that Alford’s role and the delay were “largely left out of the documents and proceedings that led to Mr. Ulbricht’s conviction and life sentence this year,” I don’t think it is public. But if they didn’t, then the fact that the investigative team went out of their way to avoid confirming Ulbricht’s readily accessible identity until at least three and probably seven months after they started hacking Silk Road, even while key team members were stealing money from the investigation, might provide important new details about the government’s actions.

And if Alford gets delayed credit for doing simple Google searches as a result, all the better!

If a Close US Ally Backdoored Juniper, Would NSA Tell Congress?

You may have heard that Juniper Networks announced what amounts to a backdoor in its virtual private networks products. Here’s Kim Zetter’s accessible intro of what security researchers have learned so far. And here’s some technical background from Matthew Green.

As Zetter summarizes, the short story is that someone used weaknesses encouraged by NSA to backdoor the security product protecting a lot of American businesses.

They did this by exploiting weaknesses the NSA allegedly placed in a government-approved encryption algorithm known as Dual_EC, a pseudo-random number generator that Juniper uses to encrypt traffic passing through the VPN in its NetScreen firewalls. But in addition to these inherent weaknesses, the attackers also relied on a mistake Juniper apparently made in configuring the VPN encryption scheme in its NetScreen devices, according to Weinmann and other cryptographers who examined the issue. This made it possible for the culprits to pull off their attack.
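For readers who want the mechanics, the weakness in Dual_EC that Shumow and Ferguson flagged publicly in 2007 is that its two fixed constants can be chosen so that whoever picked them holds a trapdoor allowing recovery of the generator’s internal state from its output. Below is a deliberately simplified sketch of that structure; it swaps elliptic-curve point multiplication for ordinary modular exponentiation and omits the output truncation, and every constant in it is invented for illustration. It is not Juniper’s code or NIST’s parameters.

```python
# Simplified analogue of the Dual_EC trapdoor, using modular exponentiation
# in place of elliptic-curve point multiplication and skipping the output
# truncation. All constants are toy values invented for illustration.

p = 2_147_483_647            # toy prime modulus (2^31 - 1)
g2 = 16807                   # public constant, analogue of the point Q
d = 123457                   # trapdoor scalar known only to whoever chose g1
g1 = pow(g2, d, p)           # public constant, analogue of the point P

def drbg_step(state):
    """One simplified generator step: returns (next_state, output)."""
    next_state = pow(g1, state, p)   # analogue of x(s * P)
    output = pow(g2, state, p)       # analogue of x(s * Q), handed to the app
    return next_state, output

# The generator runs from a secret seed...
seed = 987654321
state1, out1 = drbg_step(seed)
_, out2 = drbg_step(state1)

# ...but anyone holding d recovers the internal state from a single output:
# out1 = g2^seed, so out1^d = (g2^d)^seed = g1^seed = state1.
recovered = pow(out1, d, p)
assert recovered == state1
print("next output predicted:", drbg_step(recovered)[1] == out2)
```

The practical takeaway: whoever chooses, or can quietly swap out, those constants effectively holds a master key to everything the generator protects.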

As Green describes, the key events probably happened at least as early as 2007 and 2012 (contrary to the presumption of surveillance hawk Stewart Baker looking to scapegoat those calling for more security). Which means this can’t be a response to the Snowden document strongly suggesting the NSA had pushed those weaknesses in Dual_EC.

I find that particularly interesting, because it suggests whoever did this either used public discussions about the weakness of Dual_EC, dating to 2007, to identify and exploit this weakness, or figured out what (it is presumed) the NSA was up to. That suggests two likely culprits for what has been assumed to be a state actor behind this: Israel (because it knows so much about NSA from having partnered on things like StuxNet) or Russia (which was getting records on the FiveEyes’ SIGINT activities from its Canadian spy, Jeffrey Delisle).  The UK would be another obvious guess, except an Intercept article describing how NSA helped UK backdoor Juniper suggests they used another method.

Which leads me back to an interesting change I noted between CISA — the bill passed by the Senate back in October — and OmniCISA — the version passed last week as part of the omnibus funding bill. OmniCISA still required the Intelligence Community to provide a report on the most dangerous hacking threats, especially state actors, to the Intelligence Committees. But it eliminated a report for the Foreign Relations Committees on the same topic. I joked at the time that that was probably to protect Israel, because no one wants to admit that Israel spies and has greater ability to do so by hacking than other nation-states, especially because it surely learns our methods by partnering with us to hack Iran.

Whoever hacked Juniper, the whole incident offers a remarkable lesson in the dangers of backdoors. Even as FBI demands a backdoor into Apple’s products, it is investigating who used a prior US-sponsored backdoor to do their own spying.

So-Called Oversight in OmniCISA

I did a working thread of the surveillance portion of the version of CISA in the omnibus funding bill here. The short version: it is worse even than CISA was on most counts, although there are a few changes, such as swapping “person” for “individual” throughout the privacy guidelines, that will have interesting repercussions for non-biological persons.

As I said in that post, I’m going to take a closer look at the privacy provisions that didn’t get stripped from the bill; the biggest change, though, is to eliminate a broad biennial review by the Privacy and Civil Liberties Oversight Board entirely, replacing it with a very narrow assessment, by the Comptroller General (?!), of whether the privacy scrub is working. Along with the prohibition on PCLOB accessing information on covert ops, which got pulled in as part of the Intelligence Authorization incorporated into the bill, it’s clear the Omnibus as a whole aims to undercut PCLOB.

So here’s what counts as “oversight” in OmniCISA. Note, the “appropriate Federal agencies” are the agencies that automatically get information under the sharing system:

(A) The Department of Commerce

(B) The Department of Defense

(C) The Department of Energy

(D) The Department of Homeland Security

(E) The Department of Justice

(F) The Department of the Treasury

(G) The Office of the Director of National Intelligence

Report on Implementation

Timing: less than one year after passage

Completed by: heads of appropriate Federal agencies

This is basically a report on whether the information sharing bureaucracy is working to share information effectively. It totally blows off privacy questions and doesn’t require an independent assessment. This report includes:

(A) An evaluation of the effectiveness of real-time information sharing through the capability and process developed under section 105(c), including any impediments to such real-time sharing.

(B) An assessment of whether cyber threat indicators or defensive measures have been properly classified and an accounting of the number of security clearances authorized by the Federal Government for the purpose of sharing cyber threat indicators or defensive measures with the private sector.

(C) The number of cyber threat indicators or defensive measures received through the capability and process developed under section 105(c).

(D) A list of Federal entities that have received cyber threat indicators or defensive measures under this title.

Biennial Report on Compliance

Timing: At least every two years

Completed by: Inspectors General of appropriate Federal agencies, plus Intelligence Community and Council of Inspectors General on Financial Oversight

This report assesses both the efficacy questions covered in the one-year report and privacy protections. But it swaps out a general requirement that the IGs assess, “The degree to which such information may affect the privacy and civil liberties of specific persons,” with (D)(ii), below, which is tied to whether information is “related to a cybersecurity threat.” Since everything collected would be “related to” a cyberthreat (that is, collected because of some technical connection to one), this framing makes it unlikely the review will ever flag a too-broad interpretation of “related to,” even where that interpretation undercuts privacy.

It includes:

(A) An assessment of the sufficiency of the policies, procedures, and guidelines relating to the sharing of cyber threat indicators within the Federal Government, including those policies, procedures, and guidelines relating to the removal of information not directly related to a cybersecurity threat that is personal information of a specific individual or information that identifies a specific individual.

(B) An assessment of whether cyber threat indicators or defensive measures have been properly classified and an accounting of the number of security clearances authorized by the Federal Government for the purpose of sharing cyber threat indicators or defensive measures with the private sector.

(C) A review of the actions taken by the Federal Government based on cyber threat indicators or defensive measures shared with the Federal Government under this title, including a review of the following:

(i) The appropriateness of subsequent uses and disseminations of cyber threat indicators or defensive measures.

(ii) Whether cyber threat indicators or defensive measures were shared in a timely and adequate manner with appropriate entities, or, if appropriate, were made publicly available.

(D) An assessment of the cyber threat indicators or defensive measures shared with the appropriate Federal entities under this title, including the following:

(i) The number of cyber threat indicators or defensive measures shared through the capability and process developed under section 105(c).

(ii) An assessment of any information not directly related to a cybersecurity threat that is personal information of a specific individual or information  identifying a specific individual and was shared by a non-Federal government entity with the Federal government in contravention of this title, or was shared within the Federal Government in contravention of the guidelines required by this title, including a description of any significant violation of this title.

(iii) The number of times, according to the Attorney General, that information shared under this title was used by a Federal entity to prosecute an offense listed in section 105(d)(5)(A).

(iv) A quantitative and qualitative assessment of the effect of the sharing of cyber threat indicators or defensive measures with the Federal Government on privacy and civil liberties of specific individuals, including the number of notices that were issued with respect to a failure to remove information not directly related to a cybersecurity threat that was personal information of a specific individual or information that identified a specific individual in accordance with the procedures required by section 105(b)(3)(E).

(v) The adequacy of any steps taken by the Federal Government to reduce any adverse effect from activities carried out under this title on the privacy and civil liberties of United States persons.

(E) An assessment of the sharing of cyber threat indicators or defensive measures among Federal entities to identify inappropriate barriers to sharing information.

Independent Report on Removal of Personal Information

Timing: Not later than 3 years after passage

Completed by: Comptroller General

This review will measure “the actions taken by the Federal Government to remove personal information from cyber threat indicators or defensive measures pursuant to this title,” assessing whether the policies and procedures established by the bill are sufficient to address concerns about privacy and civil liberties.

 

Working Thread, Cybersecurity Act

As I’ve been reporting, Paul Ryan added a version of the Cybersecurity Information Sharing Act to the omnibus. It starts on page 1728. This will be my working thread.

(1745) They’ve changed what gets stripped from “person” to “individual,” thereby not requiring that corporate names get stripped.

(1747) The bill takes out CISA’s requirement of getting authorization before using an indicator for law enforcement.

(1753) The section ensuring there are audit capabilities (but not that they’re used) takes out this language, which was in CISA.

(C) consistent with this title, any other applicable provisions of law, and the fair information practice principles set forth in appendix A of the document entitled “National Strategy for Trusted Identities in Cyberspace” and published by the President in April, 2011, govern the retention, use, and dissemination by the Federal Government of cyber threat indicators shared with the Federal Government under this title, including the extent, if any, to which such cyber threat indicators may be used by the Federal Government; and

(1754) This section replaced an “or” in CISA with the underlined “and,” which I think sharply constrains the list of stuff that shouldn’t be shared. (It also replaces “person” with “individual,” consistent with other changes.)

(i) Identification of types of information that would qualify as a cyber threat indicator under this title that would be unlikely to include information that—

(I) is not directly related to a cybersecurity threat; and

(II) is personal information of a specific individual or information that identifies a specific individual.

(1755) OmniCISA requires the AG to make both the interim and final privacy guidelines public; CISA had only made the interim ones public.

jointly issue and make publicly available final guidelines

(1760) The clause noting that other info sharing is still permissible adds the underlined language (the “including” clause below).

(i) reporting of known or suspected criminal activity, by a non-Federal entity to any other non-Federal entity or a Federal entity, including cyber threat indicators or defensive measures shared with a Federal entity in furtherance of opening a Federal law enforcement investigation;

(1761-2) The bill basically gives DHS 90 days (60, really) to set up its portal before the President can declare the need to set up a competing one. This also involves slightly different timing on notice to Congress of whether DHS manages to pull it together in 90 days.

IN GENERAL.—At any time after certification is submitted under subparagraph (A), the President may designate an appropriate Federal entity, other than the Department of Defense (including the National Security Agency), to develop and implement a capability and process as described in paragraph (1) in addition to the capability and process developed under such paragraph by the Secretary of Homeland Security, if, not fewer than 30 days before making such designation, the President submits to Congress a certification and explanation that—

(I) such designation is necessary to ensure that full, effective, and secure operation of a capability and process for the Federal Government to receive from any non-Federal entity cyber threat indicators or defensive measures under this title;

(1766) OmniCISA is slightly better on threat-of-death sharing, as the threat must now be specific.

(iii) the purpose of responding to, or otherwise preventing or mitigating, a specific threat of death, a specific threat of serious bodily harm, or a specific threat of serious economic harm, including a terrorist act or a use of a weapon of mass destruction;

(1768-9) Wow. The regulatory exception is even bigger than it was under CISA. Here’s what CISA said (underline added in both):

(i) IN GENERAL.—Except as provided in clause (ii), cyber threat indicators and defensive measures provided to the Federal Government under this title shall not be directly used by any Federal, State, tribal, or local government to regulate, including an enforcement action, the lawful activities of any entity, including activities relating to monitoring, operating defensive measures, or sharing cyber threat indicators.

And here’s what OmniCISA says:

(i) IN GENERAL.—Except as provided in clause (ii), cyber threat indicators and defensive measures provided to the Federal Government under this title shall not be  used by any Federal, State, tribal, or local government to regulate, including an enforcement action, the lawful activities of any non-Federal entity or any activities taken by a non-Federal entity pursuant to mandatory standards, including activities relating to monitoring, operating defensive measures, or sharing cyber threat indicators.

(1771) The Rule of Construction is more permissive in OmniCISA, too. Compare CISA:

(c) Construction.—Nothing in this section shall be construed—

(1) to require dismissal of a cause of action against an entity that has engaged in gross negligence or willful misconduct in the course of conducting activities authorized by this title; or

With OmniCISA.

CONSTRUCTION.—Nothing in this title shall be construed—

(1) to create—

(A) a duty to share a cyber threat indicator or defensive measure; or

(B) a duty to warn or act based on the receipt of a cyber threat indicator or defensive measure; or

Whereas CISA still permitted the government to pursue a company for gross negligence, OmniCISA instead makes clear that companies can ignore cyber information the government shares with them.

(1771) I’m going to circle back and compare the various oversight reporting from all four bills in more detail. But the big takeaway is that they’ve stripped a PCLOB review from all 3 of the underlying bills.

(1782) I’m not sure what this new language does. A lawyer who works in this area thinks it protects Brady obligations. I hope he’s right, and that it’s not, instead, a way to erode the limits on using shared data for prosecution.

(n) CRIMINAL PROSECUTION.—Nothing in this title shall be construed to prevent the disclosure of a cyber threat indicator or defensive measure shared under this title in a case of criminal prosecution, when an applicable provision of Federal, State, tribal, or local law requires disclosure in such case.

(1783) In a (long-overdue) report on how to deal with hacking, OmniCISA takes out a report on this topic specifically done for the Foreign Relations Committee, suggesting this information will remain classified and potentially unavailable to the committees. I guess they have to hide Israel’s spying.

(2) A list and an assessment of the countries and nonstate actors that are the primary threats of carrying out a cybersecurity threat, including a cyber attack, theft, or data breach, against the United States and which threaten the United States national security, economy, and intellectual property.

(1785) This is the sunset language. It doesn’t seem to sunset anything.

(a) IN GENERAL.—Except as provided in subsection (b), this title and the amendments made by this title shall be effective during the period beginning on the date of the enactment of this Act and ending on September 30, 2025.

“Encryption” Is Just Intel Code for “Failure to Achieve Omniscience”

After receiving a briefing on the San Bernardino attack, Richard Burr went out and made two contradictory claims. First, Burr — and/or other sources for The Hill — said that there was no evidence that Tashfeen Malik and Syed Rizwan Farook used encryption.

Lawmakers on Thursday said there was no evidence yet that the two suspected shooters used encryption to hide from authorities in the lead-up to last week’s San Bernardino, Calif., terror attack that killed 14 people.

“We don’t know whether it played a part in this attack,” Senate Intelligence Committee Chairman Richard Burr (R-N.C.) told reporters following a closed-door briefing with federal officials on the shootings.

That’s consistent with what we know so far. After all, a husband and wife wouldn’t need to — or have a way of — encrypting their communications with each other, as those would be mostly face-to-face. The fact that they tried to destroy their devices (and apparently got rid of a still-undiscovered hard drive) suggests they were protecting that data via physical destruction rather than encryption. That doesn’t rule out their using both, but the FBI would presumably know if the devices they’ve reconstructed were encrypted.

So it makes sense that the San Bernardino attacks did not use encryption.

But then later in the same discussion with reporters, Burr suggested Malik and Farook must have used encryption because the IC didn’t know about their attack.

Burr suggested it might have even played a role in the accused San Bernardino shooters — Tashfeen Malik and Syed Rizwan Farook — going unnoticed for years, despite the FBI saying they had been radicalized for some time.

“Any time you glean less information at the beginning, clearly encryption probably played a role in it,” he said. “And there were a lot of conversations that went on between these two individuals before [Malik] came to the United States that you would love to have some insight to other than after an attack took place.”

This is a remarkable comment!

After all, the FBI and NSA don’t even read all the conversations they lawfully can collect from foreigners, which Malik legally still was. Indeed, if these conversations were in Arabic or Urdu, the IC would only have had them translated if there were some reason to find them interesting. And in spite of the pair’s early shooting training, it’s not apparent they had extensive conversations, particularly not online, to guide that training.

Those details would make it likely that the IC would have had no reason to be interested. To say nothing of the fact that ultimately “radicalization” is a state of mind, and thus far, NSA doesn’t have a way to decrypt thoughts.

But this is the second attack in a row, after Paris, where Burr and others have suggested that their lack of foreknowledge of the attack makes it probable the planners used encryption. Burr doesn’t even seem to be considering that a number of other things, such as good operational security, language barriers, and metadata failures, might lead the IC to miss warning signs, even assuming they’re collecting everything (there should have been no legal limits on their ability to collect on Malik).

We’re not having a debate about encryption anymore. We’re debating making the Internet less secure to excuse the IC’s less-than-perfect-omniscience.

Dianne Feinstein’s Encrypted Playstation Nightmare

I’ve complained about Dianne Feinstein’s inconsistency on cybersecurity, specifically as it relates to Sony, before. The week before the attack on Paris, cybersecurity was the biggest threat, according to her. And Sony was one of the top targets, both of criminal identity thieves and — if you believe the Administration — of nation-states like North Korea. If you believe that, you believe that Sony should have the ability to use encryption to protect its business and users. But, in the wake of Paris and Belgian Interior Minister Jan Jambon’s claim that terrorists are using Playstations, Feinstein changed her tune, arguing serial hacking target Sony should not be able to encrypt its systems to protect users.

Her concerns took a bizarre new twist in an FBI oversight hearing today. Now, she’s concerned that if a predator decides to target her grandkids while they’re playing on a Playstation, that will be encrypted.

I have concern about a Playstation which my grandchildren might use and a predator getting on the other end, talking to them, and it’s all encrypted.

Someone needs to explain to DiFi that her grandkids are probably at greater risk from predators hacking Sony to get personal information about them, and then using that information to abduct them or whatever.

Sony’s the perfect example of how security hawks like Feinstein need to choose: either her grandkids face risks because Sony doesn’t encrypt its systems, or they do because it does.

The former risk is likely the much greater risk.

In One of His First Major Legislative Acts, Paul Ryan Trying to Deputize Comcast to Narc You Out to the Feds

As the Hill reports, Speaker Paul Ryan is preparing to add a worsened version of the Cybersecurity Information Sharing Act to the omnibus budget bill, bypassing the jurisdictional interests of Homeland Security Chair Mike McCaul in order to push through the most privacy-invasive version of the bill.

But several people tracking the negotiations believe McCaul is under significant pressure from House Speaker Paul Ryan (R-Wis.) and other congressional leaders to not oppose the compromise text.

They said lawmakers are aiming to vote on the final cyber bill as part of an omnibus budget deal that is expected before the end of the year.

As I laid out in October, it appears CISA — even in the form that got voted out of the Senate — would serve as a domestic “upstream” spying authority, providing the government a way to spy domestically without a warrant.

CISA permits the telecoms to do the kinds of scans they currently do for foreign intelligence purposes for cybersecurity purposes in ways that (unlike the upstream 702 usage we know about) would not be required to have a foreign nexus. CISA permits the people currently scanning the backbone to continue to do so, only it can be turned over to and used by the government without consideration of whether the signature has a foreign tie or not. Unlike FISA, CISA permits the government to collect entirely domestic data.

We recently got an idea of how this might work. Comcast is basically hacking its own users to find out if they’re downloading copyrighted material.

[Comcast] has been accused of tapping into unencrypted browser sessions and displaying warnings that accuse the user of infringing copyrighted material — such as sharing movies or downloading from a file-sharing site.

That could put users at risk, says the developer who discovered it.

Jarred Sumner, a San Francisco, Calif.-based developer who published the alert banner’s code on his GitHub page, told ZDNet in an email that this could cause major privacy problems.

Sumner explained that Comcast injects the code into a user’s browser as they are browsing the web, performing a so-called “man-in-the-middle” attack. (Comcast has been known to alert users when they have surpassed their data caps.) This means Comcast intercepts the traffic between a user’s computer and their servers, instead of installing software on the user’s computer.

[snip]

“This probably means that Comcast is using [deep packet inspection] on subscriber’s internet and/or proxying subscriber internet when they want to send messages to subscribers,” he said. “That would let Comcast modify unencrypted traffic in both directions.”

In other words, Comcast is already doing the same kind of deep packet inspection of its users’ unencrypted activity as the telecoms use in upstream collection for the NSA. Under CISA, they’d be permitted — and Comcast sure seems willing — to do such searches for the Feds.
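To make the mechanics concrete, here is a toy sketch, with invented page content and a made-up banner, of the kind of in-path rewrite Sumner describes: an intermediary that can read an unencrypted HTTP response can also modify it before passing it along. It is not Comcast’s implementation, just an illustration of why readable traffic is also modifiable traffic.

```python
import re

# Toy illustration (not Comcast's actual system): an in-path box that can
# read a plaintext HTTP response can also splice content into it, e.g. an
# alert banner inserted before </body>. Over HTTPS the intermediary sees
# only ciphertext and cannot do this without breaking the TLS session.

BANNER = (b'<div style="position:fixed;top:0;width:100%;background:#fc0">'
          b'Notice from your ISP: please contact customer support.</div>')

def inject_banner(http_response: bytes) -> bytes:
    """Rewrite a plaintext HTTP response in transit, splicing in a banner."""
    headers, _, body = http_response.partition(b"\r\n\r\n")
    if b"content-type: text/html" not in headers.lower() or b"</body>" not in body:
        return http_response                     # only touch HTML pages
    body = body.replace(b"</body>", BANNER + b"</body>", 1)
    # Keep the framing valid: update Content-Length to match the new body.
    headers = re.sub(rb"(?i)content-length: \d+",
                     b"Content-Length: %d" % len(body), headers)
    return headers + b"\r\n\r\n" + body

if __name__ == "__main__":
    resp = (b"HTTP/1.1 200 OK\r\n"
            b"Content-Type: text/html\r\n"
            b"Content-Length: 55\r\n\r\n"
            b"<html><body><p>Hello from example.com</p></body></html>")
    print(inject_banner(resp).decode())
```

The only thing that takes this capability away from an intermediary is encrypting the traffic in transit.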

Some methods of downloading copyrighted content might already be considered a cyberthreat indicator that Comcast could report directly to the Federal government (and possibly, under this latest version, directly to the FBI). And there are reports that the new version will adopt an expanded list of crimes, to include the Computer Fraud and Abuse Act.

In other words, it’s really easy to see how under this version of CISA, the government would ask Comcast to hack you to find out if you’re doing one of the long list of things considered hacking — a CFAA violation — by the Feds.

How’s that for Paul Ryan’s idea of conservatism, putting the government right inside your Internet router as one of his first major legislative acts?

Internet of Things: Now, with ‘Breachable’ Kids Connect and ‘Hackable’ Barbie


[graphic: Hello Barbie via Mattel’s website]

The Internet of Things (IoT) already includes refrigerators, televisions, slow cookers, automobiles, you name it. Most of these items have already experienced security problems, whether personal information leaks or manipulative hacking.

Now the IoT includes toys — and wow, what a surprise! They’re riddled with privacy and security problems, too.

Like VTech’s privacy breach, which exposed data for more than 6 million children and parents, including facial photos and chat logs, through its Kids Connect technology. The company’s privacy policy (last archived copy) indicated communications would be encrypted, but the encryption proved whisper thin.

Or Mattel’s Hello Barbie, whose Wi-Fi-enabled communications are at risk of hacking and unauthorized surveillance. The flaws include the doll’s willingness to connect to any Wi-Fi network named “Barbie” — it was absolutely brain-dead easy to spoof such a network and begin snooping on anything this doll could “hear.”
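Here is a hypothetical sketch of why that design fails. The function below is invented for illustration, not taken from the toy’s firmware; it just shows that when the only check before joining a network is its name, anyone who stands up an access point with a matching name passes the check.

```python
# Hypothetical illustration (not the actual firmware) of the reported pairing
# shortcut: auto-joining any Wi-Fi network whose name looks like "Barbie".
# SSIDs are chosen freely by whoever runs the access point, so a name check
# like this authenticates nothing; a rogue hotspot passes it too.

def should_auto_join(ssid: str) -> bool:
    """Reported behavior: trust any network whose name looks Barbie-related."""
    return "barbie" in ssid.lower()

print(should_auto_join("Barbie"))           # True: the legitimate setup network
print(should_auto_join("Barbie Free WiFi")) # True: an attacker's rogue hotspot
print(should_auto_join("HomeNetwork"))      # False
```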

It’s amazing these manufacturers ever thought these toys were appropriate for the marketplace, given their target audience. In VTech’s case, it appears to be nearly all ages (its Android app on Google Play is unrated), and in the case of Mattel’s Hello Barbie, it’s primarily girls ages 6-15.

These devices are especially iffy since they tippy-toe along the edge of the Children’s Online Privacy Protection Act of 1998 (a.k.a. COPPA, 15 U.S.C. 6501–6505).

Parents share much of the blame, too. Most have no clue what federal law covers, or how, when it comes to children’s internet use under COPPA, or what the Children’s Internet Protection Act (a.k.a. CIPA, 47 CFR 54.520) requires. Nor do the parents who buy these devices appear to grasp this basic fact: any network-mediated or Wi-Fi toy, apart from the obvious cellphone/tablet/PC, is at implicit risk of leaking personal data or of being hacked. How are devices that risk exposing children’s data, including their activities and location, age-appropriate toys?

This piece at Computerworld has a few helpful suggestions. In my opinion, the IoT doesn’t belong in your kids’ toybox until your kids are old enough to understand and manage personal digital information security, and to use the internet safely.

Frankly, many parents aren’t ready for safe internet use.

Dianne Feinstein Inadvertently Calls to Expose America’s Critical Infrastructure to Hackers

For days now, surveillance hawks have been complaining that terrorists probably used encryption in their attack on Paris last Friday. That, in spite of the news that authorities used a phone one of the attackers threw in a trash can to identify a hideout in St. Denis (this phone in fact might have been encrypted and brute force decrypted, but given the absence of such a claim and the quick turnaround on it, most people have assumed both it and the pre-attack chats on it were not encrypted).

I suspect we’ll learn attackers did use encryption (and a great deal of operational security that has nothing to do with encryption) at some point in planning their attack — though the entire network appears to have been visible through metadata and other intelligence. Thus far, however, there’s only one way we know of that the terrorists used encryption leading up to the attack: when one of them paid for things like a hotel online, the processing of his credit card (which was in his own name) presumably took place over HTTPS (hat tip to William Ockham for first making that observation). So if we’re going to blindly demand we prohibit the encryption the attackers used, we’re going to commit ourselves to far, far more hacking of online financial transactions.

I’m more interested in the concerns about terrorists’ claimed use of PlayStation 4. Three days before the attack, Belgium’s Interior Minister said all countries were having problems with PlayStation 4s, which led to a frenzy mistakenly claiming the Paris terrorists had used the console (there’s far more reason to believe they used Telegram).

One of those alternatives was highlighted on Nov. 11, when Belgium’s federal home affairs minister, Jan Jambon, said that a PlayStation 4 (PS4) console could be used by ISIS to communicate with their operatives abroad.

“PlayStation 4 is even more difficult to keep track of than WhatsApp,” said Jambon, referencing to the secure messaging platform.

Earlier this year, Reuters reported that a 14-year-old boy from Austria was sentenced to a two-year jail term after he downloaded instructions on bomb-building onto his Playstation games console, and was in contact with ISIS.

It remains unclear, however, how ISIS would have used PS4s, though options range from the relatively direct methods of sending messages to players or voice-chatting, to more elaborate methods cooked up by those who play games regularly. Players, for instance, can use their weapons during a game to send a spray of bullets onto a wall, spelling out whole sentences to each other.

This has DiFi complaining that Playstation is encrypted.

Even Playstation is encrypted. It’s very hard to get the data you need because it’s encrypted

Thus far, it’s not actually clear most communications on Playstation are encrypted (though players may be able to pass encrypted objects about); most people I’ve asked think the communications are not encrypted, though Sony isn’t telling. What is likely is that there’s not an easy way to collect metadata tracking the communications within games, which would make them hard to collect on, whether or not some parts of the communications data are encrypted.

But at least one kind of data on Playstations — probably two — is encrypted: Credit cards and (probably) user data. That’s because 4 years ago, Playstation got badly hacked.

“The entire credit card table was encrypted and we have no evidence that credit card data was taken,” said Sony.

This is the slimmest amount of good news for PlayStation Network users, but it alone raises very serious concerns, since Sony has yet to provide any details on what sort of encryption has been used to protect that credit card information.

As a result, PlayStation Network users have absolutely no idea how safe their credit card information may be.

But the bad news keeps rolling in:

“The personal data table, which is a separate data set, was not encrypted,” Sony notes, “but was, of course, behind a very sophisticated security system that was breached in a malicious attack.”

A very sophisticated security system that ultimately failed, making it useless.

Why Sony failed to encrypt user account data is a question that security experts have already begun to ask. Along with politicians both in the United States and abroad.

Chances are Sony’s not going to have an answer that’s going to please anyone.

That was one in a series of really embarrassing hacks, and I assume Sony has locked things down more since. Three years after that Playstation hack, of course, Sony’s movie studio would be declared critical infrastructure after it, too, got hacked.

Here’s the thing: serially negligent companies like Sony are exactly the ones we need to embrace good security if the US is going to keep itself secure. We should be saying, “Encrypt away, Sony! Please keep yourself safe, because hackers love to hack you and they’ve had spectacular success doing so! Jolly good!”

But we can’t, at the same time, be complaining that Sony offers some level of encryption as if that makes the company a material supporter of terrorism. Sony is a perfect example of how you can’t have it both ways, secure against hackers but not against wiretappers.

Amid the uproar about terrorists maybe using encryption, the ways they may have — to secure online financial transactions and game player data — should be a warning about condemning encryption broadly.

Because next week, when hackers attack us, we’ll be wishing our companies had better encryption to keep us safe.
