Legal Analysis of OmniCISA Reinforces Cause for Concern

Among all the commentaries about CISA published before its passage, only one I know of (aside from my non-lawyer take here) dealt with what the bill did legally: this Jennifer Granick post explaining how OmniCISA will “stake out a category of ISP monitoring that the FCC and FTC can’t touch, regardless of its privacy impact on Americans,” thereby undercutting recent efforts to increase online privacy.

Since the bill passed into law, however, two lawyers have written really helpful, detailed posts on what it does: Fourth Amendment scholar Orin Kerr and former NSA lawyer Susan Hennessey.

As Kerr explains, existing law had permitted Internet operators to surveil their own networks for narrowly tailored upkeep and intrusion purposes. OmniCISA broadened that to permit a provider to monitor (or have a third party monitor) both the network and traffic for a cybersecurity purpose.

[T]he right to monitor appears to extend to “cybersecurity purposes” generally, not just for the protection of the network operator’s own interests.  And relatedly, the right to monitor includes scanning and acquiring data that is merely transiting the system, which means that the network operator can monitor (or have someone else monitor) for cybersecurity purposes even if the operator isn’t worried about his own part of the network being the victim. Note the difference between this and the provider exception. The provider exception is about protecting the provider’s own network. If I’m reading the language here correctly, this is a broader legal privilege to monitor for cybersecurity threats.

It also permits such monitoring for insider threats.

[T]he Cyber Act may give network operators broad monitoring powers on their own networks to catch not only hackers but also insiders trying to take information from the network.

This accords with Hennessey’s take (and of course, having recently worked at NSA, she knows what they were trying to do). Importantly, she claims providers need to surveil content to take “responsible cybersecurity measures.”

Effective cybersecurity includes network monitoring, scanning, and deep-packet inspection—and yes, that includes contents of communications—in order to detect malicious activity.

In spite of the fact that Hennessey explicitly responded to Granick’s post, and Granick linked a letter from security experts describing the limits of what was really necessary for monitoring networks, Hennessey doesn’t engage on those terms to explain why corporations need to spy on their customers’ content to take responsible cybersecurity measures. It may be as simple as needing to search the contents of packets for known hackers’ signatures, it may relate to surveilling IP theft, or it may extend to reading the content of emails; those are fairly different degrees of electronic surveillance, all of which might be permitted by this law. But credit Hennessey for making clear what CISA boosters in Congress tried so assiduously to hide: this is about warrantless surveillance of content.
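The narrowest of those degrees, searching packet contents for known signatures, can be sketched in a few lines. The byte patterns below are hypothetical placeholders, not real malware indicators:

```python
# Minimal sketch of signature-based payload scanning, the narrowest of the
# surveillance degrees described above. Signature names and bytes are made up.
KNOWN_SIGNATURES = {
    "example-exploit-shellcode": b"\x90\x90\x90\x31\xc0",
    "example-c2-beacon": b"BEACON/1.0",
}

def scan_payload(payload: bytes) -> list:
    """Return the names of any known signatures found in a packet payload."""
    return [name for name, sig in KNOWN_SIGNATURES.items() if sig in payload]

packet = b"GET / HTTP/1.1\r\nUser-Agent: BEACON/1.0\r\n\r\n"
print(scan_payload(packet))
```

Note that even this mechanical approach requires reading every byte of every payload, which is precisely the content-surveillance question at issue; the broader degrees (IP-theft monitoring, reading email) require correspondingly deeper inspection.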

Hennessey lays out why corporations need a new law to permit them to spy on their users’ content, suggesting they used to rely on user agreements to obtain permission, but pointing to several recent court decisions that found user agreements did not amount to implied consent for such monitoring.

If either party to a communication consents to its interception, there is no violation under ECPA, “unless such communication is intercepted for the purpose of committing any criminal or tortious act.” 18 USC 2511(2)(d). Consent may be express or implied but, in essence, authorized users must be made aware of and manifest agreement to the interception.

At first glance, obtaining effective consent from authorized users presents a simple and attractive avenue for companies and cyber security providers to conduct monitoring without violating ECPA. User agreements can incorporate notification that communications may be monitored for purposes of network security. However, the ambiguities of ECPA have resulted in real and perceived limitations on the ability to obtain legally-effective consent.

Rapidly evolving case law generates significant uncertainty regarding the scope of consent as it relates to electronic communications monitoring conducted by service providers. In Campbell v. Facebook, a court for the Northern District of California denied Facebook’s motion to dismiss charges under ECPA, rejecting the claim that Facebook had obtained user consent. Despite lengthy user agreements included in Facebook’s “Statement of Rights and Responsibilities” and “Data Use Policy,” the court determined that consent obtained “with respect to the processing and sending of messages does not necessarily constitute consent to … the scanning of message content for use in targeted advertising.” Likewise in In re Google Inc. Gmail Litigation, the same district determined that Google did not obtain adequate consent for the scanning of emails, though in that case, Google’s conduct fell within the “ordinary course of business” definition and thus did not constitute interception for the purposes of ECPA.

Here, and in other instances, courts have determined that companies which are highly sophisticated actors in the field have failed to meet the bar for effective consent despite good faith efforts to comply.

Hennessey’s focus on cases affecting Facebook and, especially, Google provides a pretty clear idea of why those and other tech companies were pretending to oppose CISA without effectively doing so (Google’s Eric Schmidt had said such a law was necessary, but he wasn’t sure if this law was what was needed).

Hennessey goes on to extend these concerns to third-party permission (that is, contractors who might monitor another company’s network, which Kerr also noted). Perhaps most telling is her discussion of those who don’t count as electronic communications service providers.

Importantly, a large number of private entities require network security monitoring but are not themselves electronic communication service providers. For those entities that do qualify as service providers, it is not unlawful to monitor communications while engaged in activity that is a “necessary incident to” the provision of service or in order to protect the “rights or property” of the provider. But this exception is narrowly construed. In general, it permits providers the right “to intercept and monitor [communications] placed over their facilities in order to combat fraud and theft of service.” U.S. v. Villanueva, 32 F. Supp. 2d 635, 639 (S.D.N.Y. 1998). In practice, the exception does not allow for unlimited or widespread monitoring nor does it, standing alone, expressly permit the provision of data collected under this authority to the government or third parties.

Note how she assumes non-ECSPs would need to conduct “unlimited” monitoring and sharing with the government and third parties. That goes far beyond her claims about “responsible cybersecurity measures,” without any discussion of how such unlimited monitoring protects privacy (which is her larger claim).

Curiously, Hennessey entirely ignores what Kerr examines (and finds less dangerous than tech companies’ statements indicated): counter–er, um, defensive measures, which tech companies had worried would damage their infrastructure. As I noted, Richard Burr went out of his way to prevent Congress from getting reporting on whether that happened, which suggests it’s a real concern. Hennessey also ignores something that totally undermines her claim this is about “responsible cybersecurity measures” — the regulatory immunity that guts the tools the federal government currently uses to require corporations to take such measures. She also doesn’t explain why OmniCISA couldn’t have been done with the same kind of protections envisioned for “domestic security” surveillance under Keith and FISA, which is clearly what CISA is: notably, court review (I have suggested it is likely that FISC refused to permit this kind of surveillance).

I am grateful for Hennessey’s candor in laying out the details that a functional democracy would have laid out before eliminating the warrant requirement for some kinds of domestic wiretapping.

But it’s also worth noting that, even if you concede that corporations should be permitted such unfettered monitoring of their customers, and even if you assume that the related info-sharing is anywhere near the most urgent thing we can do to prevent network intrusions, OmniCISA does far more than what Hennessey lays out as necessary, much of which is designed to shield all this spying, and the corporations that take part in it, from real review.

Hennessey ends her post by suggesting those of us who are concerned about OmniCISA’s broad language are ignoring limitations within it.

Despite vague allegations from critics that “cybersecurity purpose” could be read to be all-encompassing, the various definitions and limitations within the act work to create a limited set of permissible activities.

But even if that were true, it’d be meaningless given a set-up that would subject this surveillance only to Inspectors General whose very diligent past efforts to fix abuses have failed. Not even Congress will get key information — such as how often this surveillance leads to a criminal investigation or how many times “defensive measures” break the Internet — it needs to enforce what few limitations there are in this scheme.

All of which is to say that people with far more expertise than I have are reviewing this law, and their reviews only serve to confirm my earlier concerns.

19 replies
  1. orionATL says:

    the hydra, the nsa hydra.

    there is no more apt metaphor.

    each time hercules cut off one head, two grew back.

    [… Hennessey lays out why corporations need a new law to permit them to spy on their users’ content, … they used to rely on user agreements to obtain permission, but… recent court decisions… found user agreements did not amount to implied consent for such monitoring…]

  2. orionATL says:

    right up front, playwright s. hennessey sets the stage for her little melodrama:

    “Omni-CISA has passed. Privacy advocates are waxing outraged and pundits are tallying the winners and losers…. ”

    but wait, one of the characters is missing in this first scene. who? why the national security uber alles types at nsa/fbi and their corporate camp followers.

    “advocates… waxing”

    “pundits… tallying”

    for completeness sake how about

    natsec uber alles types… smirking (and corporate allies gloating? )

    ahh, symmetry.

    now, what is this little cisa melodrama about?

    a crisis.

    yep. another damned natsec crisis.

    (can you say “cisa crisis” fast ten times?)

    what is the core human emotion this playwright wants us to focus on?

    desire for privacy.

    “privacy”? from cisa? are you shittin’ me?

    no. yes. privacy.

    the cisa legislation is designed to protect your privacy and my privacy.

    damn. i missed that completely.

    ok, lights down.

  3. haarmeyer says:

    I’m feeling very unsatisfied and/or uncomfortable about the framing of this debate. It seems like all the points of view being expressed are within 1 degree of separation from the EFF on one side, and within 1 degree of separation from the NSA on the other. That leaves very little room for an alternative solution to either end-to-end, if-you-forget-your-password-you-might-as-well-smash-your-machine privacy on one side, and we-can-never-keep-the-bad-guys-from-killing-us-all-without-hoovering-the-whole-of-electroniclandia on the other.

    Personally, I have communications I don’t care about, and communications I do. I have snooping I care about (a large amount of which is by corporations not government) and snooping I don’t. Occasionally, I need, and use, extreme privacy measures, occasionally, I throw away information I don’t care who reads.

    That argues for a rich solution involving a lot of different parts, up to and including maybe rewriting some of how the internet works to accommodate it. I never end up hearing that.

    • emptywheel says:

      The solution to hacking is not the same question as the encryption debate, though they are related. Obviously, more encryption will make it harder to hack, but the most common vector is still phishing, which bypasses encryption altogether. And the desire for backdoors really is a conflicting instinct for govt, because it wants to limit hacking but enable wiretapping for both intelligence and criminal purposes. That’s part of the reason the debate is muddied.

      Furthermore, the middle ground should be the technologists, who may seem like EFFers but in many cases are pretty conservative, just radical about their expectations of code.

      In any case, Kerr is actually a pretty good middle. He’s a Republican (clerked for Anthony Kennedy) and on 4A issues is pretty conservative, but on hacking ones he recognizes a real 1A problem with how far CFAA has extended, which he sees as applying to CISA.

      • haarmeyer says:

        Reply to emptywheel @ #11 (tone intended to be neutral and polite, I hope, and no NDAs intentionally broken):

        I’m aware of the differences between hacking and encryption. I’ve done both to some degree before. I’m also aware of vectors and phishing.

        The middle ground is not occupied by the technologists that I’ve read during the debate. Those were the people I was really referring to when I said that there didn’t seem to be anyone who wasn’t one degree of separation from EFF on one side or the NSA on the other. Not that they all don’t have their own agendas as well. But the discussion of solutions is centered on what each group believes is doctrine and not on whether or not we could break a few doctrinal rules and come to something else.

        There are a lot of reasons why the debate is muddied. First off, all debates are muddied right now by adversarial fact gathering (I’m using “adversarial” the way it gets used on, say, The Intercept). In science, it’s quite alright to put up a hypothesis and recruit facts and sources to its support — as a means of generating interest in, or establishing the plausibility of the hypothesis to justify carrying out the experiments (or proofs if it is over towards the queen of sciences) to establish that hypothesis as fact. In law, and in quite a few other “generalist” professions, the recruitment itself forms the basis for establishing fact, and that’s not the same thing. So part of the muddiness comes from mixing law fact and computer science, physics, or math fact.

        And I would contend that part of it comes from the fact that none of the various framings contain all of the problem, and no attempt seems to get made to do so. Not only aren’t the framings monomorphic (they don’t distinguish between phenomena which are different), but they aren’t epimorphic (they don’t catalog all the phenomena) either.

        Here’s a thing: You detailed the Juniper compromises. One of them was caused by and closely related to the Heartbleed bug. That’s which kind in your frame? Hacking or encryption? It’s in the SSL layer, but it isn’t about the encryption itself, it opens a door on the server if the hack is done well, and it only affects the communications of people attempting to use generic encryption of their internet client to server contacts, but it’s a threat from one of those clients to the server providing the SSL support for that communication.

        The other one was apparently the installation (or so it seems) of a Q value in the Juniper source code. Again, this is an encryption problem on the outside, because if it was installed maliciously, and if someone does know the exponent for that Q value, they can listen in to compromised systems. For that reason it was described as a “backdoor” and used by the anti-backdoor people as proof of what havoc a backdoor can lead to.

        But really it’s not. It’s an example of what havoc an insider can cause by having access to source code (discounting the possibility that Juniper’s source code archive servers got hacked from the outside, which ought to be a game ender for at least some people at Juniper if that happened, together with a mortal slide in their share price). It’s also an example of what can happen when a cybersecurity threat indicator and its corresponding defensive measure are carried on the front page of the New York Times. I didn’t go out and build my own Q generator after that, but I could have, and so could a lot of other people.
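[Ed.: the danger of a published Q value can be sketched in a few lines. This is a toy analogue using ordinary modular exponentiation, not the actual elliptic-curve math of Dual_EC_DRBG, and every constant below is illustrative rather than real.]

```python
# Toy analogue of a Dual_EC-style backdoored PRG, using a multiplicative
# group mod a small prime instead of the real elliptic curve. Anyone can
# see the public parameters (p, Q, P); only the party who chose d, where
# P = Q^d mod p, can turn one output into the generator's next state.
p = 2_147_483_647          # the Mersenne prime 2^31 - 1; far too small for real use
Q = 7                      # public "point"
d = 123_456_789            # the secret backdoor exponent
P = pow(Q, d, p)           # second public "point", secretly related to Q

def prg_step(state: int) -> tuple:
    """One PRG step: emit an output value and advance the internal state."""
    output = pow(Q, state, p)
    next_state = pow(P, state, p)   # equals output^d mod p, invisibly to users
    return next_state, output

# An honest user generates two successive outputs.
state = 42
state, out1 = prg_step(state)
state, out2 = prg_step(state)

# The backdoor holder recovers the hidden state from out1 alone...
recovered_state = pow(out1, d, p)
# ...and predicts out2 without ever seeing the generator's internals.
predicted_out2 = pow(Q, recovered_state, p)
assert predicted_out2 == out2
```

The structural point stands regardless of the toy arithmetic: whoever chose the relationship between the two public constants can predict all future output from a single observed output.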

        As per Orin Kerr: His article is a perfect example of what happens to a “pretty good middle” competent law argument that is informed by the 1 degree of separation experts on both sides but gets no other POVs. If I operate a content site, you know, something like Flickr or YouTube, I’m a private entity under this law and may implement monitoring for the purpose of discovering cybersecurity threats.

        But contrary to what your “experts” letter said in definitive no-grey allowed terms, the most common threats will come from steganography, file format manipulations, manipulation of compression schemes, manipulations of watermarks (steganographic alterations of the statistical moments of values in the content) all of which are in the content itself, and you know what? I would defy someone to disentangle encryption and compression if they have been mingled by the malicious operator. I’d not even place good bets they can tell which they’re looking at.

        In order to post a “cybersecurity threat indicator” in such an operating environment, you either have to post the content, in all its personal and unscrubbed glory, or you cannot post until you have the corresponding “defensive measure” already worked out, because you will need the measure to create a scrubbed version. And you have no choice on monitoring full content of accounts, either. And OBTW, for legal reasons that are bound up in all the precedents that Kerr mentions, such sites store copies of all content transiting their systems indefinitely, except for what they’re deliberately kicking off their site. And in order to find “defensive measures” they typically need to use innocent content as well as malicious content to create and test such measures, so at least their computers will have to look at personal content that isn’t malicious, isn’t wrong, and is something to which people have an expectation of privacy.

        See? Private entities on which communications happens aren’t all ECSP’s. In fact, the organization I worked for when I was monitoring the content and therefore came to understand some of these things, was expressly prohibited from being an ECSP by U.S. law. Didn’t mean people didn’t communicate on the site, however. And yes, what I did and what they still do would be covered by this law, and none of the experts or lawyers you or anyone else has cited seem to know that. I read some of what they said with some fear, because it seemed to indicate what we did was technically illegal somehow. So maybe that’s what Hennessey is talking about. But if she is, she has access to far wider sources than the NSA. My guess is someone just told her such private entities existed, since she gives no examples. But I’ve been one of those examples, so I know this isn’t just a law about hoovering data by the telecoms.

        • bmaz says:

          Yes folks, you knew it was tl;dr, but that was 1,084 words of mental masturbation from Haarmeyer.
          .
          “No NDA’s broken”!!! Hahaha, I love comedy.

          • bmaz says:

            Also, I made an egregious omission on our Christmas Eve post. I have sincere fear we have also lost Freepatriot, who was the natural antidote to trolls on this blog for so long. He is a voice and force here that has been incredibly valuable in so many ways, and for so long. I hope Freep is still with us, in person or spirit. I will raise a glass to Freep tonight wherever he is.

          • haarmeyer says:

            ““No NDA’s broken”!!! Hahaha, I love comedy.” Cool. I’ll assume I didn’t break any then.

            It’s really hard to take your criticism to heart when you’ve never said a single other thing about my comments. I haven’t seen you make one substantive disagreement with anything I’ve written at all, for all your “decent and helpful.” Just a bunch of ad hominems, insults, and “troll” and “it” remarks. If you find my comments tl;dr then dr them.

            • bmaz says:

              When you say something substantively helpful, I will respond accordingly. Until then, you are just a two bit piece of self inflated shit occupying public electrons.

        • emptywheel says:

          Thinking about what you said (and still thinking you’re mixing the encryption question with the CISA question). But how much of that stems from the propensity to call everything having to do with network/computer intrusion and/or Internet crime “cyber”?

          • haarmeyer says:

            A lot. There is a belief that everything “expert” can be delegated, and a belief that the essential facts necessary to govern any particular situation can be boiled down to a very few, so that most of the longwindedness of any piece of legislation can be spent doing legalistic or legislative attention to detail. The opposite of that can be seen, for instance, in the Tallinn Manual, where it’s the computer experts trying to solve a legal problem.

            As for mixing the “encryption question” with the “CISA question”, they’re inherently mixed at some levels. If you believe that the problems with the Juniper code are malicious attacks, what is it you need to be able to do to find them? The external experts de-compiled Juniper’s firmware code to describe them. So what would someone looking for an attack need to do to find such a thing if the medium of transport had not been a firmware release?

            And why have there now been two decades of research put into how to detect features (which in some frames is referred to as extracting metadata) of data which is compressed or encrypted? Compression seeks to render data into a recoverable stream which has maximal entropy in every bit — i.e. it attempts to make each bit of the data novel, and therefore to eliminate patterns. Encryption attempts to maximize the mixing in the data, in part to eliminate patterns if it’s being done as algorithmic asymmetric encryption. Content analysis essentially attempts to find patterns in the data.

            So if I’m trying to analyze content streaming through, or stored on, my site, I’m attempting to reverse some of what these techniques do, and how much work I need to do to do that governs how closely I have to look at the data.
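[Ed.: the point above, that compression drives out the very patterns content analysis looks for, can be illustrated with a short stdlib-only sketch comparing per-byte Shannon entropy before and after deflate compression. The sample text is arbitrary.]

```python
import math
import zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (8.0 is the maximum, a uniform byte stream)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Structured, patterned plaintext: a small alphabet, lots of repetition.
plain = b"".join(("record %d: value %d\n" % (i, i * i)).encode() for i in range(2000))
compressed = zlib.compress(plain, level=9)

print("plaintext:  %.2f bits/byte" % byte_entropy(plain))       # low: patterns abound
print("compressed: %.2f bits/byte" % byte_entropy(compressed))  # high: patterns squeezed out
```

Well-encrypted data sits near the same high-entropy ceiling, which is why a scanner that only measures randomness cannot readily tell a compressed stream from an encrypted one.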

  4. orionATL says:

    susan hennessey’s article in lawfare (cited above) is entitled:
    “The Problems CISA Solves: ECPA Reform in Disguise”

    a simple question – why would ecpa need to be reformed in disguise? if reforming ecpa was a major goal of cisa, why was that not made clear at the legislation stage?

    it’s not as if ecpa has never been “reformed” before –
    .
    from miss wiki:

    “The ECPA has been amended by the Communications Assistance for Law Enforcement Act (CALEA) of 1994, the USA PATRIOT Act (2001), the USA PATRIOT reauthorization acts (2006), and the FISA Amendments Act (2008).[1] ”
    .

    in fact, the “reform” seems to be to use federal preemption to get around inconveniently stringent court-mandated “individual consent” rules.

    if i read her right, cisa makes private spying (e.g., by google and facebook) easier by loosening the requirement that corporations obtain individual consent before accessing/keeping/using individual data.

    sweet.

    • orionATL says:

      whatever her intentions, hennessey’s article points an accusing finger at google and facebook, but more importantly from a political viewpoint at obama, mitchell, ryan, granny feinstein, and aaron burr’s relative.

      is this a fucked up legislative process we have, or what?

      under this interpretation, and adding in the u. s. c of c’s sudden interest in cisa (opposed in 2012),

      cisa is a perfect example of pres carter’s recent comment that the united states is now an oligarchy with unlimited political bribery.

      sweet.

    • orionATL says:

      “No. No they are catching the bad guys and girls so everything is OK.”

      exactly right. “catching the bad guys” is usually a smokescreen for getting more fed money and more unsupervised power.

  5. orionATL says:

    o. k., back to that first sneering sentence of ms. h’s article in lawfare:

    “Omni-CISA has passed. Privacy advocates are waxing outraged and pundits are tallying the winners and losers…. ”

    i understand her using that style of writing; it’s a typical journalistic ploy to cozy up to or manipulate an audience, in this case the serious, tough-minded nat sec liberals who frequent “lawfare”.

    not surprisingly, as i mentioned in #2, there is something missing from that opening statement of h’s –
    .

    missing is the elephant in the room, the 800 lb. gorilla, the anaconda in the swimming pool: i refer to ms. h’s former employer, the national security agency (mas o menos), and its fellow travelers, fbi, dea, treasury, et al.

    specifically, while waxing happy, happy about cisa protecting our privacy, hennessey avoids mentioning the fact that under cisa the nsa/fbi/dea et al. get to hold and keep the data from private “opt-in” corporations/institutions and to use it for whatever, for fuckin’ forever (correct me on these details) :)
    so the basic question for hennessey is how can you (other than in lawyerly disguise) assert that cisa does protect our privacy in the face of these facts and the assertions of multiple technically knowledgeable experts?

    i don’t think you can; and i think you know that very well.

    in my view, cisa sets up a federal gov protection racket.

    how does this protection racket work?

    this way:

    a corporation fears liability from its customers if it fails to protect their data.

    congress and the obama admin step in with their protection racket and say –

    “nice business you got here, buddy. pity if you got sued because some of your customers’ data got stolen by some of those rapacious criminals on the steppes of asia or eastward.

    we can protect you. all you have to do is claim you’ve been hacked and give our nsa, fbi, etc., a copy of your customers’ data.

    if you don’t cooperate, our enforcers, the courts, might fine you several mill, break your legs, or throw your corporate progeny in the factory overflow pond.

    your choice.”

    dept of homeland security acts as the a-1 cutoff in this protection racket.
