Posts

AT&T Says Its Voluntary Sharing of Customer Data Is Classified

Back in October, I wondered whether companies would be able to claim they had chosen not to participate in CISA’s voluntary data sharing in their transparency reports. While CISA prohibits the involuntary disclosure of such participation, I don’t know that anything prohibits the voluntary disclosure, particularly of non-participation.

A related question is playing out right now over a shareholder resolution filed by Arjuna Capital asking AT&T to reveal its voluntary sharing with law enforcement and intelligence agencies.

The resolution asks only for a report on sharing that is not legally mandated, and exempts any information that is legally protected.

Resolved, shareholders request that the Company issue a report, at reasonable
expense and excluding proprietary or legally protected information, clarifying the
Company’s policies regarding providing information to law enforcement and
intelligence agencies, domestically and internationally, above and beyond what is legally required by court order or other legally mandated process, whether and how
the policies have changed since 2013, and assessing risks to the Company’s finances
and operations arising from current and past policies and practices.

AT&T has asked the SEC for permission to ignore this resolution based, in part, on the claim that its voluntary cooperation would be a state secret that requires AT&T to effectively Glomar its own shareholders.

[Screenshot of AT&T's SEC filing]

The Sidley Austin opinion cites the Espionage Act for its claim that this information is a state secret. It also pretends this is all about the NSA, when the FBI and DEA play a critical role in some of the surveillance AT&T is believed to willingly participate in.

The resolution doesn’t seem to include any question specifically addressing OmniCISA participation, though it was written before final passage of OmniCISA last month.

The response from AT&T raises interesting questions about whether a telecom (or other electronic communications service provider) can be placed under a secrecy obligation for activity it engages in voluntarily.

Will James Clapper Be the First Known Victim of OmniCISA’s Regulatory Immunity?

According to Medium, Crackas With Attitude just hacked James Clapper and his wife.

One of the group’s hackers, who’s known as “Cracka,” contacted me on Monday, claiming to have broken into a series of accounts connected to Clapper, including his home telephone and internet, his personal email, and his wife’s Yahoo email. While in control of Clapper’s Verizon FiOS account, Cracka claimed to have changed the settings so that every call to his house number would get forwarded to the Free Palestine Movement.

[snip]

The hacker also sent me a list of call logs to Clapper’s home number. In the log, there was a number listed as belonging to Vonna Heaton, an executive at Ball Aerospace and a former senior executive at the National Geospatial-Intelligence Agency. When I called that number, the woman who picked up identified as Vonna Heaton. When I told her who I was, she declined to answer any questions.

Viscerally, I’m laughing my ass off that Verizon (among others) has shared Clapper’s metadata without his authority. “Not wittingly,” they might say if he asks them about that. But I recognize that it’s actually not a good thing for someone in such a sensitive position to have his metadata exposed (I mean, to the extent that it wasn’t already exposed in the OPM hack).

I would also find some amusement if Clapper ends up being the first public victim of OmniCISA’s regulatory immunity for corporations.

Yahoo and Verizon can self-report this cyber intrusion to DHS, and if they do then the government can't initiate regulatory action against them for inadequately protecting the Director of National Intelligence's data from hacking.

And whether or not Clapper is the first victim of OmniCISA’s regulatory immunity, he is among the first Americans that the passage of OmniCISA failed to protect from hacking.


Why Is Congress Undercutting PCLOB?

As I noted last month, the Omnibus budget bill undercut the Privacy and Civil Liberties Oversight Board in two ways.

First, it affirmatively limited PCLOB's ability to review covert actions. That effort dates to June, when Republicans responded to PCLOB Chair David Medine's public op-ed about drone oversight by ensuring PCLOB couldn't review the drone program or any other covert program.

More immediately troublesome, last-minute changes to OmniCISA eliminated a PCLOB review of the implementation of that new domestic cyber surveillance program, even though some form of that review had been included in all three bills that passed Congress. The elimination may have always been planned, but given that it appeared in no underlying version of the bill, it more likely dates to something that happened after CISA passed the Senate in October.

PCLOB just released its semi-annual report to Congress, which I wanted to consider in light of Congress’ efforts to rein in what already was a pretty tightly constrained mandate.

The report reveals several interesting details.

First, while the plan laid out in April had been to review one CIA and one NSA EO 12333 program, what happened instead is that PCLOB completed a review on two CIA EO 12333 programs, and in October turned towards one NSA EO 12333 program (the reporting period for this report extended from April 1 to September 30).

In July, the Board voted to approve two in-depth examinations of CIA activities conducted under E.O. 12333. Board staff has subsequently attended briefings and demonstrations, as well as obtained relevant documents, related to the examinations.

The Board also received a series of briefings from the NSA on its E.O. 12333 activities. Board staff held follow-up sessions with NSA personnel on the topics covered and on the agency’s E.O. 12333 implementing procedures. Just after the conclusion of the Reporting Period, the Board voted to approve one in-depth examination of an NSA activity conducted under E.O. 12333. Board staff are currently engaging with NSA staff to gather additional information and documents in support of this examination.

That's interesting for two reasons. First, it means there are two CIA EO 12333 programs that have a significant impact on US persons, which is pretty alarming since the CIA is not supposed to focus on Americans. It also means that PCLOB could have conducted this study of covert operations between the time Congress first moved to prohibit such review and the time that bill was signed into law. There's no evidence that's what happened, but the status report, while noting the Board had been prohibited from accessing information on covert actions, didn't seem all that concerned about it.

Section 305 is a narrow exception to the Board’s statutory right of access to information limited to a specific category of matters, covert actions.

Certainly, it seems like PCLOB got cooperation from CIA, which would have been unlikely if CIA knew it could stall any review until the Intelligence Authorization passed.

But unless PCLOB was excessively critical of CIA’s EO 12333 programs, that’s probably not why Congress eliminated its oversight role in OmniCISA.

Mind you, it's possible it was. Around the time the CIA review should have been wrapping up, and also in response to the San Bernardino attack, PCLOB commissioner Rachel Brand (the lone opponent of reviewing EO 12333 programs in any case) wrote an op-ed suggesting that public criticism and increased restrictions on intelligence agencies risked making the intelligence bureaucracy less effective (than it already is, I would add, though she didn't).

In response to the public outcry following the leaks, Congress enacted several provisions restricting intelligence programs. The president unilaterally imposed several more restrictions. Many of these may protect privacy. Some of them, if considered in isolation, might not seem a major imposition on intelligence gathering. But in fact none of them operate in isolation. Layering all of these restrictions on top of the myriad existing rules will at some point create an encrusted intelligence bureaucracy that is too slow, too cautious, and less effective. Some would say we have already reached that point. There is a fine line between enacting beneficial reforms and subjecting our intelligence agencies to death by a thousand cuts.

Still, that should have been separate from efforts focusing on cybersecurity.

There was, however, one thing PCLOB did this year that might more directly have led to Congress' elimination of what would have been a legislatively mandated role in cybersecurity-related privacy: its actions under EO 13636, one of the EOs that set up a framework that OmniCISA partly fulfills. Under the EO, DHS and other departments working on information sharing to protect critical infrastructure were required to produce a yearly report on how such sharing affected privacy and civil liberties.

The Chief Privacy Officer and the Officer for Civil Rights and Civil Liberties of the Department of Homeland Security (DHS) shall assess the privacy and civil liberties risks of the functions and programs undertaken by DHS as called for in this order and shall recommend to the Secretary ways to minimize or mitigate such risks, in a publicly available report, to be released within 1 year of the date of this order. Senior agency privacy and civil liberties officials for other agencies engaged in activities under this order shall conduct assessments of their agency activities and provide those assessments to DHS for consideration and inclusion in the report. The report shall be reviewed on an annual basis and revised as necessary. The report may contain a classified annex if necessary. Assessments shall include evaluation of activities against the Fair Information Practice Principles and other applicable privacy and civil liberties policies, principles, and frameworks. Agencies shall consider the assessments and recommendations of the report in implementing privacy and civil liberties protections for agency activities.

As PCLOB described in its report, “toward the end of the reporting period” (that is, around September), it was involved in interagency meetings discussing privacy.

The Board’s principal work on cybersecurity has centered on its role under E.O. 13636. The Order directs DHS to consult with the Board in developing a report assessing the privacy and civil liberties implications of cybersecurity information sharing and recommending ways to mitigate threats to privacy and civil liberties. At the beginning of the Reporting Period, DHS issued its second E.O. 13636 report. In response to the report, the Board wrote a letter to DHS commending DHS and the other reporting agencies for their early engagement, standardized report format, and improved reporting. Toward the end of the Reporting Period, the Board commenced its participation in its third annual consultation with DHS and other agencies reporting under the Order regarding privacy and civil liberties policies and practices through interagency meetings.

That would have come in the wake of the problems DHS identified, in a letter to Al Franken, with the current (and now codified into law) plan for information sharing under OmniCISA.

Since that time, Congress has moved first to let other agencies veto DHS' privacy scrubs under OmniCISA and then, in final execution, created an outright bypass of DHS in the final bill, without even allowing DHS the time it said it needed to set up the new sharing portal.

That is, it seems that the move to take PCLOB out of cybersecurity oversight accompanied increasingly urgent moves to take DHS out of privacy protection.

All this is just tea leaf reading, of course. But it sure seems that, in addition to the effort to ensure that PCLOB didn’t look too closely at CIA’s efforts to spy on — or drone kill — Americans, Congress has also decided to thwart PCLOB and DHS’ efforts to put some limits on how much cybersecurity efforts impinge on US person privacy.

Legal Analysis of OmniCISA Reinforces Cause for Concern

Among all the commentaries about CISA published before its passage, only one I know of (aside from my non-lawyer take here) dealt with what the bill did legally: this Jennifer Granick post explaining how OmniCISA will “stake out a category of ISP monitoring that the FCC and FTC can’t touch, regardless of its privacy impact on Americans,” thereby undercutting recent efforts to increase online privacy.

Since the bill passed into law, however, two lawyers have written really helpful detailed posts on what it does: Fourth Amendment scholar Orin Kerr and former NSA lawyer Susan Hennessey.

As Kerr explains, existing law had permitted Internet operators to surveil their own networks for narrowly tailored upkeep and intrusion purposes. OmniCISA broadened that to permit a provider to monitor (or have a third party monitor) both the network and traffic for a cybersecurity purpose.

[T]he right to monitor appears to extend to “cybersecurity purposes” generally, not just for the protection of the network operator’s own interests.  And relatedly, the right to monitor includes scanning and acquiring data that is merely transiting the system, which means that the network operator can monitor (or have someone else monitor) for cybersecurity purposes even if the operator isn’t worried about his own part of the network being the victim. Note the difference between this and the provider exception. The provider exception is about protecting the provider’s own network. If I’m reading the language here correctly, this is a broader legal privilege to monitor for cybersecurity threats.

It also permits such monitoring for insider threats.

[T]he Cyber Act may give network operators broad monitoring powers on their own networks to catch not only hackers but also insiders trying to take information from the network.

This accords with Hennessey’s take (and of course, having recently worked at NSA, she knows what they were trying to do). Importantly, she claims providers need to surveil content to take “responsible cybersecurity measures.”

Effective cybersecurity includes network monitoring, scanning, and deep-packet inspection—and yes, that includes contents of communications—in order to detect malicious activity.

In spite of the fact that Hennessey explicitly responded to Granick's post, and Granick linked a letter from security experts describing the limits of what is really necessary for monitoring networks, Hennessey doesn't engage on those terms to explain why corporations need to spy on their customers' content to take responsible cybersecurity measures. It may be as simple as needing to search the contents of packets for known hackers' signatures, or it may relate to surveilling IP theft, or it may extend to reading the content of emails; those are fairly different degrees of electronic surveillance, all of which might be permitted by this law. But credit Hennessey for making clear what CISA boosters in Congress tried so assiduously to hide: this is about warrantless surveillance of content.
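To illustrate the narrowest of those degrees of surveillance, matching packet payloads against known signatures, here is a minimal sketch. The signature names and byte patterns below are invented for illustration; real intrusion detection systems use large curated rule sets, but the basic content-matching operation looks like this.

```python
# A minimal sketch of signature-based payload scanning, the narrowest form
# of deep-packet inspection discussed above. The signatures here are
# hypothetical byte patterns, not real threat indicators.

KNOWN_SIGNATURES = {
    "example-backdoor": b"\xde\xad\xbe\xef",  # hypothetical malware marker
    "example-exploit": b"OVERFLOW-AAAA",      # hypothetical exploit string
}

def scan_payload(payload: bytes) -> list:
    """Return the names of any known signatures found in a packet payload."""
    return [name for name, sig in KNOWN_SIGNATURES.items() if sig in payload]

# A benign request matches nothing; a payload carrying a marker is flagged.
print(scan_payload(b"GET /index.html HTTP/1.1"))  # []
print(scan_payload(b"xx\xde\xad\xbe\xefyy"))      # ['example-backdoor']
```

Even this narrow technique requires reading the full contents of communications, which is why the distinction between signature matching and broader content review matters so much for the privacy analysis.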

Hennessey lays out why corporations need a new law to permit them to spy on their users’ content, suggesting they used to rely on user agreements to obtain permission, but pointing to several recent court decisions that found user agreements did not amount to implied consent for such monitoring.

If either party to a communication consents to its interception, there is no violation under ECPA, “unless such communication is intercepted for the purpose of committing any criminal or tortious act.” 18 USC 2511(2)(d). Consent may be express or implied but, in essence, authorized users must be made aware of and manifest agreement to the interception.

At first glance, obtaining effective consent from authorized users presents a simple and attractive avenue for companies and cyber security providers to conduct monitoring without violating ECPA. User agreements can incorporate notification that communications may be monitored for purposes of network security. However, the ambiguities of ECPA have resulted in real and perceived limitations on the ability to obtain legally-effective consent.

Rapidly evolving case law generates significant uncertainty regarding the scope of consent as it relates to electronic communications monitoring conducted by service providers. In Campbell v. Facebook, a court for the Northern District of California denied Facebook’s motion to dismiss charges under ECPA, rejecting the claim that Facebook had obtained user consent. Despite lengthy user agreements included in Facebook’s “Statement of Rights and Responsibilities” and “Data Use Policy,” the court determined that consent obtained “with respect to the processing and sending of messages does not necessarily constitute consent to … the scanning of message content for use in targeted advertising.” Likewise in ln re Google Inc. Gmail Litigation, the same district determined that Google did not obtain adequate consent for the scanning of emails, though in that case, Google’s conduct fell within the “ordinary course of business” definition and thus did not constitute interception for the purposes of ECPA.

Here, and in other instances, courts have determined that companies which are highly sophisticated actors in the field have failed to meet the bar for effective consent despite good faith efforts to comply.

Hennessey's focus on cases affecting Facebook and, especially, Google provides a pretty clear idea why those and other tech companies were pretending to oppose CISA without effectively doing so (Google's Eric Schmidt had said such a law was necessary, but he wasn't sure if this law was what was needed).

Hennessey goes on to extend these concerns to third party permission (that is, contractors who might monitor another company's network, which Kerr also noted). Perhaps most telling is her discussion of those who don't count as electronic communications service providers.

Importantly, a large number of private entities require network security monitoring but are not themselves electronic communication service providers. For those entities that do qualify as service providers, it is not unlawful to monitor communications while engaged in activity that is a “necessary incident to” the provision of service or in order to protect the “rights or property” of the provider. But this exception is narrowly construed. In general, it permits providers the right “to intercept and monitor [communications] placed over their facilities in order to combat fraud and theft of service.” U.S. v. Villanueva, 32 F. Supp. 2d 635, 639 (S.D.N.Y. 1998). In practice, the exception does not allow for unlimited or widespread monitoring nor does it, standing alone, expressly permit the provision of data collected under this authority to the government or third parties.

Note how she assumes non-ECSPs would need to conduct “unlimited” monitoring and sharing with the government and third parties. That goes far beyond her claims about “responsible cybersecurity measures,” without any discussion of how such unlimited monitoring protects privacy (which is her larger claim).

Curiously, Hennessey entirely ignores what Kerr examines (and finds less dangerous than tech companies’ statements indicated): counter–er, um, defensive measures, which tech companies had worried would damage their infrastructure. As I noted, Richard Burr went out of his way to prevent Congress from getting reporting on whether that happened, which suggests it’s a real concern. Hennessey also ignores something that totally undermines her claim this is about “responsible cybersecurity measures” — the regulatory immunity that guts the tools the federal government currently uses to require corporations to take such measures. She also doesn’t explain why OmniCISA couldn’t have been done with the same kind of protections envisioned for “domestic security” surveillance under Keith and FISA, which is clearly what CISA is: notably, court review (I have suggested it is likely that FISC refused to permit this kind of surveillance).

I am grateful for Hennessey’s candor in laying out the details that a functional democracy would have laid out before eliminating the warrant requirement for some kinds of domestic wiretapping.

But it's also worth noting that, even if you concede that corporations should be permitted such unfettered monitoring of their customers, and even if you assume that the related info-sharing is anywhere near the most urgent thing we can do to prevent network intrusions, OmniCISA does far more than what Hennessey lays out as necessary, much of which is designed to shield all this spying, and the corporations that take part in it, from real review.

Hennessey ends her post by suggesting those of us who are concerned about OmniCISA’s broad language are ignoring limitations within it.

Despite vague allegations from critics that “cybersecurity purpose” could be read to be all-encompassing, the various definitions and limitations within the act work to create a limited set of permissible activities.

But even if that were true, it’d be meaningless given a set-up that would subject this surveillance only to Inspectors General whose past very diligent efforts to fix abuses have failed. Not even Congress will get key information — such as how often this surveillance leads to a criminal investigation or how many times “defensive measures” break the Internet — it needs to enforce what few limitations there are in this scheme.

All of which is to say that people with far more expertise than I have are reviewing this law, and their reviews only serve to confirm my earlier concerns.