Tuesday Morning: Wow, You Survived Business Day 1

The post-holiday season debris field continues to thin out, making its way by the truckful to the landfill. I wonder how much oil the season’s plastic wrappings consumed.

Here’s what the trash man left behind this morning.

Hackers caused power outage — the first of its kind?
Marcy’s already posted about the electrical power disruption in Ukraine this past week, labeled by some as the first known hacker-caused outage. I find the location of this malware-based outage disturbing: it hit western Ukraine. Given the level of tensions with Russia along the eastern portion of the country, particularly near Donetsk over the past couple of years, an outage in the west seems counterintuitive if the hackers were motivated by the Russia-Ukraine conflict.

And hey, look, the hackers may have used backdoors! Hoocudanode hackers would use backdoors?!

Fortunately, one government is clued in: the Dutch grok the risks inherent in government-mandated backdoors and are willing to support better encryption.

‘Netflix and chill’ in a new Volvo
I’ve never been offered a compelling case for self-driving cars. Every excuse, like greater fuel efficiency and reduced traffic jams, only makes a stronger argument for more and better public transportation.

The latest excuse: watching streaming video while not-driving is Volvo’s rationalization for developing automotive artificial intelligence.

I’m not alone in my skepticism. I suspect Isaac Asimov is rolling in his grave.

US Govt sues pollution-cheater VW — while GOP Congress seeks bailout for VW
WHAT?! Is this nuts or what? A foreign car company deliberately broke U.S. laws, damaging the environment while lying to consumers and eating into U.S.-made automotive market share. The Environmental Protection Agency filed suit against Volkswagen for its use of illegal emissions defeat devices. The violation of consumers’ trust has yet to be addressed.

Thank goodness for the GOP-led House, which stands ready to offer a freaking bailout to a lying, cheating foreign carmaker which screwed the American public. Yeah, that’ll fix everything.

Remember conservatives whining about bailing out General Motors during 2008’s financial crisis? All of them really need a job working for VW.

Massive data breach affecting 191 million voters — and nobody wants to own up to the database problem
An infosec researcher disclosed last week that a database containing records on 191 million voters had been exposed. You probably heard about this already and shrugged, because data breaches happen almost daily now. No big deal, right?

Except that 191 million voters is more than the number of people who cast a vote in the 2012 or even the 2008 presidential election (roughly 129 million and 131 million ballots, respectively). Given its size, this database must represent more than a couple of election cycles of voter data, and nobody’s responding appropriately to the magnitude of the problem.

Nobody’s owning up to the database or the problem, either.

Here’s a novel idea: perhaps Congress, instead of bailing out lying, cheating foreign automakers, ought to spend its time investigating violations of the data of the very voters who put its members in office?

Any member of Congress not concerned about this breach should also avoid bitching about voter fraud, because hypocrisy. Ditto the DNC and the Hillary Clinton campaign.

Whew, there it is, another mark on the 2016 resolution checklist. Have you checked anything off your list yet? Fess up.


Power Imbalances in Ukraine

The western press is ginning up alarm because hackers caused a power outage in Ukraine.

Western Ukraine power company Prykarpattyaoblenergo reported an outage on Dec. 23, saying the area affected included regional capital Ivano-Frankivsk. Ukraine’s SBU state security service responded by blaming Russia and the energy ministry in Kiev set up a commission to investigate the matter.

While Prykarpattyaoblenergo was the only Ukraine electric firm that reported an outage, similar malware was found in the networks of at least two other utilities, said Robert Lipovsky, senior malware researcher at Bratislava-based security company ESET. He said they were ESET customers, but declined to name them or elaborate.

If you buy that this really is the first time hackers have brought down power (I don’t), it is somewhat alarming as a proof of concept. But in reality, that concept was proved by Stuxnet and the attack on a German steel mill at the end of 2014.

I’m more interested in the discrepancy in coverage between this and the physical sabotage of power lines going into Crimea in November.

A state of emergency was declared after four pylons that transmit power to Crimea were blown up on Friday and Saturday night. Russia’s energy ministry scrambled to restore electricity to cities using generators, but the majority of people on the peninsula remained powerless on Saturday night.

Cable and mobile internet stopped working, though there was still mobile phone coverage, and water supplies to high-rise buildings halted.

[snip]

On Saturday, the pylons were the scene of violent clashes between activists from the Right Sector nationalist movement and paramilitary police, Ukrainian media reported. Ukrainian nationalists have long been agitating for an energy blockade of Crimea to exert pressure on the former Ukrainian territory.

There was even less attention to a smaller attack just before the New Year. (h/t joanneleon, who alerted me to it)

Officials said concrete pylons supporting power lines near the village of Bohdanivka, in southern Ukraine’s Kherson region, were damaged on Wednesday night.

“According to preliminary conclusions of experts… the pylon was damaged in an explosion,” a statement from police said on Thursday.

[snip]

Crimean Tatar activist Lenur Islyamov suggested that strong winds might have brought down the pylon and denied that Tatar activists had been behind the latest power cut.

While the physical attack did get coverage, there seemed to be little concern about the implications of an attack aiming to undercut Russian control of the peninsula, whereas here the attack is treated as illegitimate and a purported new line in the sand.

I get why this is the case (though the press ought to rethink their bias in reporting it this way). After all, when our allies engage in sabotage we don’t consider it as such.

But the US is just as vulnerable to physical sabotage as cyber sabotage, as the apparently still unsolved April 16, 2013 attack on a PG&E substation in Silicon Valley demonstrated. And as the case of Crimea shows, physical sabotage can be more debilitating. We should really be cautious about what we treat as normatively acceptable.


Monday Morning: First, Same as the Last

Hear that sound? Like so many sighs of resignation? Yup, it’s the first Monday of the new year, and with it, a plethora of shiny resolutions slowly breached and broken like WiFi-enabled toys.

One of my 2016 resolutions (which I hope will last more than a week) is a morning update here at emptywheel. Won’t be hot-urgent-newsy, just stuff worth scanning while you have a cup of joe. Let’s see if I can stick it out five days — then I’ll try another benchmark.

Droning on
Did you get or give a drone as a gift this holiday season? Better make sure it’s registered with the Federal Aviation Administration.

Twitter to bring back Politwoops
Among the stupid moves Twitter made last year was the decision to shut out Sunlight Foundation’s Politwoops platform. The tool archived politicians’ embarrassing tweets even if the tweets had been deleted. With the 2016 election season now in full swing, voters need more accountability of candidates and elected officials, not less. Sunlight Foundation and the Open State Foundation negotiated with Twitter to restore the tool. Let’s hope it’s up and running well before the first caucuses — and let’s hope Twitter gets a grip on its business model, pronto.

You’d think by now Twitter would have figured out politicians’ tweeted gaffes are gasoline to their social media platform growth…

Microsoft spreads FUD about…Microsoft?
If you’re an oldster IT person like me, you recall the Halloween memo scandal of 1998, when leaked internal documents revealed Microsoft’s practice of promulgating fear, uncertainty, and doubt (FUD) about competing operating systems in order to gain and keep Windows market share. For more than a decade, Microsoft relied on FUD to ensure near-ubiquity of its Windows and Word software products. Now Microsoft is using FUD not to prevent customers from using other products, but to encourage migration from Windows 7 to Windows 10, citing the risk of state-sponsored attacks on Win 7 systems.

Personally, I think Microsoft has already been ridiculously ham-handed in its push for Win 10 upgrades before this latest FUD. If you are a Win 7 or Win 8 user, you’ve already seen attempts to migrate users embedded in recent security patches (read: crapware). I’ve had enough FUD for a lifetime — I’m already running open source operating systems Linux and Android on most of my devices. I would kill for an Android desktop or laptop (yoohoo, hint-hint, Android developers…).

And don’t even start with the “Buy Apple” routine. Given the large number of vulnerabilities, it’s only a matter of time before Mac OS and iOS attract the same level of attention from hackers as Windows. I’ll hold my AAPL stock as long as you insist on “Buy Apple,” however.

Consumer Electronics Show 2016 — now with biometric brassieres
CES 2016 opens this week in Las Vegas, and all I can think is: Are you fucking kidding me with this fresh Internet of Things stupidity? A biometric bra? What idiot dreamed this up?

Why not biometric jockstraps? I can only imagine the first response to biometric jockstraps: “No EMF radiation near my ‘nads!” Yeah, well, the same thing applies to breasts. Didn’t anybody get the memo last year that 217 scientists have expressed concerns about EMF’s potential impact on human health, based on 2,000+ peer-reviewed articles?

Or are businesses ignoring this science the same way petrochemical businesses have ignored climate change science?

Phew. There it is, the first checkmark of my 2016 resolutions. Happy first Monday to you. Did you make any New Year’s resolutions? Do tell.


Why Is Congress Undercutting PCLOB?

As I noted last month, the Omnibus budget bill undercut the Privacy and Civil Liberties Oversight Board in two ways.

First, it affirmatively limited PCLOB’s ability to review covert actions. That effort dates to June, when Republicans responded to PCLOB Chair David Medine’s public op-ed about drone oversight by ensuring PCLOB couldn’t review the drone program, or any other covert program.

More immediately troublesome, last-minute changes to OmniCISA eliminated a PCLOB review of the implementation of that new domestic cyber surveillance program, even though some form of that review had been included in all three bills that passed Congress. That elimination may have always been planned, but given that it wasn’t in any underlying version of the bill, it more likely dates to something that happened after CISA passed the Senate in October.

PCLOB just released its semi-annual report to Congress, which I wanted to consider in light of Congress’ efforts to rein in what already was a pretty tightly constrained mandate.

The report reveals several interesting details.

First, while the plan laid out in April had been to review one CIA and one NSA EO 12333 program, what happened instead is that PCLOB completed reviews of two CIA EO 12333 programs, and in October turned to one NSA EO 12333 program (the reporting period for this report extended from April 1 to September 30).

In July, the Board voted to approve two in-depth examinations of CIA activities conducted under E.O. 12333. Board staff has subsequently attended briefings and demonstrations, as well as obtained relevant documents, related to the examinations.

The Board also received a series of briefings from the NSA on its E.O. 12333 activities. Board staff held follow-up sessions with NSA personnel on the topics covered and on the agency’s E.O. 12333 implementing procedures. Just after the conclusion of the Reporting Period, the Board voted to approve one in-depth examination of an NSA activity conducted under E.O. 12333. Board staff are currently engaging with NSA staff to gather additional information and documents in support of this examination.

That’s interesting for two reasons. First, it means there are two CIA EO 12333 programs that have a significant impact on US persons, which is pretty alarming since the CIA is not supposed to focus on Americans. It also means that PCLOB could have conducted this study on covert operations between the time Congress first moved to prohibit such review and the time that bill was signed into law. There’s no evidence that’s what happened, but the status report, while noting PCLOB had been prohibited from accessing information on covert actions, didn’t seem all that concerned about it.

Section 305 is a narrow exception to the Board’s statutory right of access to information limited to a specific category of matters, covert actions.

Certainly, it seems like PCLOB got cooperation from CIA, which would have been unlikely if CIA knew it could stall any review until the Intelligence Authorization passed.

But unless PCLOB was excessively critical of CIA’s EO 12333 programs, that’s probably not why Congress eliminated its oversight role in OmniCISA.

Mind you, it’s possible it was. Around the time the CIA review should have been wrapping up (though also in response to the San Bernardino attack), PCLOB commissioner Rachel Brand, who was the lone opponent of reviewing EO 12333 programs in any case, wrote an op-ed suggesting public criticism and increased restrictions on intelligence agencies risked making the intelligence bureaucracy less effective (than it already is, I would add, but she didn’t).

In response to the public outcry following the leaks, Congress enacted several provisions restricting intelligence programs. The president unilaterally imposed several more restrictions. Many of these may protect privacy. Some of them, if considered in isolation, might not seem a major imposition on intelligence gathering. But in fact none of them operate in isolation. Layering all of these restrictions on top of the myriad existing rules will at some point create an encrusted intelligence bureaucracy that is too slow, too cautious, and less effective. Some would say we have already reached that point. There is a fine line between enacting beneficial reforms and subjecting our intelligence agencies to death by a thousand cuts.

Still, that should have been separate from efforts focusing on cybersecurity.

There was, however, one thing PCLOB did this year that might more directly have led to Congress’ elimination of what would have been a legislatively mandated role in cybersecurity-related privacy: its actions under EO 13636, one of the EOs that set up a framework that OmniCISA partly fulfills. Under the EO, DHS and other departments working on information sharing to protect critical infrastructure were required to produce a yearly report on how such sharing affected privacy and civil liberties.

The Chief Privacy Officer and the Officer for Civil Rights and Civil Liberties of the Department of Homeland Security (DHS) shall assess the privacy and civil liberties risks of the functions and programs undertaken by DHS as called for in this order and shall recommend to the Secretary ways to minimize or mitigate such risks, in a publicly available report, to be released within 1 year of the date of this order. Senior agency privacy and civil liberties officials for other agencies engaged in activities under this order shall conduct assessments of their agency activities and provide those assessments to DHS for consideration and inclusion in the report. The report shall be reviewed on an annual basis and revised as necessary. The report may contain a classified annex if necessary. Assessments shall include evaluation of activities against the Fair Information Practice Principles and other applicable privacy and civil liberties policies, principles, and frameworks. Agencies shall consider the assessments and recommendations of the report in implementing privacy and civil liberties protections for agency activities.

As PCLOB described in its report, “toward the end of the reporting period” (that is, around September), it was involved in interagency meetings discussing privacy.

The Board’s principal work on cybersecurity has centered on its role under E.O. 13636. The Order directs DHS to consult with the Board in developing a report assessing the privacy and civil liberties implications of cybersecurity information sharing and recommending ways to mitigate threats to privacy and civil liberties. At the beginning of the Reporting Period, DHS issued its second E.O. 13636 report. In response to the report, the Board wrote a letter to DHS commending DHS and the other reporting agencies for their early engagement, standardized report format, and improved reporting. Toward the end of the Reporting Period, the Board commenced its participation in its third annual consultation with DHS and other agencies reporting under the Order regarding privacy and civil liberties policies and practices through interagency meetings.

That would have come in the wake of the problems DHS identified, in a letter to Al Franken, with the current (and now codified into law) plan for information sharing under OmniCISA.

Since that time, Congress has moved first to let other agencies veto DHS’ privacy scrubs under OmniCISA and then, in final execution, provided in the final bill a way to bypass DHS entirely, before even allowing DHS as much time as it said it needed to set up the new sharing portal.

That is, it seems that the move to take PCLOB out of cybersecurity oversight accompanied increasingly urgent moves to take DHS out of privacy protection.

All this is just tea leaf reading, of course. But it sure seems that, in addition to the effort to ensure that PCLOB didn’t look too closely at CIA’s efforts to spy on — or drone kill — Americans, Congress has also decided to thwart PCLOB and DHS’ efforts to put some limits on how much cybersecurity efforts impinge on US person privacy.


Legal Analysis of OmniCISA Reinforces Cause for Concern

Among all the commentaries about CISA published before its passage, only one I know of (aside from my non-lawyer take here) dealt with what the bill did legally: this Jennifer Granick post explaining how OmniCISA will “stake out a category of ISP monitoring that the FCC and FTC can’t touch, regardless of its privacy impact on Americans,” thereby undercutting recent efforts to increase online privacy.

Since the bill passed into law, however, two lawyers have written really helpful detailed posts on what it does: Fourth Amendment scholar Orin Kerr and former NSA lawyer Susan Hennessey.

As Kerr explains, existing law had permitted Internet operators to surveil their own networks for narrowly tailored upkeep and intrusion-protection purposes. OmniCISA broadened that to permit a provider to monitor (or have a third party monitor) both the network and its traffic for a cybersecurity purpose.

[T]he right to monitor appears to extend to “cybersecurity purposes” generally, not just for the protection of the network operator’s own interests.  And relatedly, the right to monitor includes scanning and acquiring data that is merely transiting the system, which means that the network operator can monitor (or have someone else monitor) for cybersecurity purposes even if the operator isn’t worried about his own part of the network being the victim. Note the difference between this and the provider exception. The provider exception is about protecting the provider’s own network. If I’m reading the language here correctly, this is a broader legal privilege to monitor for cybersecurity threats.

It also permits such monitoring for insider threats.

[T]he Cyber Act may give network operators broad monitoring powers on their own networks to catch not only hackers but also insiders trying to take information from the network.

This accords with Hennessey’s take (and of course, having recently worked at NSA, she knows what they were trying to do). Importantly, she claims providers need to surveil content to take “responsible cybersecurity measures.”

Effective cybersecurity includes network monitoring, scanning, and deep-packet inspection—and yes, that includes contents of communications—in order to detect malicious activity.

In spite of the fact that Hennessey explicitly responded to Granick’s post, and Granick linked a letter from security experts describing the limits of what is really necessary for monitoring networks, Hennessey doesn’t engage on those terms to explain why corporations need to spy on their customers’ content to take responsible cybersecurity measures. It may be as simple as needing to search the contents of packets for known hackers’ signatures, or it may relate to surveilling IP theft, or it may extend to reading the content of emails; those are fairly different degrees of electronic surveillance, all of which might be permitted by this law. But credit Hennessey for making clear what CISA boosters in Congress tried so assiduously to hide: this is about warrantless surveillance of content.
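To make the least invasive end of that spectrum concrete, here is a minimal sketch of signature-based payload matching, a toy version of the deep-packet inspection Hennessey describes. The signatures and sample payload below are hypothetical placeholders, not real threat indicators.

```python
# Minimal sketch: matching packet payloads against known signatures.
# Even this simplest form of "cybersecurity monitoring" requires reading
# the content of the communication, which is the privacy-relevant point.

KNOWN_SIGNATURES = {
    b"\x4d\x5a\x90\x00": "suspicious executable header",       # hypothetical rule
    b"cmd.exe /c powershell": "possible command injection",    # hypothetical rule
}

def scan_payload(payload: bytes) -> list[str]:
    """Return labels of any known signatures found in a packet payload."""
    return [label for sig, label in KNOWN_SIGNATURES.items() if sig in payload]

if __name__ == "__main__":
    sample = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\ncmd.exe /c powershell -enc ..."
    for hit in scan_payload(sample):
        print("ALERT:", hit)
```

Real deep-packet inspection tools do this at line rate with far more sophisticated rule sets, but the structure is the same: the payload, that is, the content, is what gets read.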

Hennessey lays out why corporations need a new law to permit them to spy on their users’ content, suggesting they used to rely on user agreements to obtain permission, but pointing to several recent court decisions that found user agreements did not amount to implied consent for such monitoring.

If either party to a communication consents to its interception, there is no violation under ECPA, “unless such communication is intercepted for the purpose of committing any criminal or tortious act.” 18 USC 2511(2)(d). Consent may be express or implied but, in essence, authorized users must be made aware of and manifest agreement to the interception.

At first glance, obtaining effective consent from authorized users presents a simple and attractive avenue for companies and cyber security providers to conduct monitoring without violating ECPA. User agreements can incorporate notification that communications may be monitored for purposes of network security. However, the ambiguities of ECPA have resulted in real and perceived limitations on the ability to obtain legally-effective consent.

Rapidly evolving case law generates significant uncertainty regarding the scope of consent as it relates to electronic communications monitoring conducted by service providers. In Campbell v. Facebook, a court for the Northern District of California denied Facebook’s motion to dismiss charges under ECPA, rejecting the claim that Facebook had obtained user consent. Despite lengthy user agreements included in Facebook’s “Statement of Rights and Responsibilities” and “Data Use Policy,” the court determined that consent obtained “with respect to the processing and sending of messages does not necessarily constitute consent to … the scanning of message content for use in targeted advertising.” Likewise in In re Google Inc. Gmail Litigation, the same district determined that Google did not obtain adequate consent for the scanning of emails, though in that case, Google’s conduct fell within the “ordinary course of business” definition and thus did not constitute interception for the purposes of ECPA.

Here, and in other instances, courts have determined that companies which are highly sophisticated actors in the field have failed to meet the bar for effective consent despite good faith efforts to comply.

Hennessey’s focus on cases affecting Facebook and, especially, Google provides a pretty clear idea why those and other tech companies were pretending to oppose CISA without effectively doing so (Google’s Eric Schmidt had said such a law was necessary, but he wasn’t sure if this law was what was needed).

Hennessey goes on to extend these concerns to third-party permission (that is, contractors who might monitor another company’s network, which Kerr also noted). Perhaps most telling is her discussion of those who don’t count as electronic communications service providers.

Importantly, a large number of private entities require network security monitoring but are not themselves electronic communication service providers. For those entities that do qualify as service providers, it is not unlawful to monitor communications while engaged in activity that is a “necessary incident to” the provision of service or in order to protect the “rights or property” of the provider. But this exception is narrowly construed. In general, it permits providers the right “to intercept and monitor [communications] placed over their facilities in order to combat fraud and theft of service.” U.S. v. Villanueva, 32 F. Supp. 2d 635, 639 (S.D.N.Y. 1998). In practice, the exception does not allow for unlimited or widespread monitoring nor does it, standing alone, expressly permit the provision of data collected under this authority to the government or third parties.

Note how she assumes non-ECSPs would need to conduct “unlimited” monitoring and sharing with the government and third parties. That goes far beyond her claims about “responsible cybersecurity measures,” without any discussion of how such unlimited monitoring protects privacy (which is her larger claim).

Curiously, Hennessey entirely ignores what Kerr examines (and finds less dangerous than tech companies’ statements indicated): counter–er, um, defensive measures, which tech companies had worried would damage their infrastructure. As I noted, Richard Burr went out of his way to prevent Congress from getting reporting on whether that happened, which suggests it’s a real concern. Hennessey also ignores something that totally undermines her claim this is about “responsible cybersecurity measures”: the regulatory immunity that guts the tools the federal government currently uses to require corporations to take such measures. She also doesn’t explain why OmniCISA, which is clearly the kind of “domestic security” surveillance contemplated under Keith and FISA, couldn’t have been done with the same kind of protections envisioned there, notably court review (I have suggested it is likely that FISC refused to permit this kind of surveillance).

I am grateful for Hennessey’s candor in laying out the details that a functional democracy would have laid out before eliminating the warrant requirement for some kinds of domestic wiretapping.

But it’s also worth noting that, even if you concede that corporations should be permitted such unfettered monitoring of their customers, and even if you assume that the related info-sharing is anywhere near the most urgent thing we can do to prevent network intrusions, OmniCISA does far more than what Hennessey lays out as necessary, and much of that excess is designed to shield all this spying, and the corporations that take part in it, from real review.

Hennessey ends her post by suggesting those of us who are concerned about OmniCISA’s broad language are ignoring limitations within it.

Despite vague allegations from critics that “cybersecurity purpose” could be read to be all-encompassing, the various definitions and limitations within the act work to create a limited set of permissible activities.

But even if that were true, it’d be meaningless given a set-up that subjects this surveillance only to Inspectors General, whose past very diligent efforts to fix abuses have failed. Not even Congress will get the key information it needs to enforce what few limitations there are in this scheme, such as how often this surveillance leads to a criminal investigation or how many times “defensive measures” break the Internet.

All of which is to say that people with far more expertise than I have are reviewing this law, and their reviews only serve to confirm my earlier concerns.


The Heroic IRS Agent Story Should Raise More Questions about Silk Road Investigation

“In these technical investigations, people think they are too good to do the stupid old-school stuff. But I’m like, ‘Well, that stuff still works.’ ”

The NYT got this and many other direct quotes from IRS agent Gary Alford for a complimentary profile of him that ran on Christmas day. According to the story, Alford IDed Ross Ulbricht as a possible suspect for the Dread Pirate Roberts — the operator of the Dark Web site Silk Road — in early June 2013, but it took until September for Alford to get the prosecutor and DEA and FBI Agents working the case to listen to him. The profile claims Alford’s tip was “crucial,” though a typo suggests NYT editors couldn’t decide whether it was the crucial tip or just crucial.

In his case, though, the information he had was the crucial [sic] to solving one of the most vexing criminal cases of the last few years.

On its face, the story (and Alford’s quote) suggests the FBI is so entranced with its hacking ability that it has neglected very, very basic investigative approaches like Google searches. Indeed, if the story is true, it serves as proof that encryption and anonymity don’t thwart FBI investigations as much as Jim Comey would like us to believe when he argues the Bureau needs to back door all our communications.

But I don’t think the story tells the complete truth about the Silk Road investigation. I say that, first of all, because of the timing of Alford’s efforts to get others to further investigate Ulbricht. As noted, the story describes Alford IDing Ulbricht as a potential suspect in early June 2013, after which he put Ulbricht’s name in a DEA database of potential suspects, which presumably should have alerted everyone else on the team that US citizen Ross Ulbricht was a person of interest in the investigation.

Mr. Alford’s preferred tool was Google. He used the advanced search option to look for material posted within specific date ranges. That brought him, during the last weekend of May 2013, to a chat room posting made just before Silk Road had gone online, in early 2011, by someone with the screen name “altoid.”

“Has anyone seen Silk Road yet?” altoid asked. “It’s kind of like an anonymous Amazon.com.”

The early date of the posting suggested that altoid might have inside knowledge about Silk Road.

During the first weekend of June 2013, Mr. Alford went through everything altoid had written, the online equivalent of sifting through trash cans near the scene of a crime. Mr. Alford eventually turned up a message that altoid had apparently deleted — but that had been preserved in the response of another user.

In that post, altoid asked for some programming help and gave his email address: rossulbricht@gmail.com. Doing a Google search for Ross Ulbricht, Mr. Alford found a young man from Texas who, just like Dread Pirate Roberts, admired the free-market economist Ludwig von Mises and the libertarian politician Ron Paul — the first of many striking parallels Mr. Alford discovered that weekend.

When Mr. Alford took his findings to his supervisors and failed to generate any interest, he initially assumed that other agents had already found Mr. Ulbricht and ruled him out.

But he continued accumulating evidence, which emboldened Mr. Alford to put Mr. Ulbricht’s name on the D.E.A. database of potential suspects, next to the aliases altoid and Dread Pirate Roberts.

At the same time, though, Mr. Alford realized that he was not being told by the prosecutors about other significant developments in the case — a reminder, to Mr. Alford, of the lower status that the I.R.S. had in the eyes of other agencies. And when Mr. Alford tried to get more resources to track down Mr. Ulbricht, he wasn’t able to get the surveillance and the subpoenas he wanted.

Alford went to the FBI and DOJ with Ulbricht’s ID in June 2013, but FBI and DOJ refused to issue even subpoenas, much less surveil Ulbricht.
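(An aside for the curious: the kind of date-bounded Google search Alford relied on can be scripted. This is a minimal sketch assuming Google’s custom date range URL parameters, tbs=cdr with cd_min and cd_max, still behave as they did at the time; the query itself is hypothetical.)

```python
from urllib.parse import urlencode

def google_daterange_url(query: str, start: str, end: str) -> str:
    """Build a Google search URL restricted to a date range (MM/DD/YYYY)."""
    params = {
        "q": query,
        # cdr:1 turns on the custom date range; cd_min/cd_max bound it.
        "tbs": f"cdr:1,cd_min:{start},cd_max:{end}",
    }
    return "https://www.google.com/search?" + urlencode(params)

# Hypothetical example: look for mentions of Silk Road from just after it launched.
print(google_daterange_url('"silk road" anonymous marketplace', "01/01/2011", "03/31/2011"))
```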

But over the subsequent months, Alford continued to investigate. In “early September” he had a colleague do another search on Ulbricht, which revealed he had been interviewed by Homeland Security in July 2013 for obtaining fake IDs.

In early September, he asked a colleague to run another background check on Mr. Ulbricht, in case he had missed something.

The colleague typed in the name and immediately looked up from her computer: “Hey, there is a case on this guy from July.”

Agents with Homeland Security had seized a package with nine fake IDs at the Canadian border, addressed to Mr. Ulbricht’s apartment in San Francisco. When the agents visited the apartment in mid-July, Mr. Ulbricht answered the door, and the agents identified him as the face on the IDs, without having any idea of his potential links to Silk Road.

When Alford told prosecutor Serrin Turner of the connection (again, this is September 2013), the AUSA finally did his own search in yet another database, the story claims, only to discover Ulbricht lived in the immediate vicinity of where Dread Pirate Roberts was accessing Silk Road. And that led the Feds to bust Ulbricht.

I find the story — the claim that without Alford’s Google searches, FBI did not and would not have IDed Ulbricht — suspect for two reasons.

First, early June is the date that FBI Agent Christopher Tarbell’s declaration showed (but did not claim) FBI first hacked Silk Road. That early June date was itself suspect because Tarbell’s declaration really showed data from as early as February 2013 (which is, incidentally, when Alford was first assigned to the team). In other words, while it still seems likely FBI was always lying about when it hacked into Silk Road, the coincidence between when Alford says he went to DOJ and the FBI with Ulbricht’s ID and when the evidence they were willing to share with the defense claimed to have first gotten a lead on Silk Road is of interest. All the more so given that the FBI claimed it could legally hack the server because it did not yet know the server was run by an American, and so it treated the Iceland-based server as a foreigner for surveillance purposes.

One thing that means is that DOJ may not have wanted to file paperwork to surveil Ulbricht because admitting they had probable cause to suspect an American was running Silk Road would make their hack illegal (and/or would have required FBI to start treating Ulbricht as the primary target of the investigation; it seems FBI may have been trying to do something else with this investigation). By delaying the time when DOJ took notice of the fact that Silk Road was run by an American, they could continue to squat on Silk Road without explaining to a judge what they were doing there.

The other reason I find this so interesting is that several of the actions to which corrupt DEA agent Carl Force pled guilty — selling fake IDs and providing inside information — took place between June and September 2013, during the precise period when everyone was ignoring Alford’s evidence and the fact that he had entered Ulbricht’s name as a possible alias for the Dread Pirate Roberts into a DEA database. Of particular note, Force’s guilty plea only admitted to selling the fake IDs for 400 bitcoin, and provided comparatively few details about that action, but the original complaint against Force explained he had sold the IDs for 800 bitcoin but refunded Ulbricht 400 bitcoin because “the deal for the fraudulent identification documents allegedly fell through” [emphasis mine].

Were those fake IDs that Force sold Ulbricht the ones seized by Homeland Security and investigated in July 2013? Did the complaint say the deal “allegedly” fell through because it didn’t so much fall through as get thwarted? Did something — perhaps actions by Force — prevent other team members from tying that seizure to Ulbricht? Or did everyone know about it, but pretend not to, until Alford made them pay attention (perhaps with a communications trail that other Feds couldn’t suppress)? Was the ID sale part of the investigation, meant to ID Ulbricht’s identity and location, but Force covered it up?

In other words, given the record of Force’s actions, it seems more likely that at least some people on the investigative team already knew what Alford found in a Google search, but for both investigative (the illegal hack that FBI might have wanted to extend for other investigative reasons) and criminal (the money Force was making) reasons, no one wanted to admit that fact.

Now, I’m not questioning the truth of what Alford told the NYT. But even his story (which is corroborated by people “briefed on the investigation,” but only one person who actually attended any of the investigation’s meetings; most of those people are silent about Alford’s claims) suggests there may be other explanations for why no one acted on his tip, particularly given that he appears to have been unable to do database searches himself and that the others refused to do further investigation into Ulbricht. (I also wonder whether Alford’s role explains why the government had the IRS in San Francisco investigate Force and corrupt Secret Service Agent Shaun Bridges, rather than New York, where agents would have known these details.)

Indeed, I actually think this complimentary profile might have been a way for Alford to expose further cover-ups in the Silk Road investigation without seeming to do so for any but self-interested reasons. Bridges was sentenced on December 7. Ulbricht was originally supposed to have submitted his opening appellate brief — focusing on Fourth Amendment issues that may be implicated by these details — on December 11, but on December 2, the court extended that deadline until January 12.

I don’t know whether Ulbricht’s defense learned these details. I’m admittedly not familiar enough with the public record to know, though given the emphasis on Tarbell’s declaration as the explanation for how they discovered Ulbricht, and the NYT’s assertion that Alford’s role and the delay were “largely left out of the documents and proceedings that led to Mr. Ulbricht’s conviction and life sentence this year,” I don’t think it is public. But if the defense didn’t learn them, then the fact that the investigative team went out of its way to avoid confirming Ulbricht’s readily accessible identity until at least three and probably seven months after it started hacking Silk Road, even while key team members were stealing money from the investigation, might provide important new details about the government’s actions.

And if Alford gets delayed credit for doing simple Google searches as a result, all the better!


If a Close US Ally Backdoored Juniper, Would NSA Tell Congress?

You may have heard that Juniper Networks announced what amounts to a backdoor in its virtual private networks products. Here’s Kim Zetter’s accessible intro of what security researchers have learned so far. And here’s some technical background from Matthew Green.

As Zetter summarizes, the short story is that someone used weaknesses allegedly encouraged by the NSA to backdoor the security product protecting a lot of American businesses.

They did this by exploiting weaknesses the NSA allegedly placed in a government-approved encryption algorithm known as Dual_EC, a pseudo-random number generator that Juniper uses to encrypt traffic passing through the VPN in its NetScreen firewalls. But in addition to these inherent weaknesses, the attackers also relied on a mistake Juniper apparently made in configuring the VPN encryption scheme in its NetScreen devices, according to Weinmann and other cryptographers who examined the issue. This made it possible for the culprits to pull off their attack.
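For readers who want the gist of why those Dual_EC weaknesses matter, here is a simplified sketch of the generator’s structure, following the public Shumow–Ferguson analysis from 2007; the notation is mine, and this compresses away the curve details.

```latex
% Simplified Dual_EC round structure. P and Q are fixed public points on an
% elliptic curve, x(.) takes a point's x-coordinate, and s_i is the secret state.
\begin{align*}
  s_{i+1} &= x(s_i \cdot P) && \text{(state update)} \\
  r_i     &= x(s_i \cdot Q) && \text{(output, truncated by 16 bits)}
\end{align*}
% If whoever chose the constants knows a scalar e with P = e * Q, then from one
% nearly-full output r_i they can enumerate the handful of candidate points R
% with x(R) consistent with r_i and compute
%   x(e * R) = x(e * s_i * Q) = x(s_i * P) = s_{i+1},
% recovering the generator's entire future state from about 30 bytes of output.
```

In other words, anyone who knows (or substitutes) the right constant can predict every “random” number the VPN will ever produce, which is what makes both the original design choice and the attackers’ reported swap of the Q constant so consequential.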

As Green describes, the key events probably happened at least as early as 2007 and 2012 (contrary to the presumption of surveillance hawk Stewart Baker looking to scapegoat those calling for more security). Which means this can’t be a response to the Snowden document strongly suggesting the NSA had pushed those weaknesses in Dual_EC.

I find that particularly interesting, because it suggests whoever did this either used public discussions about the weakness of Dual_EC, dating to 2007, to identify and exploit this weakness, or figured out what (it is presumed) the NSA was up to. That suggests two likely culprits for what has been assumed to be a state actor behind this: Israel (because it knows so much about NSA from having partnered on things like Stuxnet) or Russia (which was getting records on the Five Eyes’ SIGINT activities from its Canadian spy, Jeffrey Delisle). The UK would be another obvious guess, except an Intercept article describing how NSA helped the UK backdoor Juniper suggests they used another method.

Which leads me back to an interesting change I noted between CISA — the bill passed by the Senate back in October — and OmniCISA — the version passed last week as part of the omnibus funding bill. OmniCISA still required the Intelligence Community to provide a report on the most dangerous hacking threats, especially state actors, to the Intelligence Committees. But it eliminated a report for the Foreign Relations Committees on the same topic. I joked at the time that that was probably to protect Israel, because no one wants to admit that Israel spies and has greater ability to do so by hacking than other nation-states, especially because it surely learns our methods by partnering with us to hack Iran.

Whoever hacked Juniper, the whole incident offers a remarkable lesson in the dangers of backdoors. Even as FBI demands a backdoor into Apple’s products, it is investigating who used a prior US-sponsored backdoor to do their own spying.


So-Called Oversight in OmniCISA

I did a working thread of the surveillance portion of the version of CISA in the omnibus funding bill here. The short version: it is worse even than CISA was on most counts, although there are a few changes — such as swapping “person” in all the privacy guidelines to “individual” that will have interesting repercussions for non-biological persons.

As I said in that post, I’m going to do a closer look at the privacy provisions that didn’t get stripped from the bill; the biggest change, though, is the elimination of a broad biennial review by the Privacy and Civil Liberties Oversight Board, replaced with a very narrow assessment, by the Comptroller General (?!), of whether the privacy scrub is working. Along with the prohibition on PCLOB accessing information from covert ops that got pulled in as part of the Intelligence Authorization incorporated into the bill, it’s clear the Omnibus as a whole aims to undercut PCLOB.

So here’s what counts as “oversight” in OmniCISA. Note, the “appropriate Federal agencies” are the agencies that automatically get information under the sharing system:

(A) The Department of Commerce

(B) The Department of Defense

(C) The Department of Energy

(D) The Department of Homeland Security

(E) The Department of Justice

(F) The Department of the Treasury

(G) The Office of the Director of National Intelligence

Report on Implementation

Timing: less than one year after passage

Completed by: heads of appropriate Federal agencies

This is basically a report on whether the information sharing bureaucracy is working to share information effectively. It totally blows off privacy questions and doesn’t require an independent assessment. This report includes:

(A) An evaluation of the effectiveness of real-time information sharing through the capability and process developed under section 105(c), including any impediments to such real-time sharing.

(B) An assessment of whether cyber threat indicators or defensive measures have been properly classified and an accounting of the number of security clearances authorized by the Federal Government for the purpose of sharing cyber threat indicators or defensive measures with the private sector.

(C) The number of cyber threat indicators or defensive measures received through the capability and process developed under section 105(c).

(D) A list of Federal entities that have received cyber threat indicators or defensive measures under this title.

Biennial Report on Compliance

Timing: At least every two years

Completed by: Inspectors General of appropriate Federal agencies, plus Intelligence Community and Council of Inspectors General on Financial Oversight

This report assesses both the same efficacy questions reviewed within a year and privacy protections. But it swaps out a general requirement that the IGs assess “[t]he degree to which such information may affect the privacy and civil liberties of specific persons” for (D)(ii), below, which is tied to whether information is “related to a cybersecurity threat.” Since everything collected would be “related to” (collected because of some technical connection to) a cyberthreat, the swap basically forecloses review of whether a too-broad interpretation of “related to” undercuts privacy.

It includes:

(A) An assessment of the sufficiency of the policies, procedures, and guidelines relating to the sharing of cyber threat indicators within the Federal Government, including those policies, procedures, and guidelines relating to the removal of information not directly related to a cybersecurity threat that is personal information of a specific individual or information that identifies a specific individual.

(B) An assessment of whether cyber threat indicators or defensive measures have been properly classified and an accounting of the number of security clearances authorized by the Federal Government for the purpose of sharing cyber threat indicators or defensive measures with the private sector.

(C) A review of the actions taken by the Federal Government based on cyber threat indicators or defensive measures shared with the Federal Government under this title, including a review of the following:

(i) The appropriateness of subsequent uses and disseminations of cyber threat indicators or defensive measures.

(ii) Whether cyber threat indicators or defensive measures were shared in a timely and adequate manner with appropriate entities, or, if appropriate, were made publicly available.

(D) An assessment of the cyber threat indicators or defensive measures shared with the appropriate Federal entities under this title, including the following:

(i) The number of cyber threat indicators or defensive measures shared through the capability and process developed under section 105(c).

(ii) An assessment of any information not directly related to a cybersecurity threat that is personal information of a specific individual or information  identifying a specific individual and was shared by a non-Federal government entity with the Federal government in contravention of this title, or was shared within the Federal Government in contravention of the guidelines required by this title, including a description of any significant violation of this title.

(iii) The number of times, according to the Attorney General, that information shared under this title was used by a Federal entity to prosecute an offense listed in section 105(d)(5)(A).

(iv) A quantitative and qualitative assessment of the effect of the sharing of cyber threat indicators or defensive measures with the Federal Government on privacy and civil liberties of specific individuals, including the number of notices that were issued with respect to a failure to remove information not directly related to a cybersecurity threat that was personal information of a specific individual or information that identified a specific individual in accordance with the procedures required by section 105(b)(3)(E).

(v) The adequacy of any steps taken by the Federal Government to reduce any adverse effect from activities carried out under this title on the privacy and civil liberties of United States persons.

(E) An assessment of the sharing of cyber threat indicators or defensive measures among Federal entities to identify inappropriate barriers to sharing information.

Independent Report on Removal of Personal Information

Timing: Not later than 3 years after passage

Completed by: Comptroller General

This review will measure “the actions taken by the Federal Government to remove personal information from cyber threat indicators or defensive measures pursuant to this title,” assessing whether the policies and procedures established by the bill are sufficient to address concerns about privacy and civil liberties.


Working Thread, Cybersecurity Act

As I’ve been reporting, Paul Ryan added a version of the Cybersecurity Information Sharing Act to the omnibus. It starts on page 1728. This will be my working thread.

(1745) They’ve changed what gets stripped from “person” to “individual,” thereby not requiring that corporate names get stripped.

(1747) The bill takes out CISA’s requirement of getting authorization before using an indicator for law enforcement.

(1753) The section ensuring there are audit capabilities (but not that they’re used) takes out this language, which was in CISA:

(C) consistent with this title, any other applicable provisions of law, and the fair information practice principles set forth in appendix A of the document entitled “National Strategy for Trusted Identities in Cyberspace” and published by the President in April, 2011, govern the retention, use, and dissemination by the Federal Government of cyber threat indicators shared with the Federal Government under this title, including the extent, if any, to which such cyber threat indicators may be used by the Federal Government; and

(1754) This section replaced an “or” in CISA with the underlined “and,” which I think sharply constrains the list of stuff that shouldn’t be shared. (It also replaces “person” with “individual” as consistent with other changes.)

(i) Identification of types of information that would qualify as a cyber threat indicator under this title that would be unlikely to include information that—

(I) is not directly related to a cybersecurity threat; and

(II) is personal information of a specific individual or information that identifies a specific individual.

(1755) OmniCISA requires the AG to make both the interim and final privacy guidelines public; CISA had made only the interim ones public.

jointly issue and make publicly available final guidelines

(1760) The clause noting that other info sharing is still permissible adds the underlined language.

(i) reporting of known or suspected criminal activity, by a non-Federal entity to any other non-Federal entity or a Federal entity, including cyber threat indicators or defensive measures shared with a Federal entity in furtherance of opening a Federal law enforcement investigation;

(1761-2) The bill basically gives DHS 90 days (60, really) to set up its portal before the President can declare the need to set up a competing one. This also involves slightly different timing on notice to Congress of whether DHS manages to pull it together in 90 days.

IN GENERAL.—At any time after certification is submitted under subparagraph (A), the President may designate an appropriate Federal entity, other than the Department of Defense (including the National Security Agency), to develop and implement a capability and process as described in paragraph (1) in addition to the capability and process developed under such paragraph by the Secretary of Homeland Security, if, not fewer than 30 days before making such designation, the President submits to Congress a certification and explanation that—

(I) such designation is necessary to ensure that full, effective, and secure operation of a capability and process for the Federal Government to receive from any non-Federal entity cyber threat indicators or defensive measures under this title;

(1766) OmniCISA is slightly better on threat-of-death sharing, as the threat must be specific.

(iii) the purpose of responding to, or otherwise preventing or mitigating, a specific threat of death, a specific threat of serious bodily harm, or a specific threat of serious economic harm, including a terrorist act or a use of a weapon of mass destruction;

(1768-9) Wow. The regulatory exception is even bigger than it was under CISA. Here’s what CISA said (underline added in both):

(i) IN GENERAL.—Except as provided in clause (ii), cyber threat indicators and defensive measures provided to the Federal Government under this title shall not be directly used by any Federal, State, tribal, or local government to regulate, including an enforcement action, the lawful activities of any entity, including activities relating to monitoring, operating defensive measures, or sharing cyber threat indicators.

And here’s what OmniCISA says:

(i) IN GENERAL.—Except as provided in clause (ii), cyber threat indicators and defensive measures provided to the Federal Government under this title shall not be  used by any Federal, State, tribal, or local government to regulate, including an enforcement action, the lawful activities of any non-Federal entity or any activities taken by a non-Federal entity pursuant to mandatory standards, including activities relating to monitoring, operating defensive measures, or sharing cyber threat indicators.

(1771) The Rule of Construction is more permissive in OmniCISA, too. Compare CISA:

(c) Construction.—Nothing in this section shall be construed—

(1) to require dismissal of a cause of action against an entity that has engaged in gross negligence or willful misconduct in the course of conducting activities authorized by this title; or

With OmniCISA.

CONSTRUCTION.—Nothing in this title shall be construed—

(1) to create—

(A) a duty to share a cyber threat indicator or defensive measure; or

(B) a duty to warn or act based on the receipt of a cyber threat indicator or defensive measure; or

Whereas CISA still permitted the government to pursue a company for gross negligence, OmniCISA instead makes clear that companies can ignore cyber threat information the government shares with them.

(1771) I’m going to circle back and compare the various oversight reporting from all four bills in more detail. But the big takeaway is that they’ve stripped a PCLOB review from all 3 of the underlying bills.

(1782) I’m not sure what this new language does. A lawyer who works in this area thinks it protects Brady obligations. I hope he’s right and it’s not, instead, a way to erode limits on the use of shared information for prosecution.

(n) CRIMINAL PROSECUTION.—Nothing in this title shall be construed to prevent the disclosure of a cyber threat indicator or defensive measure shared under this title in a case of criminal prosecution, when an applicable provision of Federal, State, tribal, or local law requires disclosure in such case.

(1783) In a (long-overdue) report on how to deal with hacking, OmniCISA takes out a report on this topic specifically done for the Foreign Relations Committee, suggesting this information will remain classified and potentially unavailable to the committees. I guess they have to hide Israel’s spying.

(2) A list and an assessment of the countries and nonstate actors that are the primary threats of carrying out a cybersecurity threat, including a cyber attack, theft, or data breach, against the United States and which threaten the United States national security, economy, and intellectual property.

(1785) This is the sunset language. It doesn’t seem to sunset anything.

(a) IN GENERAL.—Except as provided in subsection (b), this title and the amendments made by this title shall be effective during the period beginning on the date of the enactment of this Act and ending on September 30, 2025.


“Encryption” Is Just Intel Code for “Failure to Achieve Omniscience”

After receiving a briefing on the San Bernardino attack, Richard Burr went out and made two contradictory claims. First, Burr (and/or other sources for The Hill) said that there was no evidence Tashfeen Malik and Syed Rizwan Farook used encryption.

Lawmakers on Thursday said there was no evidence yet that the two suspected shooters used encryption to hide from authorities in the lead-up to last week’s San Bernardino, Calif., terror attack that killed 14 people.

“We don’t know whether it played a part in this attack,” Senate Intelligence Committee Chairman Richard Burr (R-N.C.) told reporters following a closed-door briefing with federal officials on the shootings.

That’s consistent with what we know so far. After all, a husband and wife wouldn’t need to encrypt (or have a way of encrypting) their communications with each other, as those would be mostly face-to-face. The fact that they tried to destroy their devices (and apparently got rid of a still undiscovered hard drive) suggests they weren’t protecting those via encryption, but rather via physical destruction. That doesn’t rule out using both, but the FBI would presumably know if the devices it has reconstructed were encrypted.

So it makes sense that the San Bernardino attacks did not use encryption.

But then later in the same discussion with reporters, Burr suggested Malik and Farook must have used encryption because the IC didn’t know about their attack.

Burr suggested it might have even played a role in the accused San Bernardino shooters — Tashfeen Malik and Syed Rizwan Farook — going unnoticed for years, despite the FBI saying they had been radicalized for some time.

“Any time you glean less information at the beginning, clearly encryption probably played a role in it,” he said. “And there were a lot of conversations that went on between these two individuals before [Malik] came to the United States that you would love to have some insight to other than after an attack took place.”

This is a remarkable comment!

After all, the FBI and NSA don’t even read all the conversations of foreigners (which is what Malik legally still was) that they can collect. Indeed, if these conversations were in Arabic or Urdu, the IC would only have had them translated if there were some reason to find them interesting. And even in spite of the pair’s early shooting training, it’s not apparent they had extensive conversations, particularly not online, to guide that training.

Those details would make it likely that the IC would have had no reason to be interested. To say nothing of the fact that ultimately “radicalization” is a state of mind, and thus far, NSA doesn’t have a way to decrypt thoughts.

But this is the second attack in a row, after Paris, where Burr and others have suggested that their lack of foreknowledge of the attack makes it probable the planners used encryption. Burr doesn’t even seem to be considering the number of other things, such as good operational security, language barriers, and metadata failures, that might lead the IC to miss warning signs, even assuming they’re collecting everything (there should have been no legal limits on their ability to collect on Malik).

We’re not having a debate about encryption anymore. We’re debating making the Internet less secure to excuse the IC’s less-than-perfect omniscience.
