
A Tale of Two Malware Researchers: DOJ Presented Evidence Yu Pingan Knew His Malware Was Used as Such

The government revealed the arrest in California of a Chinese national, Yu Pingan, who is reportedly associated with the malware involved in the OPM hack.

The complaint that got him arrested, however, has nothing to do with the OPM hack. Rather, it involves four US companies (none of which are in the DC area), at least some of which are probably defense contractors.

Company A was headquartered in San Diego, California, Company B was headquartered in Massachusetts, Company C was headquartered in Los Angeles, California, and Company D was headquartered in Arizona.

Yu is introduced as a “malware broker.” But deep in the affidavit, the FBI describes Yu as running a site selling malware as a penetration testing tool.

UCC #1 repeatedly obtained malware from YU. For example, on or about March 3, 2013, YU emailed UCC #1 samples of two types of malware: “adjesus” and “hkdoor.” The FBI had difficulty deciphering adjesus, but open source records show that it was previously sold as a penetration testing tool (which is what legitimate security researchers call their hacking tools) on the website penelab.com. Part of the coding for the second piece of malware, hkdoor, indicated that “Penelab” had created it for a customer named “Fangshou.” Seized communications and open source records show that YU ran the penelab.com website (e.g., he used his email address and real name to register it) and that UCC #1 used the nickname “Fangshou.”

For that reason — and because Yu was arrested as he arrived in the US for a conference — a few people have questioned whether a fair comparison can be made between Yu and Marcus Hutchins, AKA MalwareTech.

It’s an apples to oranges comparison, as DOJ rather pointedly hasn’t shared the affidavit behind Hutchins’ arrest warrant, so we don’t have as much detail on Hutchins. That said, Hutchins’ indictment doesn’t even allege any American victims, whereas Yu’s complaint makes it clear he (or his malware) was involved in hacking four different American companies (and yet, thus far, Yu has been charged with fewer crimes than Hutchins has).

In any case, at least what we’ve been given shows a clear difference. Over a year before Yu provided Unindicted Co-Conspirator 1 (UCC #1) with two more pieces of malware, the complaint shows, UCC #1 told Yu he had compromised Microsoft Korea’s domain.

YU and UCC #1’s communications include evidence tying them to the Sakula malware. On or about November 10, 2011, UCC #1 told YU that he had compromised the legitimate Korean Microsoft domain used to download software updates for Microsoft products. UCC #1 provided the site http://update.microsoft.kr/hacked.asp so YU could confirm his claim. UCC #1 explained that he could not use the URL to distribute fraudulent updates, but the compromised site could be used for hacking attacks known as phishing.

So unlike in Hutchins’ case, DOJ has provided evidence (and there’s more in the affidavit) that Yu knew he was providing malware to hack companies.

Indeed, unless the government has a lot more evidence against Hutchins (more on that in a second), it’s hard to see why the two have been charged with the same two crimes: conspiracy to violate the CFAA and a substantive CFAA violation.

HPSCI: We Must Spy Like Snowden To Prevent Another Snowden

I was going to write about this funny part of the HPSCI report anyway, but it makes a nice follow-up to my post on Snowden and cosmopolitanism, on the importance of upholding American values to keep the servants of the hegemon working to serve it.

As part of its attack on Edward Snowden released yesterday, the House Intelligence Committee accused Snowden of attacking his colleagues’ privacy.

To gather the files he took with him when he left the country for Hong Kong, Snowden infringed on the privacy of thousands of government employees and contractors. He obtained his colleagues’ security credentials through misleading means, abused his access as a systems administrator to search his co-workers’ personal drives, and removed the personally identifiable information of thousands of IC employees and contractors.

I have no doubt that many — most, perhaps — of Snowden’s colleagues feel like he violated their privacy, especially as their identities are now in the possession of a number of journalists. So I don’t make light of that, or the earnestness with which HPSCI’s sources presumably made this complaint (though IC employee privacy is one of the things all journalists who have reported these stories have redacted, to the best of my knowledge).

But it’s a funny claim for several reasons. Even ignoring that what the NSA does day in and day out is search people’s personal communications (including those of millions of innocent people), this kind of broad access is the definition of a SysAdmin’s job.

HPSCI apparently never had a problem with techs getting direct access to our dragnet metadata, as they had, and (now working in pairs) still have, for those of us two degrees away from a suspect.

Plus, HPSCI has never done anything publicly to help the 21 million clearance holders whose PII China now holds. Is it possible they’re more angry at Snowden than they are at China’s hackers, who have more ill-intent than Snowden?

But here’s the other reason this complaint is laugh-out-loud funny. HPSCI closes its report this way:

Finally, the Committee remains concerned that more than three years after the start of the unauthorized disclosures, NSA and the IC as a whole, have not done enough to minimize the risk of another massive unauthorized disclosure. Although it is impossible to reduce the chance of another Snowden to zero, more work can and should be done to improve the security of the people and the computer networks that keep America’s most closely held secrets. For instance, a recent DOD Inspector General report directed by the Committee found that NSA had yet to effectively implement its post-Snowden security improvements. The Committee has taken actions to improve IC information security in the Intelligence Authorization Acts for Fiscal Years 2014, 2015, 2016, and 2017, and looks forward to working with the IC to continue to improve security.

First, that timeline — showing an effort to improve network security in each year following the Snowden leaks — is completely disingenuous. It neglects to mention that the Intel Committees have actually been trying for longer than that. In the wake of the Manning leaks, it became clear that DOD’s networks were sieve-like. Congress tried to require network monitoring in the 2012 Intelligence Authorization. But the Administration responded by insisting 2013 — 3 years after Manning’s leaks — was too soon to plug all the holes in DOD’s networks. One reason Snowden succeeded in downloading all those files is because the network monitoring hadn’t been rolled out in Hawaii yet.

So HPSCI is pretending the Intel Committees’ efforts began only after Snowden, when in fact they preceded him by several years and still failed to stop him.

The other reason I find this paragraph — which appears just four paragraphs after it attacks Snowden for the invasion of his colleagues’ privacy — so funny is that in the 2014 Intelligence Authorization (that is, the first one after the Snowden leaks), HPSCI codified an insider threat program, requiring the Director of National Intelligence to,

ensure that the background of each employee or officer of an element of the intelligence community, each contractor to an element of the intelligence community, and each individual employee of such a contractor who has been determined to be eligible for access to classified information is monitored on a continual basis under standards developed by the Director, including with respect to the frequency of evaluation, during the period of eligibility of such employee or officer of an element of the intelligence community, such contractor, or such individual employee to such a contractor to determine whether such employee or officer of an element of the intelligence community, such contractor, and such individual employee of such a contractor continues to meet the requirements for eligibility for access to classified information;

This insider threat program searches IC employees’ hard drives (one of Snowden’s sins).

Then, the following year, HPSCI got even more serious, mandating that the Director of National Intelligence look into credit reports, commercially available data, and social media accounts to hunt down insider threats, including by watching for changes in ideology like the one Snowden exhibited when he developed an outspoken concern about the Fourth Amendment.

I mean, on one hand, this isn’t funny at all — and I imagine that Snowden’s former colleagues blame him that they have gone from having almost no privacy as cleared employees to having none. This is what people like Carrie Cordero mean when they regret the loss of trust at the agency.

But as I have pointed out in the past, if someone like Snowden — who at least claims to have had good intentions — can walk away with the crown jewels, we should presume some much more malicious and/or greedy people have as well.

But here’s the thing: you cannot, as Cordero does, say that the “foreign intelligence collection activities [are] done with detailed oversight and lots of accountability” if it is, at the same time, possible for a SysAdmin to walk away with the family jewels, including raw data on targets. If Snowden could take all this data, then so can someone maliciously spying on Americans — it’s just that that person wouldn’t go to the press to report on it and so it can continue unabated. In fact, in addition to rolling out more whistleblower protections in the wake of Snowden, NSA has made some necessary changes (such as not permitting individual techs to have unaudited access to raw data anymore, which appears to have been used, at times, as a workaround for data access limits under FISA), even while ratcheting up the insider threat program that will, as Cordero suggested, chill certain useful activities. One might ask why the IC moved so quickly to insider threat programs rather than just implementing sound technical controls.

The Intelligence world has gotten itself into a pickle, at once demanding that a great deal of information be shared broadly, while trying to hide what information that includes, even from American citizens. It aspires to be at once an enormous fire hose and a leak-proof faucet. That is the inherent impossibility of letting the secret world grow so far beyond management — trying to make a fire hose leak proof.

Some people in the IC get that — I believe this is one of the reasons James Clapper has pushed to rein in classification, for example.

But HPSCI, the folks overseeing the fire hose? They don’t appear to realize that they’re trying to replicate and expand Snowden’s privacy violations, even as they condemn them.

Mix and Match Cyber-Priorities Likely Elevates Gut Check To National Level

As I noted yesterday, earlier this week President Obama rolled out a new Presidential Policy Directive, PPD 41, which made some changes to the way the US will respond to cyberattacks. (PPD, annex, fact sheet, guideline) I focused yesterday on the shiny new Cyber Orange Alert system. But the overall PPD was designed to better manage the complexity of responding to cyberattacks — and was a response, in part, to confusion from private sector partners about the role of various government agencies.

That experience has allowed us to hone our approach but also demonstrated that significant cyber incidents demand a more coordinated, integrated, and structured response.  We have also heard from the private sector the need to provide clarity and guidance about the Federal government’s roles and responsibilities.   The PPD builds on these lessons and institutionalizes our cyber incident coordination efforts in numerous respects,

The PPD integrates response to cyberattacks with the existing PPD on responding to physical incidents, which is necessary (actually, the hierarchy should probably be reversed, as our physical infrastructure is in shambles) but is also scary because there’s a whole lot of executive branch authority that gets asserted in such things.

And the PPD sets out clear roles for responding to cyberattacks: “threat response” (investigating) is the FBI’s baby; “asset response” (seeing the bigger picture) is DHS’s baby; “intelligence support” (analysis) is ODNI’s baby, with lip service to the importance of keeping shit running, whether within or outside of the federal government.

To establish accountability and enhance clarity, the PPD organizes Federal response activities into three lines of effort and establishes a Federal lead agency for each:

  • Threat response activities include the law enforcement and national security investigation of a cyber incident, including collecting evidence, linking related incidents, gathering intelligence, identifying opportunities for threat pursuit and disruption, and providing attribution.   The Department of Justice, acting through the Federal Bureau of Investigation (FBI) and the National Cyber Investigative Joint Task Force (NCIJTF), will be the Federal lead agency for threat response activities.
  • Asset response activities include providing technical assets and assistance to mitigate vulnerabilities and reducing the impact of the incident, identifying and assessing the risk posed to other entities and mitigating those risks, and providing guidance on how to leverage Federal resources and capabilities.   The Department of Homeland Security (DHS), acting through the National Cybersecurity and Communications Integration Center (NCCIC), will be the Federal lead agency for asset response activities.  The PPD directs DHS to coordinate closely with the relevant Sector-Specific Agency, which will depend on what kind of organization is affected by the incident.
  • Intelligence Support and related activities include intelligence collection in support of investigative activities, and integrated analysis of threat trends and events to build situational awareness and to identify knowledge gaps, as well as the ability to degrade or mitigate adversary threat capabilities.  The Office of the Director of National Intelligence, through the Cyber Threat Intelligence Integration Center, will be the Federal lead agency for intelligence support and related activities.

In addition to these lines of effort, a victim will undertake a wide variety of response activities in order to maintain business or operational continuity in the event of a cyber incident.  We recognize that for the victim, these activities may well be the most important.  Such efforts can include communications with customers and the workforce; engagement with stakeholders, regulators, or oversight bodies; and recovery and reconstitution efforts.   When a Federal agency is a victim of a significant cyber incident, that agency will be the lead for this fourth line of effort.  In the case of a private victim, the Federal government typically will not play a role in this line of effort, but will remain cognizant of the victim’s response activities consistent with these principles and coordinate with the victim.

Thus far, this just seems like an effort to stop everyone from stepping on toes, though it also raises concerns for me about whether this is the first step (or the public sign) of Obama implementing a second portal for CISA, which would probably permit FBI to get Internet crime data directly without going through DHS’s current scrub process. Unspoken, of course, is that the necessity for a new PPD means there has been toe-stepping in incident response in the last while, which is particularly interesting when you consider the importance of the OPM breach and the related private sector hacks. Just as one example, is it possible that no one took the threat information from the Anthem hack and started looking around to see where else it was happening?

So yeah, some concerning things here, but I can see the interest in minimizing the toe-stepping as we continue to get pwned in multiple breaches.

Also, there’s no mention of NSA here. Shhhh. They’re here, as soon as an entity asks them for help, and from an intelligence perspective (with data laundered through FBI and ODNI and DHS).

Here’s what I find particularly interesting about all this.

The PPD — along with the fancy Cyber Orange Alert system — came out less than a week after DOJ’s Inspector General released a report on the FBI’s means of prioritizing cyber threats (which are different from cyber attacks). The report basically found that the FBI has improved its cyber response (there’s some interesting discussion about a 2012 reorganization into threat type rather than attack location that I suspect may have implications for both criminal venue and analytical integrity, including for the attack on the DNC server), but that the way in which it prioritized its work didn’t result in prioritizing the biggest threats, in part because it was basically a “gut check” and in part because the ranking process wasn’t done frequently enough to reflect changes in the nature of a given threat (there was a classified example of a threat that had grown but been missed, and another of conflicting measures in the two ways FBI assesses threats, both of which are likely very instructive). The report does mention the OPM hack as proof that the threat is getting bigger, which neither confirms nor denies that it was one of the classified issues redacted.

The FBI conducts a bureau-wide Threat Review and Prioritization (TRP) process, of which cyber is a part, and which happens to have the same number of outcomes — six — as PPD 41 does, though it is more of a table cross-referencing impact with mitigation (the colors come from DOJ IG, so comparing them to the PPD’s color scheme would be meaningless).

[Screenshot: the FBI TRP table cross-referencing impact and mitigation levels, as presented in the DOJ IG report]

And the FBI TRP asks some of the same questions as the PPD’s Cyber Orange Alert system does.

The FBI’s Directorate of Intelligence (DI) manages the TRP process and publishes standard guidance for the operational divisions and field offices to use, including the criteria for the impact level of the threat and the mitigation resources needed to address the threat. The FBI impact level criteria attempt to measure the likely damage to U.S. critical infrastructure, key resources, public safety, U.S. economy, or the integrity and operations of government agencies in the coming year based upon FBI’s current understanding of the threat issue. Impact level criteria seek to represent the negative consequences of the threat issue, nationally. The impact level criteria include: (1) these threat issues are likely to cause the greatest damage to national interests or public safety in the coming year; (2) these threat issues are likely to cause great damage to national interests or public safety in the coming year; (3) these threat issues are likely to cause moderate damage to national interests or public safety in the coming year; or (4) these threat issues are likely to cause minimal damage to national interests or public safety in the coming year (FBI emphasis added). One FBI official told us that these impact criteria questions, which are developed and controlled by the Directorate of Intelligence, are designed to be interpreted by the operational divisions.

The three levels of mitigation criteria, which also are standard across the FBI, measure the effectiveness of current FBI investigative and intelligence activity based upon the following general criteria: (1) effectiveness of FBI operational activities; (2) operational division understanding of the threat issue at the national level; and (3) evolution of the threat issue as it pertains to adapting or establishing mitigation action.
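To make the cross-referencing concrete, here is a minimal, hypothetical sketch of how four impact levels and three mitigation levels can be banded into six priority tiers, the same number of outcomes TRP and PPD 41 share. The level descriptions and the banding function below are illustrative assumptions, not the FBI’s actual (color-coded, partly redacted) table; the point is only that the inputs are coarse, ordinal judgments.

# A hypothetical TRP-style cross-reference: impact (1 = worst) crossed with
# mitigation (1 = weakest), banded into priority tiers 1 through 6.
# The descriptions paraphrase the DOJ IG report; the banding is assumed.

IMPACT_LEVELS = {
    1: "likely to cause the greatest damage in the coming year",
    2: "likely to cause great damage",
    3: "likely to cause moderate damage",
    4: "likely to cause minimal damage",
}

MITIGATION_LEVELS = {
    1: "current FBI activity ineffective, threat poorly understood",
    2: "partially effective activity and understanding",
    3: "effective activity, well-understood threat",
}

def priority_tier(impact: int, mitigation: int) -> int:
    """Band an (impact, mitigation) pair into a tier: 1 = most urgent, 6 = least."""
    urgency = (5 - impact) + (4 - mitigation)  # ranges from 2 up to 7
    return 8 - urgency                         # maps that range onto tiers 1..6

if __name__ == "__main__":
    for impact in IMPACT_LEVELS:
        for mitigation in MITIGATION_LEVELS:
            tier = priority_tier(impact, mitigation)
            print(f"impact {impact} x mitigation {mitigation} -> tier {tier}")

Even written out this way, the hard part is the two input judgments, which in the TRP are made by people arguing in a room rather than measured against anything objective.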

This is the system that people DOJ IG interviewed described as a “gut check.”

While the criteria are standardized, we found that they were inherently subjective. One FBI official told us that the prioritization of the threats was essentially a “gut check.” Other FBI officials told us that the TRP is vague and arbitrary. The Cyber Division Assistant Director told us that the TRP criteria are subjective and assessments can be based on the “loudest person in the room.”

There was some tweaking of this system in March, but DOJ IG said it didn’t affect the findings of this report.

FBI has another, newer system called Threat Examination and Scoping (TExAS; FBI claimed it was far more advanced in its own 9/11 review report a few years back), which it also only uses once a year, but which at least is driven by objective questions to carry out the prioritization. DOJ IG basically found this better system suffered from the things you always find at FBI: data entry problems, a lack of standard operating procedures, stove-piped management, and disconnection from FBI’s other data systems. But it said that if FBI fixed those issues and made TExAS more objective, it would be the tool the FBI needs to properly prioritize threats.

There’s one detail of particular interest. The report narrative described one advantage of TExAS as its ability to integrate information from other agencies and from foreign or private partners.

According to FBI officials, TExAS has the capability to include intelligence from other agencies, the United States Intelligence Community, private industry, and foreign partners to inform FBI’s prioritization and strategy. For example, a response in TExAS can be supported with documentation from a United States Intelligence Community partner for a threat as to which the FBI lacks visibility. The tool also is capable of providing data visualizations, which can help inform FBI decision makers about prioritizing or otherwise allocating resources toward new national security cyber intrusion threats, or towards national security intrusion threats where more intelligence is needed.

But way down in the appendix, it describes what appears to be this same ability to integrate information on which the “FBI lacks visibility” as a “classification limitation” that requires analysts to review the rankings to tweak them to account for the classified information.

[Screenshot: the DOJ IG report appendix passage describing this “classification limitation”]

In other words, because of classification issues (see?? I told you NSA was here!!), even the system that might become objective will still be subject to these reviews by analysts who are privy to the secret information.

Now I’m not sure that makes PPD 41’s own prioritization system fatally flawed — aside from the fact that it seems like it will be a gut check, too. Though it does lead me to wonder whether FBI didn’t adequately prioritize some growing threat (cough, OPM) and as a result — the DOJ IG report admits — simply wouldn’t dedicate the resources to investigate it until it really blew up. Under PPD-41, it would seem ODNI would do some of this anyway, which would eliminate some of the visibility problems.

I point all this out, mostly, because of the timing. Last week, DOJ IG said FBI needed to stop gut checking which cyber threats were most important. This week, the White House rolled out a broad new PPD, including a somewhat different assessment system that determines how many federal agencies get to step on cyber-toes.

The OPM Hack Is One Big Reason Apple Couldn’t Guarantee Its Ability to Keep FBiOS Safe

Underlying the legal debate about whether the government can demand that Apple write an operating system that will make it easier to brute force Syed Rizwan Farook’s phone is another debate, about whether the famously secretive tech company could keep such code safe from people trying to compromise iPhones generally.
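Some rough arithmetic helps explain why the requested software would make brute forcing feasible. Stock iOS imposes escalating delays after failed passcode guesses and can erase the device after ten failures, so the government asked for a build that disables those protections and accepts guesses electronically. Once they are gone, the remaining bottleneck is the on-device key derivation, commonly cited at roughly 80 milliseconds per attempt; the sketch below treats that figure, and the candidate passcode lengths, as assumptions.

# Worst-case brute-force times once delays and auto-erase are removed,
# assuming ~80 ms of on-device key derivation per guess (a commonly cited
# figure, treated here as an assumption rather than a measurement).

ATTEMPT_SECONDS = 0.08

def exhaust_hours(keyspace: int, attempt_seconds: float = ATTEMPT_SECONDS) -> float:
    """Hours needed to try every possible passcode at the assumed per-guess cost."""
    return keyspace * attempt_seconds / 3600

if __name__ == "__main__":
    for label, keyspace in [
        ("4-digit PIN", 10 ** 4),                    # about 13 minutes
        ("6-digit PIN", 10 ** 6),                    # about 22 hours
        ("6-char lowercase alphanumeric", 36 ** 6),  # about 5.5 years
    ]:
        print(f"{label}: ~{exhaust_hours(keyspace):,.1f} hours worst case")

Short numeric passcodes fall quickly under these assumptions, which is why the fight below is less about whether the technique works than about whether the resulting build, once created, could be kept contained.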

The government asserted, in its response to Apple’s motion to overturn the All Writs Act order, that Apple’s concerns about retaining such code are overblown.

[C]ontrary to Apple’s stated fears, there is no reason to think that the code Apple writes in compliance with the Order will ever leave Apple’s possession. Nothing in the Order requires Apple to provide that code to the government or to explain to the government how it works. And Apple has shown it is amply capable of protecting code that could compromise its security. For example, Apple currently protects (1) the source code to iOS and other core Apple software and (2) Apple’s electronic signature, which as described above allows software to be run on Apple hardware. (Hanna Decl. Ex. DD at 62-64 (code and signature are “the most confidential trade secrets [Apple] has”).) Those —which the government has not requested—are the keys to the kingdom. If Apple can guard them, it can guard this.

Even if “criminals, terrorists, and hackers” somehow infiltrated Apple and stole the software necessary to unlock Farook’s iPhone (Opp. 25), the only thing that software could be used to do is unlock Farook’s iPhone.

That’s explicitly a citation to this passage from Apple’s original motion.

The alternative—keeping and maintaining the compromised operating system and everything related to it—imposes a different but no less significant burden, i.e., forcing Apple to take on the task of unfailingly securing against disclosure or misappropriation the development and testing environments, equipment, codebase, documentation, and any other materials relating to the compromised operating system. Id. ¶ 47. Given the millions of iPhones in use and the value of the data on them, criminals, terrorists, and hackers will no doubt view the code as a major prize and can be expected to go to considerable lengths to steal it, risking the security, safety, and privacy of customers whose lives are chronicled on their phones.

In pointing to that passage, DOJ ignored the first passage in the Apple motion that addresses the danger of hackers: one that notes the government itself can’t keep its secrets safe, as best exemplified by the Office of Personnel Management hack.

Since the dawn of the computer age, there have been malicious people dedicated to breaching security and stealing stored personal information. Indeed, the government itself falls victim to hackers, cyber-criminals, and foreign agents on a regular basis, most famously when foreign hackers breached Office of Personnel Management databases and gained access to personnel records, affecting over 22 million current and former federal workers and family members.

By arguing that Apple can keep its secrets safe while ignoring the evidence that the government itself can’t, the government implicitly conceded that Apple is better at keeping secrets than the government.

Of course, it’s not that simple. That’s because the millions of private sector employees who play a role in the secretive functions have clearances too. They were also compromised in the OPM hack. Thus, by failing to keep its own secrets, the government has provided China a ready made dossier of information it can use to compromise all the private sector clearance holders, in addition to the government personnel.

Which is why — in addition to his comment that it was “not reasonable to draw such a conclusion [that hackers could not hack iPhones from the lock screen] based solely on publicly released exploits” — I find this passage from Apple Manager of User Privacy Erik Neuenschwander’s supplemental declaration, submitted to accompany Apple’s reply, to be rather pointed.

Thus, as noted in my initial declaration (ECF No. 16-33), the initial creation of GovtOS itself creates serious ongoing burdens and risks. This includes the risk that if the ability to install GovtOS got into the wrong hands, it would open a significant new avenue of attack, undermining the security protections that Apple has spent years developing to protect its customers.

There would also be a burden on the Apple employees responsible for designing and implementing GovtOS. Those employees, if identified, could themselves become targets of retaliation, coercion, or similar threats by bad actors seeking to obtain and use GovtOS for nefarious purposes. I understand that such risks are why intelligence agencies often classify the names and employment of individuals with access to highly sensitive data and information, like GovtOS. The government’s dismissive view of the burdens on Apple and its employees seems to ignore these and other practical implications of creating GovtOS.

From the briefing in this case, we know that Neuenschwander was part of the then-secret discussions about how to access Farook’s phone before DOJ started leaking to the press about an impending AWA order. That means he almost certainly has a clearance (and may well deal with even more sensitive discussions related to FISA orders). We also know that he would be involved in writing what he calls GovtOS. You would have to go no further than Neuenschwander to identify a person on whom China has sensitive information who would also have knowledge of FBiOS (though there are probably a handful of others).

So he’s not just talking about nameless employees when he talks about the burden of implementing this order. He’s talking about himself. Because of government negligence, his own private life has been exposed to China. And, in part because DOJ chose to conduct this fight publicly, his own role (which admittedly was surely known to China and other key US adversaries before this fight) has been made public in a way NSA’s own engineers never would be.

FBI’s request of Apple — particularly coupled with OPM’s negligence — makes people like Neuenschwander a target. Which is why, no matter how good Apple is at keeping its own secrets, that may not be sufficient to keep this code safe.

Tuesday Morning: Changing the Tenor

Once in a while, I indulge in the musical equivalent of eating chocolate instead of a wholesome meal. I’ll listen to my favorite tenors on a continuous loop for an afternoon. I have a weak spot for Luciano Pavarotti and Franco Corelli, though the latter isn’t one of the Three Tenors.

Speaking of which, this video features a really bizarre event: the Three Tenors performing at Los Angeles’ Dodger Stadium in 1994. Poppy and Barbara Bush are there in the audience, too. What a supremely odd venue! And yet these guys did a bang-up job in such a huge, open space. Pavarotti’s Nessun Dorma at ~1:05 is my favorite cut, but it’s all fun.

Now let’s change the tenor…

Former Microsoft CEO Bill Gates sides with FBI against Apple
Gates isn’t the best salesman for this job, promoting compelled software. Given Gates’ role as technology adviser to Microsoft’s current CEO Satya Nadella, how persistently invasive Windows 10 is, and Microsoft software’s leaky history, Gates comes off as a soldato for USDOJ. Do read the article; it’s as if Gates was so intent on touting USDOJ’s line that he didn’t bother to read any details about USDOJ’s demands on Apple.

UPDATE — 10:25 AM EST — Poor Bill, so misunderstood, now backpedaling on his position about Apple’s compliance. This, from a Fortune 100 technology adviser…~shaking my head~

Gates talks out of the other side of his face on climate change
Unsurprisingly, Bill Gates also looks less than credible when he pleads with students for an ‘energy miracle’ to tackle climate change. This is shameless: first, for guilt-tripping minors in high school; second, for the blatant hypocrisy. The Bill and Melinda Gates Foundation continues to hold investments in ExxonMobil, BP, and Shell because of their yields. Not exactly a commitment to alternative energy there. How’s that investment strategy working for you now, Gates?

Fossil fuel-based industries: wall-to-wall bad news
Speaking of crappy investments in dirty hydrocarbons, conditions are just plain ugly.

Office of Personnel Management’s CIO steps down
Donna K. Seymour stepped down from her role, the second OPM management team member to leave after the massive hack of U.S. government personnel records. She was scheduled to appear before Congress this week; that hearing has now been canceled by House Oversight and Government Reform Committee chair Jason Chaffetz. Huh. That’s convenient. Wonder if she would have said something that reflected badly on a previous GOP administration? This bit from the linked article is just…well…

FBI Director James Comey called the hacks an “enormous breach,” saying his own data were stolen. U.S. authorities blamed China, which strongly denied the accusation before it said in December that it had arrested several “criminal” Chinese hackers connected to the breach.

Wow, I wonder what China could do if they had access to every U.S. government employee’s iPhone? Anybody asked Comey what kind of phone he carries?

That’s a wrap. I’m off to listen to something sung in a sweet tenor voice.

Government (and Its Expensive Contractors) Really Need to Secure Their Data Collections

Given two recent high-profile hacks, the government needs to either do a better job of securing its data collection and sharing process, or presume people will get hurt because of it.

After the hackers Crackas With Attitude hacked John Brennan, they went on to hack FBI’s Deputy Director Mark Giuliano as well as a law enforcement portal run by the FBI. The hack of the latter hasn’t gotten as much attention — thus far, WikiLeaks has not claimed to have the data, but upon closer examination of the data obtained, it appears it might provide clues and contact information about people working undercover for the FBI.

Then, the hackers showed Wired’s Kim Zetter what the portal they had accessed included. Here’s a partial list:

Enterprise File Transfer Service—a web interface to securely share and transmit files.

Cyber Shield Alliance—an FBI Cybersecurity partnership initiative “developed by Law Enforcement for Law Enforcement to proactively defend and counter cyber threats against LE networks and critical technologies,” the portal reads. “The FBI stewards an array of cybersecurity resources and intelligence, much of which is now accessible to LEA’s through the Cyber Shield Alliance.”

IC3—“a vehicle to receive, develop, and refer criminal complaints regarding the rapidly expanding arena of cyber crime.”

Intelink—a “secure portal for integrated intelligence dissemination and collaboration efforts”

National Gang Intelligence Center—a “multi-agency effort that integrates gang information from local, state, and federal law enforcement entities to serve as a centralized intelligence resource for gang information and analytical support.”

RISSNET—which provides “timely access to a variety of law enforcement sensitive, officer safety, and public safety resources”

Malware Investigator—an automated tool that “analyzes suspected malware samples and quickly returns technical information about the samples to its users so they can understand the samples’ functionality.”

eGuardian—a “system that allows Law Enforcement, Law Enforcement support and force protection personnel the ability to report, track and share threats, events and suspicious activities with a potential nexus to terrorism, cyber or other criminal activity.”

While the hackers haven’t said whether they’ve gotten into these information sharing sites, they clearly got as far as the portal to the tools that let investigators share information on large networked investigations, targeting things like gangs, other organized crime, terrorists, and hackers. If hackers were to access those information sharing networks, they might be able both to monitor investigations into such networked crime groups and (using credentials they already hacked) to make false entries. And all that’s before CISA vastly expands this info sharing.

Meanwhile, the Intercept reported receiving 2.5 years of recorded phone calls — amounting to 70 million recorded calls — from one of the nation’s largest jail phone providers, Securus. Its report focuses on proving that Securus is not defeat-listing calls to attorneys, meaning it has breached attorney-client privilege. As Scott Greenfield notes, that’s horrible but not at all surprising.

But on top of that, the Intercept’s source reportedly obtained these recorded calls by hacking Securus. While we don’t have details of how that happened, that does mean all those calls were accessible to be stolen. If Intercept’s civil liberties-motivated hacker can obtain the calls, so can a hacker employed by organized crime.

The Intercept notes that even calls to prosecutors were online (which might include discussions from informants). But it would seem just calls to friends and associates would prove of interest to certain criminal organizations, especially if they could pinpoint the calls (which is, after all, the point). As Greenfield notes, defendants don’t usually listen to their lawyers’ warnings — or those of the signs by the phones saying all calls will be recorded — and so they say stupid stuff to everyone.

So we tell our clients that they cannot talk about anything on the phone. We tell our clients, “all calls are recorded, including this one.”  So don’t say anything on the phone that you don’t want your prosecutor to hear.

Some listen to our advice. Most don’t. They just can’t stop themselves from talking.  And if it’s not about talking to us, it’s about talking to their spouses, their friends, their co-conspirators. And they say the most remarkable things, in the sense of “remarkable” meaning “really damaging.”  Lawyers only know the stupid stuff they say to us. We learn the stupid stuff they say to others at trial. Fun times.

Again, such calls might be of acute interest to rival gangs (for example) or co-conspirators who have figured out someone has flipped.

It’s bad enough the government left OPM’s databases insecure, and with them sensitive data on 21 million clearance holders.

But it looks like key law enforcement data collections are not much more secure.

Hacking John Brennan, Hacking OPM

In Salon, I’ve got my take on the hack of John Brennan’s AOL account by a 13-year-old stoner.

While I think it sucks that WikiLeaks posted unredacted data on Brennan’s family, I’m not at all sympathetic to Brennan himself. After all he’s the guy who decided hacking his SSCI overseers would be appropriate. He’s one of the people who’ve been telling us we have no expectation of privacy in the kinds of data hackers obtained from Verizon — alternate phone number, account ID, password, and credit card information — for years.

But most of all, I think we should remember that Brennan left this data on an AOL server through his entire Obama Administration career, which includes 4 years of service as Homeland Security Czar, a position which bears key responsibility for cybersecurity.

Finally, this hack exposes the Director of the CIA exercising almost laughable operational security. The files appear to date from the period leading up to Brennan’s appointment as White House Homeland Security Czar, where a big part of Brennan’s job was to prevent hacks in this country. To think he was storing sensitive documents on an AOL server — AOL! — while in that role, really demonstrates how laughable are the practices of those who purport to be fighting hackers as the biggest threat to the country. For at least 6 years, the Homeland Security Czar, then the CIA Director — one of the key intelligence officials throughout the Obama Administration — left that stuff out there for some teenagers to steal.

Hacking is a serious problem in this country. Like Brennan, private individuals and corporations suffer serious damage when they get hacked (and the OPM hack of Brennan’s materials may be far more serious). Rather than really fixing the problem, the intelligence community is pushing to give corporations regulatory immunity in exchange for sharing information that won’t be all that useful.

A far more useful initial step in securing the country from really basic types of hacking would be for people like Brennan to stop acting in stupid ways, to stop leaving both their own and the public’s sensitive data in places where even stoned kids can obtain it, to provide a good object lesson in how to limit the data that might be available for malicious hackers to steal.

I would add, however, that there’s one more level of responsibility here.

As I noted in my piece, Brennan’s not the only one who got his security clearance application stolen recently. He is joined in that by 21 million other people, most of whom don’t have a key role in cybersecurity and counterintelligence. Most of those 21 million people haven’t even gotten official notice that their very sensitive data got hacked by one of this country’s adversaries — not even those people who might be particularly targeted by China. Like Brennan, the families of those people have all been put at risk. Unlike Brennan, they didn’t get to choose to leave that data sitting on a server.

In fact, John Brennan and his colleagues have not yet put in place a counterintelligence plan to protect those 21 million people.

If it sucks that John Brennan’s kids got exposed by a hacker (and it does), then it sucks even more that people with far fewer protections and less authority to fix things got exposed, as well.

John Brennan should focus on that, not on the 13-year-old stoner who hacked his AOL account.

BREAKING: OPM and DOD (Claim They) Don’t Think Fingerprint Databases Are All That Useful

In the most negative news dump released behind the cover of Pope Francis’ skirts, the Office of Personnel Management just announced that, contrary to previous reports that 1.1 million people had had their fingerprints stolen from OPM’s databases, 5.6 million have.

Aside from the big numbers involved, there are several interesting aspects of this announcement.

First, it seems OPM had an archive of records on 4.5 million people, including fingerprint data, that it hadn’t realized was there at first.

As part of the government’s ongoing work to notify individuals affected by the theft of background investigation records, the Office of Personnel Management and the Department of Defense have been analyzing impacted data to verify its quality and completeness. During that process, OPM and DoD identified archived records containing additional fingerprint data not previously analyzed.

If, as it appears, this means OPM had databases of key counterintelligence lying around it wasn’t aware of (and therefore wasn’t using), it suggests Ron Wyden’s concern that the government is retaining data unnecessarily is absolutely correct.

Rather bizarrely, upon learning that someone found and went through archived databases to obtain more fingerprint data, “federal experts” claim that “as of now, the ability to misuse fingerprint data is limited.”

As EFF just revealed, since February the FBI has been busy adding fingerprint data it gets when it does background checks on job applicants into its Next Generation Identification database.

Being a job seeker isn’t a crime. But the FBI has made a big change in how it deals with fingerprints that might make it seem that way. For the first time, fingerprints and biographical information sent to the FBI for a background check will be stored and searched right along with fingerprints taken for criminal purposes.

The change, which the FBI revealed quietly in a February 2015 Privacy Impact Assessment (PIA), means that if you ever have your fingerprints taken for licensing or for a background check, they will most likely end up living indefinitely in the FBI’s NGI database. They’ll be searched thousands of times a day by law enforcement agencies across the country—even if your prints didn’t match any criminal records when they were first submitted to the system.

This is the first time the FBI has allowed routine criminal searches of its civil fingerprint data. Although employers and certifying agencies have submitted prints to the FBI for decades, the FBI says it rarely retained these non-criminal prints. And even when it did retain prints in the past, they “were not readily accessible or searchable.” Now, not only will these prints—and the biographical data included with them—be available to any law enforcement agent who wants to look for them, they will be searched as a matter of course along with all prints collected for a clearly criminal purpose (like upon arrest or at time of booking).

In its PIA explaining the move, FBI boasts that this will serve as “an ‘ongoing’ background check that permits employers, licensors, and other authorized entities to learn of criminal conduct by a trusted individual.” To suggest that a massive database of fingerprints can provide the FBI real-time updates on certain behaviors, but pretend it wouldn’t serve a similar purpose to the Chinese, defies logic. Heck, why is OPM keeping fingerprint information if it can’t be used? And of course, all that assumes none of the 5.6 million people affected has a fingerprint-authenticating iPhone.

Of course this can be used, otherwise the Chinese wouldn’t have gone out of their way to get it!

But OPM’s claim that the Chinese just went out of their way to get that fingerprint data for no good reason provides the agency with a way to delay notification while FBI, DHS, DOD and “other members of the Intelligence Community” come up with ways to limit the damage of this.

If, in the future, new means are developed to misuse the fingerprint data, the government will provide additional information to individuals whose fingerprints may have been stolen in this breach.

After which OPM spends two paragraphs talking about the identity protection those whose identities have been stolen will get, as if that mitigates a huge counterintelligence problem.

It sure sounds like OPM is stalling on informing the people who’ve been exposed about how badly they’ve been exposed, under the incredible claim that databases of fingerprints aren’t all that useful.

Did the OPM Hack Fix Jack Goldsmith’s Anonymity Problem?

In a piece claiming “the most pressing problem the United States sees in its cyber relations with China [is] the widespread espionage and theft by China in U.S. public and private digital networks,” Jack Goldsmith argues any cyber agreement with China won’t be all that useful because we would never be able to verify it.

I still adhere what I once wrote in response to this: “in the absence of decent verification, we cannot be confident that transparency measures are in fact transparent, or that revealed doctrine is actual doctrine.  Nor can norms get much purchase in a world without serious attribution and verification; anonymity is a norm destroyer.”

Goldsmith says this in a piece that claims to adopt Sanger’s expressed concerns about the proposed deal and what it won’t cover. Here’s Sanger:

But it seems unlikely that any deal coming out of the talks would directly address the most urgent problems with cyberattacks of Chinese origin, according to officials who spoke on the condition of anonymity to describe continuing negotiations.

Most of those attacks have focused on espionage and theft of intellectual property. The rules under discussion would have done nothing to stop the theft of 22 million personal security files from the Office of Personnel Management, which the director of national intelligence, James R. Clapper Jr., recently told Congress did not constitute an “attack” because it was intelligence collection — something the United States does, too.

The agreement being negotiated would also not appear to cover the use of tools to steal intellectual property, as the Chinese military does often to bolster state-owned industries, according to an indictment of five officers of the People’s Liberation Army last year. And it is not clear that the rules would prohibit the kind of attack carried out last year against Sony Pictures Entertainment, for which the United States blamed North Korea. That attack melted down about 70 percent of Sony’s computer systems.

So Sanger quotes James Clapper saying he doesn’t consider OPM an attack (for good reason), but says that’s one of the most urgent concerns about Chinese hacking. Clapper’s response doesn’t seem to substantiate Sanger’s claim about the centrality of that as a concern, though I think it is a huge concern. I’ll come back to this.

Then Sanger — in a piece that once again repeats the shitty reporting that last year’s indictment showed the theft of IP to bolster state-owned industries (see this post, but I’m working on a follow-up) — says the agreement won’t cover IP theft. Finally, Sanger says that the agreement might not cover a Sony Pictures-style hack, which the Chinese haven’t been accused of doing, so why would that be important in an agreement with the Chinese?

That last bit is where Goldsmith actually doesn’t adopt what Sanger has laid out. Indeed, he seems to say the agreement is about Sony type hacks.

[T]he ostensible “agreement” won’t have anything to do with the most pressing problem the United States sees in its cyber relations with China – the widespread espionage and theft by China in U.S. public and private digital networks.  The negotiation is mainly about cyberattacks (cyber operations that disrupt, destroy, degrade, or manipulate information on adversary networks) and not about cyberexploitation (cyber operations involving theft, intelligence-gathering, and the like on digital networks).

The Sony hack certainly disrupted and destroyed the film studio’s networks, even while exposing a bunch of embarrassing intelligence. But thus far, we’re proceeding as if China hasn’t done that to “us” (to the extent a Japanese-owned film studio counts as the US); North Korea has. We don’t even ever talk about whether China, in addition to robbing the F-35 program blind, also sabotaged it; I remain agnostic about whether the US defense industry needed China’s help to sabotage the program, but China definitely had the persistence in networks to sabotage key parts that have since proven faulty. Plus, we’re taking on faith the claims that the NYSE/United outages that happened on the same day are really unrelated, and curiously we’re not talking about the serial air travel outages we’ve experienced of late (after United, the FAA and then American went down because of “software problems”). I would suggest that the IC may have reason to have urgent concern about China’s ability and willingness to sabotage us, above and beyond its IP theft and intelligence theft, but if it does it’s not telling us.

But let’s take a step back. Since when did we conflate IP theft and the OPM hack? Those are different problems, and I’d really love to have a discussion — which surely wouldn’t happen with any government officials in any unclassified forum — about whether the OPM hack is now considered a more urgent threat than serial Chinese IP theft, or whether Clapper is being honest in consistently dismissing it as similar behavior to what we do. Sure, IP theft used to be the most urgent issue, but did that change when China absconded with a database of much of our clearance data? The relative urgency of the two seems an utterly critical thing to understand, given that China pwned us in the OPM hack, and now, 3 months after discovering that, we’re signing a cyber agreement.

All the more so given that the OPM hack goes right to the issue of anonymity though not, perhaps, verifiability.

In his piece, Goldsmith is a bit more trusting of the Clapper claim — which I laid out here — that we lost technical accesses in the wake of the Snowden leaks. I think that may well be the case, but it’s just as likely that’s disinformation, either for Congress in advance of the Xi Jinping visit, or for the Chinese. Goldsmith presents that as one more reason why we can’t verify any agreement, and therefore it will be largely worthless.

But does it matter that the OPM hack created symmetry in transparency of personnel (which is different from technical accesses) between China and the US? Does it matter that, with the OPM hack, the Chinese largely replicated our ability to create fingerprints using XKS, and through that figure out who in China was doing what?

That is, we may not have full attribution ability right now — in Clapper’s description it sounded like we could consistently ID tools and persona, but not necessarily tie that persona back to the Chinese state, though, again, that may have been disinformation. But both the US (through XKS) and China (through OPM) have achieved a kind of transparency in personnel.

Which brings me to my central question, in response to Goldsmith’s claim this agreement is pretty meaningless because of the attribution and verification problems. He may well be right it will be a mostly symbolic agreement (though if we move towards norms that may be a positive step).

But until we tease out the real interaction of the old problem — the IP theft — with the new one — that China has our intelligence community by the balls — and until we develop more certainty that some other acts of sabotage aren’t, in fact, cyberattacks, I’m not sure we’re really understanding the dynamics behind the agreement.

Just as importantly, it seems, we need to understand how a new kind of personnel transparency affects our expectations about verification or trust in cyberspace. I don’t know whether this kind of symmetry changes the considerations on verification or not, but it does seem a relevant question.

National Counterintelligence Director Evanina about OPM Breach: “Not My Job”

I’ve been tracking Ron Wyden’s efforts to learn whether the National Counterintelligence and Security Center had anticipated how much of a counterintelligence bonanza the Office of Personnel Management’s databases would be. Wyden sent National Counterintelligence Executive William Evanina a set of questions last month.

  1. Did the NCSC identify OPM’s security clearance database as a counterintelligence vulnerability prior to these security incidents?
  2. Did the NCSC provide OPM with any recommendations to secure this information?
  3. At least one official has said that the background investigation information compromised in the second OPM hack included information on individuals as far back as 1985. Has the NCSC evaluated whether the retention requirements for background investigation information should be reduced to mitigate the vulnerability of maintaining personal information for a significant period of time? If not, please explain why existing retention periods are necessary?

Evanina just responded. His answer to the first two questions was basically, “Not my job.”

In response to the first two questions, under the statutory structure established by the Federal Information Security Management Act of 2002 (FISMA), as amended, executive branch oversight of agency information security policies and practices rests with the Office of Management and Budget (OMB) and the Department of Homeland Security (DHS). For agencies with Inspectors General (IG) appointed under the Inspector General Act of 1978 (OPM is one of those agencies), independent annual evaluations of each agency’s adherence to the instructions of OMB and DHS are carried out by the agency’s IG or an independent external auditor chosen by the agency’s IG. These responsibilities are discussed in detail in OMB’s most recent annual report to Congress on FISMA implementation. The statutory authorities of the National Counterintelligence Executive, which is part of the NCSC, do not include either identifying information technology (IT) vulnerabilities to agencies or providing recommendations on how to secure their IT systems.

Of course, this doesn’t really answer the question, which is whether Evanina — or the NCSC generally — had identified OPM’s database full of clearance information as a critical CI asset. Steven Aftergood has argued it should have been, according to the Office of the Director of National Intelligence’s definition if not its bureaucratic limits. Did the multiple IG reports showing OPM was vulnerable, going back to 2009 and continuing until this year, register on NCSC’s radar?

I’m guessing, given Evanina’s silence on that issue, the answer is no.

No, the folks in charge of CI didn’t notice that this database of millions of clearance holders’ records might be a juicy intelligence target. Not his job to notice.

Evanina’s response to the third question — whether the government really had to keep records going back to Reagan’s second term — was no more satisfying.

[T]he timelines for retention of personnel security files were established by the National Archives General Records Schedule 18, Item 22 (September 2014). While it is possible that we may incur certain vulnerabilities with the retention of background investigation information over a significant period of time, its retention has value for personnel security purposes. The ability to assess the “whole person” over a long period of time enables security clearance adjudicators to identify and address any issues (personnel security or counterintelligence-related) that may exist or may arise.

In other words, just one paragraph after having said it’s not his job to worry about the CI implications of keeping 21 million clearance holders’ records in a poorly secured database, the Counterintelligence Executive said the government needed to keep those records (because the government passed a policy deciding they’d keep those just a year ago) for counterintelligence purposes.

In a statement on the response, Wyden, like me, reads it as Evanina insisting this key CI role is not his job. To which Wyden adds, putting more data in the hands of these insecure agencies under CISA would only exacerbate this problem.

The OPM breach had a huge counterintelligence impact and the only response by the nation’s top counterintelligence officials is to say that it wasn’t their job. This is a bureaucratic response to a massive counter-intelligence failure and unworthy of individuals who are being trusted to defend America. While the National Counterintelligence and Security Center shouldn’t need to advise agencies on how to improve their IT security, it must identify vulnerabilities so that the relevant agencies can take the necessary steps to secure their data.

The Senate is now trying to respond to the OPM hack by passing a bill that would lead to more personal information being shared with these agencies. The way to improve cybersecurity is to ensure that network owners take responsibility for plugging security holes, not encourage the sharing of personal information with agencies that can’t protect it adequately.

Somehow, the government kept a database full of some of its most important secrets on an insecure server, and the guy in charge of counterintelligence can only respond that we had to do that to serve counterintelligence purposes.