There are two things Cy Vance (writing with Paris’ Chief Prosecutor, the City of London Police Commissioner, and Javier Zaragoza, Chief Prosecutor of Spain’s High Court) doesn’t mention in his op-ed calling for back doors in Apple and Google phones.
iPhone theft and bankster crime.
The former is a huge problem in NYC, with 8,465 iPhone thefts in 2013, which made up 18% of the grand larcenies in the city. The number came down 25% last year (and the crime started shifting to Samsung products), largely due to Apple’s implementation of a Kill Switch, but that still leaves 6,000 thefts a year — as compared to the 74 iPhones Vance says NYPD wasn’t able to access (he’s silent about how many investigations, besides the 3 he describes, that actually thwarted; he also ignores default cloud storage completely in his op-ed). The numbers will come down still further now that Apple has made the Kill Switch (like encryption) a default setting on the iPhone 6. But there are still a lot of thefts, which can result not only in a phone being wiped and resold, but also in a stolen identity. Default encryption will protect against both kinds of crime. In other words, Vance simply ignores how encryption can help prevent a crime that has been rampant in NYC in recent years.
Bankster crime is an even bigger problem in NYC, with a number of the world’s most sophisticated Transnational Crime Organizations, doing trillions of dollars of damage, headquartered in the city. These TCOs are even rolling out their very own encrypted communication system, which Elizabeth Warren fears may eliminate the last means of holding them accountable for their crimes. But Vance — one of the prosecutors who should be cracking down on this crime — not only doesn’t mention their special encrypted communication system, he doesn’t mention their crimes at all.
There are other silences and blind spots in Vance’s op-ed, too. The example he starts with — a murder in Evanston, not in any of the signatories’ jurisdictions — describes two phones that couldn’t be accessed. He remains silent about the other evidence available by other means, such as via the cloud. Moreover, he assumes the evidence will be on the smart phone, which may not be the case. It’s also notable that Vance focuses on a black murder victim, because racial disparities in policing, not encryption, are often a better explanation for why murders of black men remain unsolved 2 months later. Given NYPD’s own crummy record at investigating and solving the murders of black and Latino victims, you’d think Vance might worry more about having NYPD reassign its detectives accordingly than about stripping the privacy of hundreds of thousands.
Then Vance goes on to describe how much smart phone data they’re still getting.
In France, smartphone data was vital to the swift investigation of the Charlie Hebdo terrorist attacks in January, and the deadly attack on a gas facility at Saint-Quentin-Fallavier, near Lyon, in June. And on a daily basis, our agencies rely on evidence lawfully retrieved from smartphones to fight sex crimes, child abuse, cybercrime, robberies or homicides.
Again, Vance is silent about whether this data is coming off the phone itself, or off the cloud. But it is better proof that investigators are still getting the data (perhaps via the cloud storage he doesn’t want to talk about?), not that they’re being thwarted.
Like Jim Comey, Vance claims to want to have a discussion weighing the “marginal benefits of full-disk encryption and the need for local law enforcement to solve and prosecute crimes.” But his op-ed is so dishonest, so riven with obvious holes, it raises real questions about both his honesty and basic logic.
When Michael Chertoff made the case against back doors, he noted that if the government moved to require back doors, it would leave just the bad guys with encrypted communications.
The second thing is that the really bad people are going to find apps and tools that are going to allow them to encrypt everything without a back door. These apps are multiplying all the time. The idea that you’re going to be able to stop this, particularly given the global environment, I think is a pipe dream. So what would wind up happening is people who are legitimate actors will be taking somewhat less secure communications and the bad guys will still not be able to be decrypted.
I doubt he had the Transnational Crime Organizations on Wall Street in mind when he talked about the bad guys “still not be able to be decrypted.”
But HSBC, JP Morgan Chase, Citi, Deutsche Bank, Goldman Sachs and the other big banks supporting Symphony Communications — a secure, cloud-based communications system about to roll out — are surely among the world’s most hard-core recidivists, and their crime does untold amounts of damage to ordinary people around the globe.
Which is why I’m so amused that Elizabeth Warren has made a stink about the imminent rollout of Symphony and whether it will affect banks’ ability to evade what scant law enforcement might be thrown their way.
I have concerns about how the biggest banks’ use of this new communications tool will impact compliance and enforcement by the Department of Justice [Warren sent versions of the letter to 6 regulators] at the federal level.
My concerns are exacerbated by Symphony’s publicly available descriptions of the new communications system, which appear to put companies on notice — with a wink and a nod — that they can use Symphony to reduce compliance and enforcement concerns. Symphony claims that “[i]n the past, communication tools designed for the financial services sector were limited in reach and effectiveness by strict regulatory compliance … We’re changing the communications paradigm.” The company’s website boasts that it has special tools to “prevent government spying,” that “there are no backdoors,” and that “Symphony has designed a specific set of procedures to guarantee that data deletion is permanent.”
Warren is right to be concerned. These are serial conspiracists on a global scale, and (as Warren notes elsewhere) they’ve only been caught — to the extent that hand slaps count as being caught — when DOJ found the chat rooms in which they’ve colluded.
That said, the banks, too, have real reason to be concerned. The stated reason they give for pushing this project is Bloomberg’s spying on them — when they were using Bloomberg chat — for reporting purposes, which was revealed two years ago. The reference to government spying goes beyond US adversaries, though I’m sure both real adversaries, like China, and competitors, like the EU, are keeping watch on the banks to the extent they can. But the US has spied on the banks, too. As the spy agency did with Google, the NSA spied on SWIFT even though it also had a legal means to get that data. I wouldn’t be surprised if the rise in bank sanctions violations caught in recent years stemmed from spying that is completely necessary if you’re going to impose sanctions, but spying that would compromise the banks, too. Remember, too, that the Treasury Department has, at least as of recently, never complied with EO 12333’s requirement for minimization procedures to protect US persons, which would include banks.
And there have even been cases of hacker–insider trading schemes of late.
So banks are right to want secure communications. And while these banks are proven crooks — and should be every bit the worry to Jim Comey as ISIS’s crappier encryption program, if Comey believes in hunting crime — the banks should be reined in via other means, not by making them insecure.
If we’re going to pretend — and it is no more than make-believe — that the banks operate with integrity, then they need to have secure communications. But without that make-believe, a lot of the important fictions America tells itself about capitalism start to fall apart.
Which is my way of saying that the 6 regulators need to think through how they can continue to regulate recidivist crooks who have their own secure messaging system, but that the recidivist crooks probably need a secure messaging system (though having their own might be a stretch).
If Jim Comey is going to bitch and moan about criminals potentially exploiting access to encrypted communications, then he should start his conversation with the banks, not Apple. If he remains silent about this gang and their secure communications, then he needs to concede, once and for all, that actual humans need to have access to the same privilege of secure communications.
See also District Sentinel’s piece on this topic.
In recent days, there have been reports that the same (presumed Chinese) hackers who stole vast amounts of data from the Office of Personnel Management have also hacked at least United Airlines and American. (Presuming the Chinese attribution is correct — and I believe it — I would be surprised if Chinese hackers hadn’t also tried to hack Delta, given that it has a huge footprint in Asia, including China; if that’s right and Delta managed to withstand the attack, we should find out how and why.)
Those hacks — and the presumption that the Chinese are stealing the data to flesh out their already detailed map of the activities of US intelligence personnel — have led a bunch of Cyber Information Sharing Act supporters (Susan Collins and Barb Mikulski have already voted for it, and Bill Nelson almost surely will, because he loves surveillance) to admit its inadequacy.
In recent months, hackers have infiltrated the U.S. air traffic control system, forced airlines to ground planes and potentially stolen detailed travel records on millions of people.
Yet the industry lacks strict requirements to report these cyber incidents, or even adhere to specific cybersecurity standards.
“There should be a requirement for immediate reporting to the federal government,” Sen. Susan Collins (R-Maine), who chairs the Appropriations subcommittee that oversees the Federal Aviation Administration (FAA), told The Hill.
“We need to address that,” agreed Sen. Bill Nelson (D-Fla.), the top Democrat on the Senate Commerce Committee.
“We need a two-way exchange of information so that when a threat is identified by the private sector, it’s shared with the government, and vice versa,” Collins added. “That’s the only way that we have any hope of stopping further breaches.”
That’s why, Nelson said, the airline industry needs mandatory, immediate reporting requirements.
“All the more reason for a cybersecurity bill,” he said.
But for years, Congress has been unsuccessful in its efforts.
Sen. Barbara Mikulski (D-Md.), the Senate Appropriations Committee’s top Democrat, tried three years ago to move a cyber bill that would have included rigid breach reporting requirements for critical infrastructure sectors, including aviation.
“We were blocked,” she told The Hill recently. “So it’s time for not looking at an individual bill, but one that’s overall for critical infrastructure.”
So now we have some Senators calling for heightened cybersecurity standards for cars, and different, hawkish Senators calling for heightened cybersecurity sharing (though they don’t mention security standards) for airlines. Bank regulators are already demanding higher standards from them.
And someday soon someone will start talking about mandating response time for operating system fixes, given the problems with Android updates.
Maybe the recognition that one industry after another requires not immunity, but an approach to cybersecurity that actually requires some minimal actions from the companies in question, ought to lead Congress to halt before passing CISA and granting corporations immunity, and to think more seriously about what a serious approach to our cyber problems might look like.
That said, note that the hawks in this story are still adopting what is probably an approach of limited use here. Indeed, the story is notable in that it cites a cyber contractor, Jeff Schmidt of JAS Global Advisors, actually raising questions about whether mandated info-sharing (with the government, not the public) would be all that effective.
If OPM has finally demonstrated the real impact of cyberattacks, then maybe it’s time to have a real discussion of what might help to keep this country safe — because simply immunizing corporations is not going to do it.
Wired’s hack-of-the-day story reports that researchers hacked a Tesla (unlike the Chrysler hack, it required access to the vehicle once, though the Tesla also has a browser vulnerability that might not require direct access).
Two researchers have found that they could plug their laptop into a network cable behind a Model S’ driver’s-side dashboard, start the car with a software command, and drive it. They could also plant a remote-access Trojan on the Model S’ network while they had physical access, then later remotely cut its engine while someone else was driving.
The story notes how much more proactive Tesla was in patching this problem than Chrysler was.
The researchers found six vulnerabilities in the Tesla car and worked with the company for several weeks to develop fixes for some of them. Tesla distributed a patch to every Model S on the road on Wednesday. Unlike Fiat Chrysler, which recently had to issue a recall for 1.4 million cars and mail updates to users on a USB stick to fix vulnerabilities found in its cars, Tesla has the ability to quickly and remotely deliver software updates to its vehicles. Car owners only have to click “yes” when they see a prompt asking if they want to install the upgrade.
In my understanding, Tesla was able to do this both because it responded right away to implement the fix, and because it had the technical ability to distribute the update in a way that was usable for end users. Chrysler deserves criticism on the former count (though, at least according to Chrysler, it did start work on a fix right away; it just didn’t deploy it), but the latter is a problem that will take some effort to fix.
Which is one reason I think a better comparison with Tesla’s quick fix is Google’s delayed fix for the Stagefright vulnerability. As the researcher who found it explained, Google addressed the vulnerability internally immediately, just like Tesla did.
Google has moved quickly to reassure Android users following the announcement of a number of serious vulnerabilities.
The Google Stagefright Media Playback Engine Multiple Remote Code Execution Vulnerabilities allow an attacker to send a media file over a MMS message targeting the device’s media playback engine, Stagefright, which is responsible for processing several popular media formats.
Attackers can steal data from infected phones, as well as hijacking the microphone and camera.
Android is currently the most popular mobile operating system in the world — meaning that hundreds of millions of people with a smartphone running Android 2.2 or newer could be at risk.
Joshua Drake, a mobile security expert with Zimperium, reports:
A fully weaponized successful attack could even delete the message before you see it. You will only see the notification…Unlike spear-phishing, where the victim needs to open a PDF file or a link sent by the attacker, this vulnerability can be triggered while you sleep. Before you wake up, the attacker will remove any signs of the device being compromised and you will continue your day as usual – with a trojaned phone.
Zimperium say that “Google acted promptly and applied the patches to internal code branches within 48 hours, but unfortunately that’s only the beginning of what will be a very lengthy process of update deployment.”
But with Android the updates need to go through manufacturers, which creates a delay — especially given fairly crummy updating regimes by a number of top manufacturers.
The experience with this particular vulnerability may finally be pushing Android-based manufacturers to fix their update process.
It’s been 10 days since Zimperium’s Joshua Drake revealed a new Android vulnerability called Stagefright — and Android is just starting to recover. The bug allows an attacker to remotely execute code through a phony multimedia text message, in many cases without the user even seeing the message itself. Google has had months to write a patch and already had one ready when the bug was announced, but as expected, getting the patch through manufacturers and carriers was complicated and difficult.
But then, something unexpected happened: the much-maligned Android update system started to work. Samsung, HTC, LG, Sony and Android One have already announced pending patches for the bug, along with a device-specific patch for the Alcatel Idol 3. In Samsung’s case, the shift has kicked off an aggressive new security policy that will deploy patches month by month, an example that’s expected to inspire other manufacturers to follow suit. Google has announced a similar program for its own Nexus phones. Stagefright seems to have scared manufacturers and carriers into action, and as it turns out, this fragmented ecosystem still has lots of ways to protect itself.
I make this comparison for two reasons. One, if Google — whose customers have the hypothetical ability to send out remote patches, even if they’ve long neglected that ability — still doesn’t have this fixed, it’s unsurprising that Chrysler doesn’t yet.
But some of the additional challenges Chrysler faces, which Tesla largely avoids, stem from the industry’s fragmentation. Chrysler’s own timeline of its vulnerability describes a “third party” discovering the vulnerability (not the hackers), and a “supplier” fixing it.
In January 2014, through a penetration test conducted by a third party, FCA US LLC (“FCA US”) identified a potential security vulnerability pertaining to certain vehicles equipped with RA3 or RA4 radios.
A communications port was unintentionally left in an open condition allowing it to listen to and accept commands from unauthenticated sources. Additionally, the radio firewall rules were widely open by default which allowed external devices to communicate with the radio. To date, no instances related to this vulnerability have been reported or observed, except in a research setting.
The supplier began to work on security improvements immediately after the penetration testing results were known in January 2014.
But it’s completely unclear whether that “third party” is the “supplier” in question. Which means it’s unclear whether this was found in the supplier’s normal testing process or in something else.
One reason cars are particularly difficult to test is that so many different suppliers provide parts which don’t get tested (or even adequately specced) in an integrated fashion.
Then, if you need to fix something you can’t send out over a satellite or Internet network, you’re dealing with the — in many cases — archaic relationships car makers have with dealers, not to mention the limitations of dealer staff and equipment to make the fix.
I don’t mean to excuse the automotive industry — they’re going to have to fix these problems (and the same problems lie behind fixing some of the defects tied to code that doesn’t stem from hacks, too, such as Toyota’s sudden acceleration problem).
It’s worth noting, however, how simplified supply and delivery chains make fixing a problem a lot easier for Tesla than it is for a number of other entities, both in and outside of the tech industry.
UPDATE — 4:30 PM EDT —
Hey, it’s Rayne here, adding my countervailing two cents (bitcoins?) to the topic after Marcy and I exchanged a few emails about this topic. I have a slightly different take on the situation since I’ve done competitive intelligence work in software, including open source models like Android.
Comparing Fiat Chrysler’s and Google’s Android risks, the size and scale of the exposures are a hell of a lot different. There are far more Android devices exposed than Chrysler car models at risk — more than 1 billion Android devices shipped annually around the globe as of 4Q2014.
Hell, daily activations of Android devices in 2013 were 1.2 million devices per day — roughly the same number as all the exposed Chrysler vehicles on the road, subject to recall.
Google should have a much greater sense of urgency here due to the size of the problem.
Yet the chances of a malware attack on an Android device actually causing immediate mortal threat to one or more persons are very low compared to the severity of the Chrysler hack. Could a hacker tinker with household appliances attached via Android? It’s possible — but any outcome now is very different from a hacker taking over and shutting down a vehicle operating at high speed in heavy traffic, versus shutting off a Philips remote-controlled Hue lamp or a Google Nest thermostat operating in the Internet of Things. The disparity in annoyance versus potential lethality may explain why Google hasn’t acted as fast as Tesla — but it doesn’t explain at all why Chrysler didn’t handle announcing their vulnerability differently. Why did they wait nearly a year to discuss it in public?
Dianne Feinstein just gave a long speech on the Senate floor supporting the Cyber Information Sharing Act.
She listed off a series of shocking hacks that happened in the last year or so — though she made no effort to show (nor even claimed) that CISA would have prevented any of them.
She listed some of the 56 corporations and business organizations that support the bill.
Most interestingly, she boasted that yesterday she received a letter from GM supporting the bill. We should pass CISA, Feinstein suggests, because General Motors, on August 4, 2015, decided to support the bill.
I actually think that’s reason to oppose the bill.
As I have written elsewhere — most recently this column at the DailyDot — one of my concerns about the bill is the possibility that by sharing data under the immunity afforded by the bill, corporations might dodge liability where it otherwise might serve as necessary safety and security leverage.
Immunizing corporations may make it harder for the government to push companies to improve their security. As Wyden explained, while the bill would let the government use data shared to prosecute crimes, the government couldn’t use it to demand security improvements at those companies. “The bill creates what I consider to be a double standard—really a bizarre double standard in that private information that is shared about individuals can be used for a variety of non-cyber security purposes, including law enforcement action against these individuals,” Wyden said, “but information about the companies supplying that information generally may not be used to police those companies.”
Financial information-sharing laws may illustrate why Wyden is concerned. Under that model, banks and other financial institutions are obligated to report suspicious transactions to the Treasury Department, but, as in CISA, they receive in return immunity from civil suits as well as consideration in case of sanctions, for self-reporting. “Consideration,” meaning that enforcement authorities take into account a financial institution’s cooperation with the legally mandated disclosures when considering whether to sanction them for any revealed wrongdoing. Perhaps as a result, in spite of abundant evidence that banks have facilitated crimes—such as money laundering for drug cartels and terrorists—the Department of Justice has not managed to prosecute them. When asked during her confirmation hearing why she had not prosecuted HSBC for facilitating money laundering when she presided over an investigation of the company as U.S. Attorney for the Eastern District of New York, Attorney General Loretta Lynch said there was not sufficient “admissible” evidence to indict, suggesting they had information they could not use.
In the same column, I pointed out the different approach to cybersecurity — for cars at least — of the SPY Car Act, introduced by Ed Markey and Richard Blumenthal, which affirmatively requires certain cybersecurity and privacy protections.
Increased attention on the susceptibility of networked cars—heightened by but not actually precipitated by the report of a successful remote hack of a Jeep Cherokee—led two other senators, Ed Markey and Richard Blumenthal, to adopt a different approach. They introduced the Security and Privacy in Your Car Act, which would require privacy disclosures, adequate cybersecurity defenses, and additional reporting from companies making networked cars and also require that customers be allowed to opt out of letting the companies collect data from their cars.
The SPY Car Act adopts a radically different approach to cybersecurity than CISA in that it requires basic defenses from corporations selling networked products. Whereas CISA supersedes privacy protections for consumers like the Electronic Communications Privacy Act, the SPY Car Act would enhance privacy for those using networked cars. Additionally, while CISA gives corporations immunity so long as they share information, SPY Car emphasizes corporate liability and regulatory compliance.
I’m actually not sure how you could have both CISA and the SPY Car Act, because the former’s immunity would undercut the regulatory limits of the latter. (And I asked both Markey’s and Blumenthal’s offices, but they blew off repeated requests for an answer on this point.)
Which brings me back to GM’s decision — yesterday!!! — to support CISA.
The hackers who remotely hacked a car used a Jeep Cherokee. But an analysis they did last year found the Cadillac Escalade to be the second most hackable car among those they reviewed (and I have reason to believe there are other GM products that are probably even more hackable).
So … hackers reveal they can remotely hack cars on July 21; Markey introduced his bill on the same day. And then on August 4, GM for the first time signs up for a bill that would give them immunity if they start sharing data with the government in the name of cybersecurity.
Now maybe I’m wrong in my suspicion that CISA’s immunity would provide corporations a way to limit their other liability for cybersecurity so long as they had handed over a bunch of data to the government, even if it incriminated them.
But we sure ought to answer that question before we go immunizing corporations whose negligence might leave us more open to attack.
Sheldon Whitehouse just attempted (after 1:44) to rebut an epic rant from John McCain (at 1:14) in which the Arizona Senator suggested anyone who wanted to amend the flawed Cyber Information Sharing Act wasn’t serious about national security.
Whitehouse defended his two amendments first by pointing out that McCain likes and respects the national security credentials of both his co-sponsors (Lindsey Graham and Roy Blunt).
Then Whitehouse said, “I believe both of the bills [sic] have now been cleared by the US Chamber of Commerce, so they don’t have a business community objection.”
Perhaps John McCain would be better served turning himself purple (really! watch his rant!) attacking the very notion that the Chamber of Commerce gets pre-veto power over a bill that (according to John McCain) is utterly vital for national security.
Even better, maybe John McCain could turn himself purple suggesting that the Chamber needs to step up to the plate and accept real responsibility for making this country’s networks safer, rather than just using our cybersecurity problems as an opportunity to demand immunity for yet more business conduct.
If this thing is vital for national security — this particular bill is not, but McCain turned himself awfully purple — then the Chamber should just suck it up and meet the requirements to protect the country decided on by the elected representatives of this country.
Yet instead, the Chamber apparently gets to pre-clear a bill designed to spy on the Chamber’s customers.
Most outlets that commented on DHS’ response to Al Franken’s questions about CISA focused on their concerns about privacy.
The authorization to share cyber threat indicators and defensive measures with “any other entity or the Federal Government,” “notwithstanding any other provision of law” could sweep away important privacy protections, particularly the provisions in the Stored Communications Act limiting the disclosure of the content of electronic communications to the government by certain providers. (This concern is heightened by the expansive definitions of cyber threat indicators and defensive measures in the bill. Unlike the President’s proposal, the Senate bill includes “any other attribute of a cybersecurity threat” within its definition of cyber threat indicator and authorizes entities to employ defensive measures.)
To require sharing in “real time” and “not subject to any delay [or] modification” raises concerns relating to operational analysis and privacy.
First, it is important for the NCCIC to be able to apply a privacy scrub to incoming data, to ensure that personally identifiable information unrelated to a cyber threat has not been included. If DHS distributes information that is not scrubbed for privacy concerns, DHS would fail to mitigate and in fact would contribute to the compromise of personally identifiable information by spreading it further. While DHS aims to conduct a privacy scrub quickly so that data can be shared in close to real time, the language as currently written would complicate efforts to do so. DHS needs to apply business rules, workflows and data labeling (potentially masking data depending on the receiver) to avoid this problem.
None of those outlets noted that DOJ’s Inspector General cited privacy concerns among the reasons why private sector partners are reluctant to share data with FBI.
So the limited privacy protections in CISA are actually a real problem with it — one that changes in a manager’s amendment (the most significant being a limit on uses of that data to cyber crimes, rather than the broad range of felonies currently in the bill) don’t entirely address.
But I think this part of DHS’ response is far more important to the immediate debate.
Finally the 90-day timeline for DHS’s deployment of a process and capability to receive cyber threat indicators is too ambitious, in light of the need to fully evaluate the requirements pertaining to that capability once legislation passes and build and deploy the technology. At a minimum, the timeframe should be doubled to 180 days.
DHS says the bill is overly optimistic about how quickly a new cybersharing infrastructure can be put in place. I’m sympathetic with their complaint, too. After all, if it takes NSA 6 months to set up an info-sharing infrastructure for the new phone dragnet created by USA Freedom Act, why do we think DHS can do the reverse in half the time?
Especially when you consider DHS’ concerns about the complexity added because CISA permits private sector entities to share with any of a number of government agencies.
Equally important, if cyber threat indicators are distributed amongst multiple agencies rather than initially provided through one entity, the complexity–for both government and businesses–and inefficiency of any information sharing program will markedly increase; developing a single, comprehensive picture of the range of cyber threats faced daily will become more difficult. This will limit the ability of DHS to connect the dots and proactively recognize emerging risks and help private and public organizations implement effective mitigations to reduce the likelihood of damaging incidents.
DHS recommends limiting the provision in the Cybersecurity Information Sharing Act regarding authorization to share information, notwithstanding any other provision of law, to sharing through the DHS capability housed in the NCCIC.
Admittedly, some of this might be attributed to bureaucratic turf wars — albeit turf wars that those who’d prefer DHS do a privacy scrub before FBI or NSA get the data ought to support. But DHS is also making a point about building complexity into a data sharing portal that recreates one that already exists with less complexity (as well as some anonymizing and minimization that might be lost under the new system). That complexity is going to make the whole thing less secure, just as we’re coming to grips with how insecure government networks are. It’s not clear at all why a new portal needs to be created — one that is more complex, that involves agencies like the Department of Energy (which is cybersprinting backwards on its own security) at the front end of that complexity, and that lacks some safeguards that are in DHS’ current portal.
More importantly, that complexity, that recreation of something that already exists — that’s going to take six months of DHS’s time, when it should instead be focusing on shoring up government security in the wake of the OPM hack.
Unless Congress wants to give the agencies unlimited resources to focus on cyberdefense, they will face real choices about what the top priority should be. And while DHS didn't say it, it sure seems to me that CISA would require reinventing some wheels, and making them more complex along the way, at a time when DHS (and everyone in government focused on cybersecurity) has better things to be doing.
Congress is already cranky that the Administration's month-long cybersprint in the wake of the OPM hack took two months and turned into a middle distance run. Why are they demanding DHS spend 6 more months recreating wheels before fixing core vulnerabilities?
Way down in the second-to-last paragraph of this NYT piece claiming the US will retaliate against China for the OPM hack, national security reporter David Sanger makes this claim about the hack, citing experts affiliated with an agency that aspires to "Collect it all."
Instead, the goal was espionage, on a scale that no one imagined before.
He follows it — he ends the entire article — with uncritical citation of this statement from a senior intelligence official.
“This is one of those cases where you have to ask, ‘Does the size of the operation change the nature of it?’ ” one senior intelligence official said. “Clearly, it does.”
Several paragraphs earlier, the reporter who did a lot of the most important work exposing the first-of-its-type StuxNet attack makes this claim. (NYLibertarian noted this earlier today.)
The United States has been cautious about using cyberweapons or even discussing it.
In other words, built into this story, written by a person who knows better, is a fiction about the US’ own aggressive spying and cyberwar. Sanger even suggests that the sensors we’ve got buried in Chinese networks exist solely to warn of attacks, and not to collect information just like that which China stole from OPM.
So if someone creating either a willful or lazy fiction also says this …
That does not mean a response will happen anytime soon — or be obvious when it does. The White House could determine that the downsides of any meaningful, yet proportionate, retaliation outweigh the benefits, or will lead to retaliation on American firms or individuals doing work in China. President Obama, clearly seeking leverage, has asked his staff to come up with a more creative set of responses.
… We’d do well to ask whether this is nothing more than propaganda, an effort to dissipate calls for a more aggressive response from Congress and others.
There is, however, one other underlying potential tension here. Yesterday, Aram Roston explained why some folks who work at NSA may be even more dissatisfied than they were when a contractor exposed their secrets for the world to see.
Employees at the National Security Agency complain that the director, Adm. Michael Rogers, is neglecting the intelligence agency in favor of his other job, running the military’s Cyber Command, three sources with deep knowledge of the NSA have told BuzzFeed News.
“He’s spending all his time at CYBERCOM,” one NSA insider said. “Morale is bad because of a lack of leadership.” A second source, who is close to the agency, agreed that employees are complaining that Rogers doesn’t seem to focus on leading the agency. A third said “there is that vibe going on. But I don’t know if it’s true.”
[O]ne of the NSA sources said Rogers appears to be focusing on CYBERCOM not just because the new organization is growing rapidly but also because it has a more direct mission and simpler military structure than the complex and scandal-ridden NSA in its post-Snowden era. That makes focusing on CYBERCOM easier, that source said, “than trying to redesign the National Security Agency.”
If true (note, one of Roston's sources suggests it may not be), it suggests one of the most important advisors on how to respond to China's pwning of the US is institutionally limiting his focus to his offensive role, not his information collection (to say nothing of defensive) role. So if Roston's sources are correct, we are in a very dangerous position, with a guy who is neglecting other potential options driving the discussion about how to respond to the OPM hack.
And there’s one detail in Sanger’s story that suggests Roston’s sources may be right — where Rogers describes “creating costs” for China, but those costs consist of an escalation of what is, in fact, a two-sided intelligence bonanza.
Admiral Rogers stressed the need for “creating costs” for attackers responsible for the intrusion,
Those of us without the weapons Rogers has at his disposal think of other ways of “creating costs” — of raising the costs on the front end, to make spies adopt a more targeted approach to their spying. Those methods, too, might be worth considering in this situation. If we’re going to brainstorm about how to deal with the new scenario where both the world’s major powers have adopted a bulk collection approach, maybe the entire world would be safer thinking outside the offensive weapon box?
Earlier this week, I noted that of the seven agencies that would automatically get cybersecurity data shared under the Cybersecurity Information Sharing Act, several had cyberpreparedness similar to, or even worse than, that of the Office of Personnel Management, from which China stole entire databases of information on our cleared personnel.
To make that argument, I used data from the FISMA report released in February. Since then — or rather, since the revelation of the OPM hack — the Administration has been pushing a “30 day sprint” to try to close the gaping holes in our security.
And there have been significant results (though note, the 30 day sprint turned into a 60 day middle distance run), particularly from OPM, Interior (which hosted OPM’s databases), and — two of those CISA data sharing agencies — DHS and Treasury.
Whoa! Check out that spike! Congratulations to those who worked hard to make this improvement.
But when you look at the underlying data, things aren’t so rosy.
We are apparently supposed to be thrilled that DOD now requires strong authentication for 58% of its privileged users (people like Edward Snowden), up 20 percentage points from the earlier 38%. Far more of DOD's unprivileged users (people like Chelsea Manning?) — 83% — are required to use strong authentication, but that number declined from a previous 88%.
More remarkable, however, is that during a 60-day sprint (originally billed as 30 days) to plug major holes, the Department of Energy actually backslid, with strong authentication going from 34% to 11%. Admittedly, more of DoE's privileged users must now use strong authentication, but still only 13% in total.
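To keep the direction of each change straight (these are percentage-point shifts, not percent changes), here is a quick sketch using only the figures cited above; the labels are mine:

```python
# Percentage-point changes in strong-authentication coverage,
# using the numbers cited in the post (illustrative only).
figures = {
    # label: (before %, after %)
    "DOD privileged":   (38, 58),
    "DOD unprivileged": (88, 83),
    "DoE overall":      (34, 11),
}

for label, (before, after) in figures.items():
    delta = after - before  # positive = improvement, negative = backsliding
    print(f"{label}: {before}% -> {after}% ({delta:+d} percentage points)")
```

Run this way, DoE's regression stands out as the largest single move in either direction.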
DOJ (at least the FBI, and probably through it other parts of DOJ, will receive this CISA information) also backslid overall, though with a huge improvement for privileged users. And Commerce (another CISA recipient agency) had a small regression for privileged users.
There may be explanations for this, such as users being migrated from a less effective two-factor program to a better one.
But it does trouble me that an agency as central to our national security as Department of Energy is regressing even during a period of concerted focus.
DOJ’s Inspector General just released a report on how well FBI’s cybersecurity initiative has been going. In general, it finds that the FBI has improved its ability to investigate cyberattacks.
But among the most significant challenges facing the FBI is in two-way information sharing with the private sector.
You might think that the Cybersecurity Information Sharing Act — which, after all, aims to increase information sharing between the private sector and those in government who will investigate attacks — would help that.
On one count it would: private sector entities interviewed by the IG were reluctant to cooperate with the FBI because of FOIA concerns.
During our interviews with private sector individuals, we found that private sector entities are reluctant to share information, such as PII or sensitive or proprietary information, with the government because of concerns about how that information could be used or the possibility that it could be publicly released under the Freedom of Information Act (FOIA).26 One private sector professional told us that he had declined to be interviewed by the OIG due to FOIA concerns.
CISA would include a blanket exception from FOIA — which is not necessarily a good thing, but should placate those who have these concerns.
But other private sector entities expressed concerns about the multiple uses to which shared data would be put. They cited Snowden disclosures showing data might be used for other purposes.
In addition, several private sector individuals discussed with us the challenges in collaborating with the FBI in a “post-Snowden” era. One private sector individual emphasized that Snowden has redefined how the private sector shares information with the United States government. We were told by private industry representatives and the FBI that, following the Snowden disclosures, private sector entities have become more reluctant to share information with the United States government because they are uncertain as to how the information they provide will be used and are concerned about balancing national security and individual privacy interests.
The recent reports on the use of cyber signatures for upstream Section 702 collection show that the NSA and FBI might be able to use signatures to search all traffic (though I suspect FISC has put more limitations on this practice than is currently known).
Just as troubling, however, are the broad permissions under CISA to use the data turned over under the law for prosecutions on a range of crimes. Right now, ECPA has provided tech companies — at least the ones that pushed back on NSLs demanding Internet data — a way to protect their customers from fishing expeditions. CISA is voluntary (though I can imagine many ways pressure would be brought to participate), but it does undermine that system of protections for customers.
When commenting on this, Jim Comey apparently added proprietary information to the concerns of providers, along with the explicitly described "guard[ing]" of customer data.
The FBI Director has acknowledged private sector concerns related to proprietary information and the need to guard customer data and stated the FBI will do what it can to protect private sector privacy.27
Given NSA’s voracious use of any information it gets its hands on, and the broad permissions for information sharing in the bill, the protections for trade secrets may not be enough for the private sector, since it’s now clear the government, not just competitors, is exploiting trade secrets.
The IG ends this section urging the FBI to provide "appropriate assurances" about its handling of Personally Identifiable Information.
More generally, efforts to detect, prevent, and mitigate threats are hampered because neither the public nor private sector can see the whole picture.
The FBI Director further explained government lacks visibility into the many private networks maintained by companies in the United States, and the FBI “has information it cannot always share [with the private sector].” Consequently, each can see distinct types of cyber threats, but the information is not always visible to the other. We believe that the FBI should strengthen its outreach efforts to provide appropriate assurances regarding its handling of PII and proprietary information received from the private sector and work to reduce classification, where appropriate, of information in its possession in order to improve sharing and collaboration in both directions consistent with appropriate privacy and other limitations.
It is just my opinion, but I suspect CISA, as written, would further exacerbate concerns.
Finally, Inspector General Michael Horowitz’ statement releasing this report includes something not developed in the report itself, perhaps because it is a more recent concern: security of data shared with the federal government.
And, the FBI continues to face challenges relating to information sharing with private sector entities, in part because of concerns in the private sector about privacy and the security of sensitive information it shares with the government.
I’d be very interested in whether this stems just from trade secret concerns or from the concern that several of the agencies that would automatically get data shared with the government have their own cybersecurity challenges.