The Third Circuit just issued an important ruling holding that the Federal Trade Commission could sue Wyndham Hotels for having cybersecurity practices that did not deliver what its privacy policies promised. The opinion, written by Clinton appointee Thomas Ambro, laid out just how bad Wyndham’s cybersecurity was, even after it had been hacked twice. Ambro upheld the District Court’s decision that the FTC could claim that Wyndham had unfairly exposed its customers.
The Federal Trade Commission Act prohibits “unfair or deceptive acts or practices in or affecting commerce.” 15 U.S.C. § 45(a). In 2005 the Federal Trade Commission began bringing administrative actions under this provision against companies with allegedly deficient cybersecurity that failed to protect consumer data against hackers. The vast majority of these cases have ended in settlement.
Wyndham’s as-applied challenge falls well short given the allegations in the FTC’s complaint. As the FTC points out in its brief, the complaint does not allege that Wyndham used weak firewalls, IP address restrictions, encryption software, and passwords. Rather, it alleges that Wyndham failed to use any firewall at critical network points, Compl. at ¶ 24(a), did not restrict specific IP addresses at all, id. at ¶ 24(j), did not use any encryption for certain customer files, id. at ¶ 24(b), and did not require some users to change their default or factory-setting passwords at all, id. at ¶ 24(f). Wyndham did not respond to this argument in its reply brief.
Wyndham’s as-applied challenge is even weaker given it was hacked not one or two, but three, times. At least after the second attack, it should have been painfully clear to Wyndham that a court could find its conduct failed the cost-benefit analysis. That said, we leave for another day whether Wyndham’s alleged cybersecurity practices do in fact fail, an issue the parties did not brief. We merely note that certainly after the second time Wyndham was hacked, it was on notice of the possibility that a court could find that its practices fail the cost-benefit analysis.
The ruling holds out the possibility that threats of such actions by the FTC, which has been hiring superb security people in the last several years, might get corporations to adopt better cybersecurity and thereby make us all safer.
Which brings me to an issue I’ve been asking lots of lawyers about in other contexts, without getting a satisfactory answer.
The Cybersecurity Information Sharing Act prevents the federal government, as a whole, from bringing any enforcement actions against companies using cybersecurity threat indicators and defensive measures (or lack thereof!) turned over voluntarily under the act.
(D) FEDERAL REGULATORY AUTHORITY.—
(i) IN GENERAL.—Except as provided in clause (ii), cyber threat indicators and defensive measures provided to the Federal Government under this Act shall not be directly used by any Federal, State, tribal, or local government to regulate, including an enforcement action, the lawful activities of any entity, including activities relating to monitoring, operating defensive measures, or sharing cyber threat indicators.
(ii) EXCEPTIONS.—
(I) REGULATORY AUTHORITY SPECIFICALLY RELATING TO PREVENTION OR MITIGATION OF CYBERSECURITY THREATS.—Cyber threat indicators and defensive measures provided to the Federal Government under this Act may, consistent with Federal or State regulatory authority specifically relating to the prevention or mitigation of cybersecurity threats to information systems, inform the development or implementation of regulations relating to such information systems.
(II) PROCEDURES DEVELOPED AND IMPLEMENTED UNDER THIS ACT.—Clause (i) shall not apply to procedures developed and implemented under this Act.
Given this precedent, could Wyndham — and other negligent companies — pre-empt any such FTC actions simply by sharing promiscuously as soon as they discovered the hack?
Could the FTC still sue Wyndham for breaking the law by claiming its “operating defensive measures” were more than what they really were? Or would such suits — by any federal agency — be precluded under CISA, so long as the company shared its cyberattack data? Would CISA close off this promising new way to force companies to provide minimal cybersecurity?
Update: Paul Rosenzweig’s post on the FTC decision is worth reading. Like him, I agree that FTC doesn’t yet have the resources to be the police on this matter, though I do think they have the smarts on security, unlike most other agencies.
Steve Aftergood has posted a new directive from James Clapper mandating that Intelligence Community members warn individuals (be they corporate or natural persons) of a threat of death or serious bodily harm.
This Directive establishes in policy a consistent, coordinated approach for how the Intelligence Community (IC) will provide warning regarding threats to specific individuals or groups of intentional killing, serious bodily injury, and kidnapping.
The fine print on it is quite interesting. For example, if you’re a drug dealer, involved in violent crime, or at risk solely because you’re involved in an insurgency, the IC is not obliged to give you notice. Remember, the FBI did not alert members of Occupy Wall Street that someone was plotting to assassinate them. Did they (then) not do so because they considered Occupy an “insurgency”? Would they consider it one going forward?
But I’m most interested in what this should mean for hacking.
Here’s how the directive defines “serious bodily injury.”
Serious Bodily Injury means an injury which creates a substantial risk of death or which causes serious, permanent disfigurement or impairment.
As I have noted, NSA has secretly defined “serious bodily harm” to include threat to property — that is, threats to property constitute threats of bodily harm.
If so, a serious hack would represent a threat of bodily harm (and under NSA’s minimization procedures they could share this data). While much of the rest of the Directive talks about how to accomplish this bureaucratically (and the sources and methods excuses for not giving notice), this should suggest that if a company like Sony is at risk of a major hack, NSA would have to tell it (and the Directive states that the obligation applies for US persons and non-US persons, though Sony is in this context a US person).
So shouldn’t this amount to a mandate for cybersharing, all without the legal immunity offered corporations under CISA?
A few days ago the WaPo published a story on the OPM hack, focusing (as some earlier commentary already had) on the possibility China will alter intelligence records as a way to infiltrate agents or increase distrust.
It’s notable because it relies on the Director of the National Counterintelligence and Security Center, Bill Evanina. The article first presents his comments about that nightmare scenario — altered records.
“The breach itself is issue A,” said William “Bill” Evanina, director of the federal National Counterintelligence and Security Center. But what the thieves do with the information is another question.
“Certainly we are concerned about the destruction of data versus the theft of data,” he said. “It’s a different type of bad situation.” Destroyed or altered records would make a security clearance hard to keep or get.
And only then relays Evanina’s concerns about the more general counterintelligence concerns raised by the heist, that China will use the data to target people for recruitment. Evanina explains he’s more worried about those without extensive operational security training than those overseas who have that experience.
While dangers from the breach for intelligence community workers posted abroad have “the highest risk equation,” Evanina said “they also have the best training to prevent nefarious activity against them. It’s the individuals who don’t have that solid background and training that we’re most concerned with, initially, to provide them with awareness training of what can happen from a foreign intelligence service to them and what to look out for.”
Using stolen personal information to compromise intelligence community members is always a worry.
“That’s a concern we take seriously,” he said.
Curiously, given his concern about those individuals without a solid CI background, Evanina provides no hint of an answer to the questions posed to him in a Ron Wyden letter last week.
Evanina has asserted he’s particularly worried about the kind of people who would have clearance but not be in one of the better protected (CIA) databases. But was he particularly worried about those people — and therefore OPM’s databases — before the hack?
Beginning sometime in the late morning, hundreds of flights were affected by the problem. The FAA’s service was restored around 4:00 p.m. EDT, though it would take hours longer for the airlines to reschedule flights and flyers.
Although 492 flights were delayed and 476 flights were canceled, the FAA’s Twitter account did not mention the outage or mass flight disruptions until 4:06 p.m., when it said service had been restored.
In a tweet issued long after the outage began, the Federal Aviation Administration said, “The FAA is continuing its root cause analysis to determine what caused the problem and is working closely with the airlines to minimize impacts to travelers.”
The FAA’s Safety Briefing Twitter account made no mention at all of the outage, though it has advised of GPS system testing at various locations across the country.
Various news outlets offered conflicting accounts: first airports were blamed, then the FAA, and the public knew nothing at all except that they were stuck for an indeterminate period.
Get used to this. There’s no sign the FAA will change its communications practices after several air travel disruptions this year alone “due to technical issues” or whatever catchy nondescript phrase airlines, airports, or the government chooses to use.
Is this acceptable? Hell no. Just read the last version of WaPo’s article about the outage; the lack of communication causes as much difficulty as the loss of service. How can travelers make alternative plans when they hear nothing at all about the underlying problem? They’re stuck wherever they are, held hostage by crappy practices if not policies.
It doesn’t help that the media struggles to cover what appears to be a technology problem. The Washington Post went back and forth as to the underlying cause. The final version of an article about this disruption is clean of any mentions of the FAA’s En Route Automation Modernization (ERAM) system, though earlier versions mentioned an upgrade to or component of that system as suspect.
The business community is launching a big push for the Cybersecurity Information Sharing Act over the recess, with the Chamber of Commerce pushing hard and now the Financial Services Roundtable’s Tim Pawlenty weighing in today.
Pawlenty is fairly explicit about why banks want the bill: so that if they’re attacked and share data with the government, they cannot be sued for negligent maintenance of data.
“If I think you’ve attacked me and I turn that information over to the government, is that going to be subject to the Freedom of Information Act?” he said, highlighting a major issue for senators concerned about privacy.
“If so, are the trial lawyers going to get it and sue my company for negligent maintenance of data or cyber defenses?” Pawlenty continued. “Are my regulators going to get it and come back and throw me in jail, or fine me or sanction me? Is the public going to have access to it? Are my competitors going to have access to it? Are they going to be able to see my proprietary cyber systems in a way that will give up competitive advantage?”
CISA has been poorly framed, he explained.
“It should be called the cyber teamwork bill,” Pawlenty said.
As I’ve pointed out repeatedly, what the banks would get here is far more than they get under the Bank Secrecy Act, where they get immunity for sharing data, but are required to do certain things to protect against financial crimes.
Here, banks (and other corporations, but never natural people) get immunity without having to have done a damn thing to keep their customers safe.
Which is why CISA is counterproductive for cybersecurity.
When Jim Comey talks about wanting back doors into Apple products, he often claims that some software providers have managed to put back doors into allegedly secure products.
I keep thinking of that claim when I hear about the many privacy problems with Windows 10 — including the most recent report that it will send data to Microsoft even if you’ve disabled some of the spy features on the operating system. Is this the kind of thing Comey had in mind? Consider, for example, this clause of Microsoft’s Services Agreement covering automatic updates.
Sometimes you’ll need software updates to keep using the Services. We may automatically check your version of the software and download software updates or configuration changes, including those that prevent you from accessing the Services, playing counterfeit games, or using unauthorized hardware peripheral devices. You may also be required to update the software to continue using the Services.
Add that to this part of the Users Agreement, which permits Microsoft to retain, transmit, and reformat your content, in part “to protect you and the Services.”
To the extent necessary to provide the Services to you and others, to protect you and the Services, and to improve Microsoft products and services, you grant to Microsoft a worldwide and royalty-free intellectual property license to use Your Content, for example, to make copies of, retain, transmit, reformat, display, and distribute via communication tools Your Content on the Services.
The two together seem to broadly cover not just Microsoft sharing data with the government under CISA, but also Microsoft deploying the defensive measures the bill permits.
(1) IN GENERAL.—Notwithstanding any other provision of law, a private entity may, for cybersecurity purposes, operate a defensive measure that is applied to—
(A) an information system of such private entity in order to protect the rights or property of the private entity;
(B) an information system of another entity upon written consent of such entity for operation of such defensive measure to protect the rights or property of such entity; and
This Services Agreement would seem to imply consent for automatic updates, including those that disable what gets treated as a cybercrime under the bill (that is, counterfeit software), and a general consent to let Microsoft do what it needs to do to “protect you and the Services.”
To be fair, the counterfeit clause is just one adopted from Xbox so it may not reflect anything new at all.
But given the presumption that some form of CISA will pass after Congress returns next month, I wonder how these clauses will work under CISA.
A key point in the ProPublica/NYT piece on AT&T’s close cooperation with the NSA (and, though not stated explicitly, other agencies) on spying is that AT&T was the telecom that helped NSA spy on the UN.
It provided technical assistance in carrying out a secret court order permitting the wiretapping of all Internet communications at the United Nations headquarters, a customer of AT&T.
If you read the underlying document, it actually shows that NSA had a traditional FISA order requiring the cooperation (remember, “agents of foreign powers,” as diplomats are, are among the legal wiretap targets under FISA, no matter what we might think about NSA spying on UN in our own country) — meaning whatever telecom serviced the UN legally had to turn over the data. And a big part of AT&T’s cooperation, in addition to technically improving data quality, involved filtering the data to help NSA avoid overload.
BLARNEY began intermittent enablement of DNI traffic for TOPI assessment and feedback. This feedback is being used by the BLARNEY target development team to support an ongoing filtering and throttling of data volumes. While BLARNEY is authorized full-take access under the NSA FISA, collected data volumes would flood PINWALE allocations within hours without a robust filtering mechanism.
In other words, AT&T helped NSA, ironically, by helping it limit what data it took in. Arguably, that’s an analytical role (who builds the algorithms in the filter?), but it’s one that limits how much actually gets turned over to the government.
That doesn’t mean the cooperation was any less valued, nor does it mean it didn’t go beyond what AT&T was legally obliged to do under the FISA order. But it’s not evidence AT&T would wiretap a non-legal (private corporation) target as a favor for NSA. That evidence may exist, somewhere, but it’s not in this story, except insofar as it mentions Stellar Wind, where AT&T was doing such things.
To be fair, AT&T’s UN cooperation is actually emphasized in this story because it was a key data point in the worthwhile ProPublica piece explaining how they proved Fairview was AT&T.
In April 2012, an internal NSA newsletter boasted about a successful operation in which NSA spied on the United Nations headquarters in New York City with the help of its Fairview and Blarney programs. Blarney is a program that undertakes surveillance that is authorized by the Foreign Intelligence Surveillance Court.
FAIRVIEW and BLARNEY engineers collaborated to enable the delivery of 700Mbps of paired packet switched traffic (DNI) traffic from access to an OC192 ring serving the United Nations mission in New York … FAIRVIEW engineers and the partner worked to provide the correct mapping, and BLARNEY worked with the partner to correct data quality issues so the data could be handed off to BLARNEY engineers to enable processing of the DNI traffic.
We found historical records showing that AT&T was paid $1 million a year to operate the U.N.’s fiber optic network in 2011 and 2012. A spokesman for the U.N. secretary general confirmed that the organization “has a current contract with AT&T” to operate the fiber optic network at the U.N. headquarters in New York.
That is, the UN story is important largely because there are public records proving that AT&T was the provider in question, not because it’s the most egregious example of AT&T’s solicitous relationship with the nation’s spies.
Also in that story proving how they determined that Fairview was AT&T and that Stormbrew included Verizon was the slide above, which brags that the Comprehensive National Cybersecurity Initiative 100% subsidized Verizon’s Breckenridge site, a new cable landing carrying traffic from China.
It’s not entirely clear what that means — it might just refer to the SCIF, power supply, and servers needed to run the TURMOIL (that is, passive filtering) deployments the NSA wanted to use to track international traffic with China. But as ProPublica lays out, the NSA was involved the entire time Verizon was planning this cable landing. Another document on CNCI shows that in FY2010 — while spending significantly less than on AT&T’s Fairview — NSA was dumping over $100M into Stormbrew and five times as much money into “cyber” as into FISA (in spite of the fact that they admit they’re really doing all this cybering to catch attacks on the US, meaning it ostensibly has to be conducted under FISA, even though FISC had not yet and may never have approved a cyber certificate for upstream 702). And those numbers date to the year after the Breckenridge project came on line, at a time when Verizon was backing off an earlier, closer relationship with the Feds.
How much did Verizon really get for that cable landing, what did they provide in exchange, and given that this was purpose-built to focus on Chinese hacking 6 years ago, why is China still eating our lunch via hacking? And if taxpayers are already subsidizing Verizon 100% for capital investments, why are we still paying our cell phone bills?
Particularly given the clear focus on cyber at this cable landing, I recall the emphasis on Department of Commerce when discussing the government’s partnership with industry in PPD-20, covering authorizations for various cyber activities, including offensive cyberwar (note the warning I gave for how Americans would start to care about this Snowden disclosure once our rivals, like China, retaliate). That is, the government has Commerce use carrots and sticks to get cooperation from corporations, especially on cybersecurity.
None of this changes the fact that AT&T has long been all too happy to spy on its customers for the government. It just points to how little we know about these relationships, and how much quid pro quo there really is. We know from PRISM discussions that the providers could negotiate how they accomplished an order (as AT&T likely could with the order to wiretap the UN), and that’s one measure of “cooperation.” But there’s a whole lot else to this kind of cooperation.
Update: Credo released a statement in response to the story.
“As a telecom that can be compelled to participate in unconstitutional surveillance, we know how important it is to fight for our customers’ privacy and only hand over information related to private communications when required by law,” said CREDO Mobile Vice President Becky Bond. “It’s beyond disturbing though sadly not surprising what’s being reported about a secret government relationship with AT&T that NSA documents describe as ‘highly collaborative’ and a ‘partnership, not a contractual relationship.’”
“CREDO Mobile supports full repeal of the illegal surveillance state as the only way to protect Americans from illegal government spying,” Bond continued, “and we challenge AT&T to demonstrate concern for its customers’ constitutional rights by joining us in public support of repealing both the Patriot Act and FISA Amendments Act.”
During the July 1 Senate Judiciary Committee hearing on back doors, Deputy Attorney General Sally Yates claimed that the government doesn’t want to hold keys to encrypted communications itself. Rather, it wants corporations to retain the ability to access communications, so the government can obtain them with legal process. (After 1:43.)
We’re not going to ask the companies for any keys to the data. Instead, what we’re going to ask is that the companies have an ability to access it and then with lawful process we be able to get the information. That’s very different from what some other countries — other repressive regimes — from the way that they’re trying to get access to the information.
The claim was bizarre enough, especially as she went on to talk about other countries not having the same lawful process we have (as if that makes a difference to software code).
More importantly, that’s not true.
Remember what happened with Lavabit, when the FBI was in search of what is presumed to be Edward Snowden’s email. Lavabit owner Ladar Levison had a discussion with FBI about whether it was technically feasible to put a pen register on the targeted account. After which the FBI got a court order to do it. Levison tried to get the government to let him write a script that would provide them access to just the targeted account or, barring that, provide for some kind of audit to ensure the government wasn’t obtaining other customer data.
The unsealed documents describe a meeting on June 28th between the F.B.I. and Levison at Levison’s home in Dallas. There, according to the documents, Levison told the F.B.I. that he would not comply with the pen-register order and wanted to speak to an attorney. As the U.S. Attorney for the Eastern District of Virginia, Neil MacBride, described it, “It was unclear whether Mr. Levison would not comply with the order because it was technically not feasible or difficult, or because it was not consistent with his business practice in providing secure, encrypted e-mail service for his customers.” The meeting must have gone poorly for the F.B.I. because MacBride filed a motion to compel Lavabit to comply with the pen-register and trap-and-trace order that very same day.
Magistrate Judge Theresa Carroll Buchanan granted the motion, inserting in her own handwriting that Lavabit was subject to “the possibility of criminal contempt of Court” if it failed to comply. When Levison didn’t comply, the government issued a summons, “United States of America v. Ladar Levison,” ordering him to explain himself on July 16th. The newly unsealed documents reveal tense talks between Levison and the F.B.I. in July. Levison wanted additional assurances that any device installed in the Lavabit system would capture only narrowly targeted data, and no more. He refused to provide real-time access to Lavabit data; he refused to go to court unless the government paid for his travel; and he refused to work with the F.B.I.’s technology unless the government paid him for “developmental time and equipment.” He instead offered to write an intercept code for the account’s metadata—for thirty-five hundred dollars. He asked Judge Hilton whether there could be “some sort of external audit” to make sure that the government did not take additional data. (The government plan did not include any oversight to which Levison would have access, he said.)
Most important, he refused to turn over the S.S.L. encryption keys that scrambled the messages of Lavabit’s customers, and which prevent third parties from reading them even if they obtain the messages.
The discussions disintegrated because the FBI refused to let Levison do what Yates now says they want to do: ensure that providers can hand over the data tailored to meet a specific request. That’s when Levison tried to give the FBI his key in what the FBI claimed (even though the government has done the same in FOIA responses and criminal discovery) was type too small to read.
On August 1st, Lavabit’s counsel, Jesse Binnall, reiterated Levison’s proposal that the government engage Levison to extract the information from the account himself rather than force him to turn over the S.S.L. keys.
THE COURT: You want to do it in a way that the government has to trust you—
BINNALL: Yes, Your Honor.
THE COURT: —to come up with the right data.
BINNALL: That’s correct, Your Honor.
THE COURT: And you won’t trust the government. So why would the government trust you?
Ultimately, the court ordered Levison to turn over the encryption key within twenty-four hours. Had the government taken Levison up on his offer, he may have provided it with Snowden’s data. Instead, by demanding the keys that unlocked all of Lavabit, the government provoked Levison to make a last stand. According to the U.S. Attorney MacBride’s motion for sanctions,
At approximately 1:30 p.m. CDT on August 2, 2013, Mr. Levison gave the F.B.I. a printout of what he represented to be the encryption keys needed to operate the pen register. This printout, in what appears to be four-point type, consists of eleven pages of largely illegible characters. To make use of these keys, the F.B.I. would have to manually input all two thousand five hundred and sixty characters, and one incorrect keystroke in this laborious process would render the F.B.I. collection system incapable of collecting decrypted data.
The U.S. Attorneys’ office called Lavabit’s lawyer, who responded that Levison “thinks” he could have an electronic version of the keys produced by August 5th.
Levison came away from the debacle believing that the FBI didn’t understand what it was asking for when they asked for his keys.
One result of this newfound expertise, however, is that Levison believes there is a knowledge gap between the Department of Justice and law-enforcement agencies; the former did not grasp the implications of what the F.B.I. was asking for when it demanded his S.S.L. keys.
There’s a persistent rumor going around that Apple is in the secret FISA Court, fighting a government order to make its platform more surveillance-friendly — and they’re losing. This might explain Apple CEO Tim Cook’s somewhat sudden vehemence about privacy. I have not found any confirmation of the rumor.
Nicholas Weaver’s post describes how, because users need to access their iMessage account from multiple devices (think desktop, laptop, iPad, and phone), Apple technically could give the FBI a key.
In iMessage, each device has its own key, but it’s important that the sent messages also show up on all of Alice’s devices. The process of Alice requesting her own keys also acts as a way for Alice’s phone to discover that there are new devices associated with Alice, effectively enabling Alice to check that her keys are correct and nobody has compromised her iCloud account to surreptitiously add another device.
But there remains a critical flaw: there is no user interface for Alice to discover (and therefore independently confirm) Bob’s keys. Without this feature, there is no way for Alice to detect that an Apple keyserver gave her a different set of keys for Bob. Without such an interface, iMessage is “backdoor enabled” by design: the keyserver itself provides the backdoor.
So to tap Alice, it is straightforward to modify the keyserver to present an additional FBI key for Alice to everyone but Alice. Now the FBI (but not Apple) can decrypt all iMessages sent to Alice in the future.
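Weaver’s point can be illustrated with a toy model. This is a sketch under stated assumptions, not Apple’s actual protocol: the class and function names are hypothetical, and a placeholder string stands in for real public-key encryption. What it shows is the structural flaw: the sender encrypts a copy of the message to every key the keyserver lists for the recipient, and the recipient has no interface to verify that list.

```python
# Toy model of the keyserver backdoor Weaver describes. "Encryption" here is
# a stand-in string; the point is who receives a decryptable copy, not crypto.

class KeyServer:
    def __init__(self):
        self.devices = {}   # user -> {device_name: public_key}
        self.injected = {}  # user -> {label: extra key shown only to senders}

    def register(self, user, device, key):
        self.devices.setdefault(user, {})[device] = key

    def inject(self, user, label, key):
        # A compromised or compelled keyserver adds a key that senders see...
        self.injected.setdefault(user, {})[label] = key

    def keys_for_sender(self, user):
        # ...merged invisibly into the recipient's device list.
        return {**self.devices.get(user, {}), **self.injected.get(user, {})}

    def keys_for_self(self, user):
        # ...while the recipient querying her own account never sees it.
        return dict(self.devices.get(user, {}))


def send_message(server, recipient, plaintext):
    # One wrapped copy per listed key; the sender has no way to audit the list.
    return {key: f"enc[{key}]({plaintext})"
            for key in server.keys_for_sender(recipient).values()}


server = KeyServer()
server.register("alice", "phone", "alice-phone-key")
server.register("alice", "laptop", "alice-laptop-key")
server.inject("alice", "wiretap", "fbi-key")

ciphertexts = send_message(server, "alice", "hi")
assert "fbi-key" in ciphertexts  # the FBI key gets a decryptable copy
assert "fbi-key" not in server.keys_for_self("alice").values()  # Alice can't tell
```

The fix Weaver implies — a user interface for independently comparing key fingerprints out of band — would let Alice detect the extra key; without it, the keyserver itself is the backdoor.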
Admittedly, as heroic as Levison’s decision to shut down Lavabit rather than renege on a promise to his customers was, Apple has a lot more to lose here, strictly because of the scale involved. And in spite of the heated rhetoric, the FBI likely still trusts Apple more than it trusted Levison.
Still, it’s worth noting that Yates’ claim that the FBI doesn’t want keys to communications isn’t true — or at least wasn’t true before her tenure as DAG. Because a provider, Levison, insisted on providing his customers what he had promised, the FBI grew so distrustful of him that they did demand a key.
The Chamber of Commerce has a blog post pitching CISA today.
It’s mostly full of lies (see OTI’s @Robyn_Greene‘s timeline for an explication of those lies).
But given Sheldon Whitehouse’s admission the other day that the Chamber exercises pre-clearance vetoes over this bill, I’d like to consider what the Chamber gets out of CISA. It claims the bill, ” would help businesses achieve timely and actionable situational awareness to improve theirs and the nation’s detection, mitigation, and response capabilities against cyber threats.” At least according to the Chamber, this is about keeping businesses safe. Perhaps it pitches the bill in those terms because of its audience, other businesses. But given the gross asymmetry of the bill — where actual humans can be policed based on data turned over, but corporate people cannot be — I’m not so sure.
Particularly given increasing calls for effective cybersecurity legislation — something with actual teeth — at least for cars and critical infrastructure, this bill should be considered a corporatist effort to stave off more effective measures that would have a greater impact on cybersecurity.
That is borne out by the Chamber’s recent 5 reasons to support CISA post. It emphasizes two things that have nothing to do with efficacy: the voluntary nature of it, and the immunity, secrecy, and anti-trust provisions in the bill.
That is, the Chamber, which increasingly seems to be the biggest cheerleader for this bill, isn’t aiming at anything more than “situational awareness” to combat the scourge of hacking. But it wants that — the entire point of this bill — within a context that provides corporations with maximal flexibility while giving them protection they don’t have to do anything to earn.
CISA is about immunizing corporations to spy on their customers. That’s neither necessary nor the most effective means to combat hacking. Which ought to raise serious questions about the Chamber’s commitment to keeping America safe.
There are two things Cy Vance (writing with Paris’ Chief Prosecutor, the City of London Police Commissioner, and the chief prosecutor of Spain’s High Court, Javier Zaragoza) doesn’t mention in his op-ed calling for back doors in Apple and Google phones.
iPhone theft and bankster crime.
The former is a huge problem in NYC, with 8,465 iPhone thefts in 2013, which made up 18% of the grand larcenies in the city. The number came down 25% last year (and the crime started shifting to Samsung products), largely due to Apple’s implementation of a Kill Switch, but that still leaves 6,000 thefts a year — as compared to the 74 iPhones Vance says NYPD wasn’t able to access (he’s silent about how many investigations, besides the 3 he describes, encryption actually thwarted; Vance ignores default cloud storage completely in his op-ed). The numbers will come down still further now that Apple has made the Kill Switch (like encryption) a default setting on the iPhone 6. But there are still a lot of thefts, which can result not only in a phone being wiped and resold, but also in a stolen identity. Default encryption will protect against both kinds of crime. In other words, Vance simply ignores how encryption can help prevent a crime that has been rampant in NYC in recent years.
Bankster crime is an even bigger problem in NYC, with a number of the world’s most sophisticated Transnational Crime Organizations, doing trillions of dollars of damage, headquartered in the city. These TCOs are even rolling out their very own encrypted communication system, which Elizabeth Warren fears may eliminate the last means of holding them accountable for their crimes. But Vance — one of the prosecutors who should be cracking down on this crime — not only doesn’t mention their special encrypted communication system, he doesn’t mention their crimes at all.
There are other silences and blind spots in Vance’s op-ed, too. The example he starts with — a murder in Evanston, in none of the signatories’ jurisdictions — describes two phones that couldn’t be accessed. He remains silent about the other evidence available by other means, such as via the cloud. He also assumes the evidence will be in the smart phone, which may not be the case. And it’s notable that Vance focuses on a black murder victim, because racial disparities in policing, not encryption, are often a better explanation for why murders of black men remain unsolved 2 months later. Given NYPD’s own crummy record at investigating and solving the murders of black and Latino victims, you’d think Vance might worry more about having NYPD reassign its detectives accordingly than about stripping the privacy of hundreds of thousands.
Then Vance goes on to describe how much smart phone data they’re still getting.
In France, smartphone data was vital to the swift investigation of the Charlie Hebdo terrorist attacks in January, and the deadly attack on a gas facility at Saint-Quentin-Fallavier, near Lyon, in June. And on a daily basis, our agencies rely on evidence lawfully retrieved from smartphones to fight sex crimes, child abuse, cybercrime, robberies or homicides.
Again, Vance is silent about whether this data is coming off the phone itself, or off the cloud. But it is better proof that investigators are still getting the data (perhaps via the cloud storage he doesn’t want to talk about?), not that they’re being thwarted.
Like Jim Comey, Vance claims to want to have a discussion weighing the “marginal benefits of full-disk encryption and the need for local law enforcement to solve and prosecute crimes.” But his op-ed is so dishonest, so riven with obvious holes, it raises real questions about both his honesty and basic logic.