Over at Salon, I’ve got a piece pushing back against claims that threats made by hackers attributed — with little concrete evidence — to North Korea are an attack on our First Amendment rights. They’re not. They’re an attack on Sony’s property (or, to put it another way, Sony’s right to make a profit off its speech). And as Rayne has pointed out, Sony was unbelievably negligent in protecting its own property.
The decision to pull the film has been criticized as an attack on free speech, most notably by Aaron Sorkin, but also by other commentators. “Today the U.S. succumbed to an unprecedented attack on our most cherished, bedrock principle of free speech,” Sorkin said. And free speech is one of the things — the last thing — Sony addressed in its statement on the decision. “We stand by our filmmakers and their right to free expression and are extremely disappointed by this outcome.”
But the threat against the film, which the Department of Homeland Security says is not credible, was only directed at one means of distributing the film: via theater release. A number of people suggested Sony should respond to the threat via other means. Mitt Romney suggested Sony release the film online, for free. Democratic congressman Steve Israel suggested Sony release it directly to DVD. BoingBoing’s Xeni Jardin suggested a global torrent party.
The point is, there are many ways to release the film, most of which would not expose theatergoers and theaters — in the wake of an altered liability landscape after the 2012 mass killing in an Aurora, Colorado, movie theater — to any danger, no matter how remote. Most of those ways would result in far more people watching the film. Some of them might even result in a few North Koreans viewing it.
If the issue is airing the views in the film — and defying the threats of the hackers — such a release would accomplish the goal.
But there’s another issue that seems far more central to this hack than speech: property.
Even before Sony mentioned its filmmakers’ free speech rights, for example, it mentioned the assault on its property rights. “Those who attacked us stole our intellectual property, private emails, and sensitive and proprietary material.” And while free release of its movie would assert its right to free speech, it would result in further financial losses, on top of the other movies (such as “Annie” and “Fury”) released on piracy sites after the hack.
The attack on Sony’s property, even more than its speech, raises real questions about another detail that has gotten far too little attention in coverage of this hack. Sony Corp. gets hacked a lot: more than 50 breaches in 15 years, more than many of its rivals, including some fairly significant attacks in recent years that bear no resemblance to this one. Maybe that’s because it did things like store all its passwords in a file called “password.”
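A plaintext password file is the opposite of standard practice. Here is a minimal sketch, in Python, of how credentials are normally stored: a random per-password salt plus a slow key-derivation function, so that even a stolen credential file doesn’t yield usable passwords. (The function and parameter choices are illustrative only, not anything Sony actually used.)

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a storable hash from a password using PBKDF2-HMAC-SHA256."""
    salt = salt or os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest            # store both; never the password itself

def verify_password(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

# Even a laughably weak password is never stored in the clear.
salt, digest = hash_password("password")
print(verify_password("password", salt, digest))  # True
print(verify_password("guess", salt, digest))     # False
```

The point of the 200,000 iterations is to make offline guessing expensive; the point of the salt is to make precomputed rainbow tables useless. Neither helps if the file full of plaintext passwords is sitting next to the data it protects.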
The Administration is already twisting itself in knots trying to retroactively include “multinational movie studio” in its prior definition of critical infrastructure (which normally covers things like the electric grid and utilities) so it can make this a state issue. All the while assuming that its attribution of the hack to North Korea is more certain than its claim that Iraq was behind 9/11.
We’d do well to think a bit about how central negligently protected movie company property really is to national interests before this thing spirals out of control.
Ever try to follow an evolving story in which the cascade of trouble grew so big and moved so fast it was like trying to stay ahead of a pyroclastic flow?
That’s what it’s like keeping up with emerging reports about the massive cyber attack on Sony. (Granted, it’s nothing like the torture report, but Hollywood has a way of making the story spin harder when it’s about them.)
The second most ridiculous part of the Sony hack story is the way in which the entertainment industry has studiously avoided criticizing those most responsible for data security.
In late November, when the hacker(s) self-identified as “Guardians of Peace” made threats across Sony Pictures’ computer network before releasing digital film content, members of the entertainment industry were quick to revile pirates they believed were intent on stealing and distributing digital film content.
When reports emerged implicating North Korea as the alleged source of the hack, the industry backpedaled away from their outrage over piracy, mumbling instead about hackers.
The industry’s insiders shifted gears once again when it was revealed that Sony’s passwords were kept in a password-protected file, and the password to this file was “password.”
At this juncture you’d think Sony’s employees and contractors – whose Social Security numbers, addresses, emails, and other sensitive information had been exposed – would demand a corporate-wide purge of the IT department and Sony executives.
You’d think that anyone affiliated with Sony, whose past and future business dealings might also be exposed, would similarly demand expulsion of the incompetents who couldn’t find OPSEC if it was tattooed on their asses. Or perhaps investors and analysts would descend upon the corporation with pitchforks and torches, demanding heads on pikes because of teh stoopid.
Instead the industry has been tsk-tsking about the massive breach, all the while rummaging through the equivalent of Sony Pictures’ wide-open lingerie drawer, looking for industry intelligence. Reporting by entertainment industry news outlets has focused almost solely on the content of emails between executives.
But the first most ridiculous part of this massive assault on Sony is that Sony has been hacked more than 50 times in the last 15 years.
Yes. That’s More Than Fifty.
Inside Fifteen Years.
Recently, computer security firm Symantec reported discovery of another intelligence-gathering malware, dubbing it “Regin.”
What’s particularly interesting about this malware is its targets:
Please do read Symantec’s blog post and its technical paper on Regin to understand how it works as well as its targets. Many news outlets either do not understand malware and cybersecurity, or they get facts wrong whenever major malware attacks are reported. Symantec’s revelation about Regin is no different in this respect.
Independent.ie offers a particularly egregious example of distorting Symantec’s report, claiming “Ireland is one of the countries worst hit globally by a dangerous new computer virus that spies on governments and companies, according to a leading technology firm.”
If by “worst hit,” they mean among the top four countries targeted by this malware? Sure. But only 9% of the infections affected Irish-based computers, versus 28% of infections aimed at Russian machines, and 24% affecting Saudi machines. The Independent.ie’s piece reads like clickbait hyperbole, or fearmongering, take your pick.
What wasn’t addressed by the Independent.ie and numerous other outlets, including those covering the tech sector, were some fundamental questions:
The Guardian came closest to examining these issues, having interviewed researchers at computer security firm F-Secure about the origins of the malware. As of 24-NOV-2014, the firm’s Mikko Hypponen speculated that the US, UK, and/or Israel were behind Regin’s development and deployment.
In the video embedded above, however, Hypponen firmly says the UK’s intelligence agency GCHQ is behind Regin, in particular the malware’s invasion of a Belgian telecom network (see video at 07:20).
Steven Aftergood catches Charles McCullough, the Intelligence Community Inspector General who has resisted exercising oversight over spying, doing his job.
“A civilian employee with the Army Intelligence and Security Command made an IC IG Hotline complaint alleging an interagency data repository, believed to be comprised of numerous intelligence and non-intelligence sources, improperly included U.S. person data,” the IC IG wrote. “The complainant also reported he conducted potentially improper searches of the data repository to verify the presence of U.S. persons data. We are researching this claim.”
Given prior reports about ICREACH — which purportedly focuses on foreign-collected data but would therefore include US person data collected overseas – this is not that surprising. (I don’t think this is ICREACH itself, however, because ICREACH is described as a search tool, not a repository.)
But I find it particularly interesting that this complaint comes from someone at INSCOM, the Army intelligence outfit where Keith Alexander tried to ingest US person data in 2001, only to have Mikey Hayden refuse (!).
The heartburn first flared up not long after the 2001 terrorist attacks. Alexander was the general in charge of the Army’s Intelligence and Security Command (INSCOM) at Fort Belvoir, Virginia. He began insisting that the NSA give him raw, unanalyzed data about suspected terrorists from the agency’s massive digital cache, according to three former intelligence officials. Alexander had been building advanced data-mining software and analytic tools, and now he wanted to run them against the NSA’s intelligence caches to try to find terrorists who were in the United States or planning attacks on the homeland.
By law, the NSA had to scrub intercepted communications of most references to U.S. citizens before those communications can be shared with other agencies. But Alexander wanted the NSA “to bend the pipe towards him,” says one of the former officials, so that he could siphon off metadata, the digital records of phone calls and email traffic that can be used to map out a terrorist organization based on its members’ communications patterns.
“Keith wanted his hands on the raw data. And he bridled at the fact that NSA didn’t want to release the information until it was properly reviewed and in a report,” says a former national security official. “He felt that from a tactical point of view, that was often too late to be useful.”
Hayden thought Alexander was out of bounds. INSCOM was supposed to provide battlefield intelligence for troops and special operations forces overseas, not use raw intelligence to find terrorists within U.S. borders. But Alexander had a more expansive view of what military intelligence agencies could do under the law.
“He said at one point that a lot of things aren’t clearly legal, but that doesn’t make them illegal,” says a former military intelligence officer who served under Alexander at INSCOM.
In November 2001, the general in charge of all Army intelligence had informed his personnel, including Alexander, that the military had broad authority to collect and share information about Americans, so long as they were “reasonably believed to be engaged” in terrorist activities, the general wrote in a widely distributed memo.
Indeed, given the timing (IC IG’s report describes this as happening in the fourth quarter of calendar year 2013, so in the months after this Shane Harris report), it’s possible this report is what led the tipster to check whether US person data was available in repositories available to INSCOM.
While INSCOM focuses on battlefield intelligence, it also does cybersecurity and force protection, the kind of thing that has, in the past, targeted Americans (even Americans peddling porn!). So while this might just reflect oversharing, it also might reflect a return to the mentality of Keith Alexander.
There are multiple reports that President Obama is considering nominating Jeh Johnson to head DOD.
I get the attraction. Obama and Johnson get along well. Johnson only recently left DOD, so he knows it — and the legal loopholes it exploits — well. And in Johnson, Obama would have someone who would gloss his warmaking as something noble.
I even think Obama might welcome the way such a nomination would heighten the confrontation with the GOP on immigration.
Still, Johnson has served as head of DHS for less than a year. His tenure is only now marking a transition away from a period during which DHS had a revolving door spinning so wildly that it could not begin to serve its alleged mission.
An exodus of top-level officials from the Department of Homeland Security is undercutting the agency’s ability to stay ahead of a range of emerging threats, including potential terrorist strikes and cyberattacks, according to interviews with current and former officials.
Over the past four years, employees have left DHS at a rate nearly twice as fast as in the federal government overall, and the trend is accelerating, according to a review of a federal database.
The departures are a result of what employees widely describe as a dysfunctional work environment, abysmal morale, and the lure of private security companies paying top dollar that have proliferated in Washington since the Sept. 11, 2001, attacks.
And all that’s on top of DHS’s almost impossible mandate, which is both too big and too poorly defined.
Look, I’m sure Johnson’s a nice guy and maybe a great manager (he hasn’t been in place long enough for us to know).
But if DHS is a necessary agency, if its domestic spying and immigration and cybersecurity and disaster recovery missions are vital to this nation, if it is going to survive as a many-headed monster, then it should have the person Obama thinks is his best Agency head leading it. If that person is Johnson — as Obama’s consideration of him to lead DOD suggests — then moving him would seem to be a concession that DHS, and its obvious failures, really isn’t all that important after all.
If Obama moves Johnson from DHS to DOD, he should, at the same time, break DHS back up into more manageable agencies, declare the whole experiment an expensive failure, and eliminate the word “Homeland” from our vocabularies. Because it is not working, and if there’s no urgency to make it work, then we should break it up into parts that can function competently again.
Mieke Eoyang, the Director of Third Way’s National Security Program, has what Ben Wittes bills as a “disruptive” idea: to make US law the exclusive means to conduct all surveillance involving US companies.
But reforming these programs doesn’t address another range of problems—those that relate to allegations of overseas collection from US companies without their cooperation.
Beyond 215 and FAA, media reports have suggested that there have been collection programs that occur outside of the companies’ knowledge. American technology companies have been outraged about media stories of US government intrusions onto their networks overseas, and the spoofing of their web pages or products, all unbeknownst to the companies. These stories suggest that the government is creating and sneaking through a back door to take the data. As one tech employee said to me, “the back door makes a mockery of the front door.”
As a result of these allegations, companies are moving to encrypt their data against their own government; they are limiting their cooperation with NSA; and they are pushing for reform. Negative international reactions to media reports of certain kinds of intelligence collection abroad have resulted in a backlash against American technology companies, spurring data localization requirements, rejection or cancellation of American contracts, and raising the specter of major losses in the cloud computing industry. These allegations could dim one of the few bright spots in the American economic recovery: tech.
How about making the FAA the exclusive means for conducting electronic surveillance when the information being collected is in the custody of an American company? This could clarify that the executive branch could not play authority shell-games and claim that Executive Order 12333 allows it to obtain information on overseas non-US person targets that is in the custody of American companies, unbeknownst to those companies.
As a policy matter, it seems to me that if the information to be acquired is in the custody of an American company, the intelligence community should ask for it, rather than take it without asking. American companies should be entitled to a higher degree of forthrightness from their government than foreign companies, even when they are acting overseas.
Now, I have nothing against this proposal. It seems necessary but wholly inadequate to restoring trust between the government and (some) Internet companies. Indeed, it represents what should have been the practice in any case.
Let me first take a detour and mention a few difficulties with this. First, while I suspect this might be workable for content collection, remember that the government was not just collecting content from Google and Yahoo overseas — it was also using their software to hack people. NSA is going to still want the authority to hack people using weaknesses in such software, where they exist (and other software companies probably still are amenable to sharing those weaknesses). That points to the necessity to start talking about a legal regime for hacking as much as anything else — one that parallels what is going on with the FBI domestically.
Also, this idea would not cover the metadata collection from telecoms which are domestically covered by Section 215, which will surely increasingly involve cloud data that more closely parallels the data provided by FAA providers but that would be treated as EO 12333 overseas (because thus far metadata is still treated under the Third Party doctrine here). This extends to the Google and Yahoo metadata taken off switches overseas. So, such a solution would be either limited or (if and when courts domestically embrace a mosaic theory approach to data, including for national security applications) temporary, because some of the most revealing data is being handed over willingly by telecoms overseas.
Over at Vice, I have a piece reviewing DOJ’s explanation for why they turned off some alleged Asian mobsters’ DSL so they could then go in as fake DSL repairmen and collect evidence.
The whole thing has a Keystone Kops character, especially since the DSL contractor they had roped into working with them screwed up turning off the DSL, which is why they now claim he was on a “private frolic” when he collected information on his own (that is a technical legal term meaning “freelancing,” but one doing far more work here than the evidence allows, in my opinion).
My favorite part, though, is how DOJ claims that turning off someone’s DSL would not create any kind of urgency which would eliminate the notion of consent, because after all they could have used the shitty hotel WiFi.
Perhaps the most disturbing claim, though, is that we all have to be satisfied with crummy hotel Wi-Fi. To dismiss the argument that by turning off the villas’ DSL, FBI had created an urgent need that obviated any kind of consent when the villa residents let in the FBI agents pretending to be DSL repairmen, the government claims that there is no legitimate need to seek better internet access than hotel Wi-Fi or personal cell phone tethers: “Defendants do not identify a single legitimate service or application that could not be adequately supported through the hotel’s WI-FI system, their personal hotspots, or personal cellphones, nor could they.”
The FBI is now claiming, the experience of travelers the world over notwithstanding, that nothing legal could require better Internet access than a hotel’s slow Wi-Fi connection. (Perhaps the Wi-Fi in high-roller villas is better than it is for average travelers, but DOJ’s brief doesn’t make that case by describing the internet speeds Caesars Palace makes available to privileged guests.) Moreover, the government admits that—as many travelers reliant on hotel Wi-Fi can attest—the Wi-Fi just wasn’t all that fast. “The DSL service was faster,” the brief reads.
I mean, I’m not a Malaysian gangster or anything, but I often find myself trying to do things in hotel rooms where neither the WiFi nor my cell phone’s tether provides remotely adequate speed. You know — simple things like posting on a blog. Apparently that’s illegitimate now.
And yes, I have called hotel technicians to help me get the hotel WiFi working and let them right into my room.
Even as I was working on that piece, Kaspersky Lab came out with a warning that hackers (possibly working out of South Korea) have been targeting businessmen through hotel WiFis for 7 years.
Business executives visiting luxury hotels in Asia have been infected with malware delivered over public Wi-Fi networks, Russian security firm Kaspersky Lab has discovered.
The so-called ‘Darkhotel’ hackers managed to tweak their code to ensure that only machines belonging to specific targets were infected, not all visitors’ PCs, and may have included state-sponsored hacking.
They also seemed to have advance knowledge of their victims’ whereabouts and which hotels they would be visiting, Kaspersky said.
CEOs, senior vice presidents, sales and marketing directors and top research and development staff were amongst those on the attackers’ hit list, though no specific names have been revealed.
As soon as they logged onto the hotel Wi-Fi, targets would be greeted with a pop-up asking them to download updates to popular software, such as GoogleToolbar, Adobe Flash and Windows Messenger. But giving permission to the download would only lead to infection and subsequent theft of data from their devices.
You think alleged Asian organized crime members might know that hotel wifi is totally insecure (even setting aside China’s habit of stealing it this way)? You think they may have heard of their peers getting hacked in luxury hotels?
Maybe that’s why they ordered up so many DSL lines.
In any case, DOJ’s argument that there’s no legitimate need for wired Internet access just went out the window.
As I laid out when he gave his speech at Brookings, Jim Comey’s public explanation for needing back doors to Apple and Android phones doesn’t hold up. He conflated stored communication with communication in transit, ignored the risk of a back door (which he called a front door), and the law enforcement successes he presented, across the board, do not support his claim to need a back door.
So yesterday Comey and others had a classified briefing, where no one would be able to shred his flawed case.
FBI and Justice Department officials met with House staffers this week for a classified briefing on how encryption is hurting police investigations, according to staffers familiar with the meeting.
The briefing included Democratic and Republican aides for the House Judiciary and Intelligence Committees, the staffers said. The meeting was held in a classified room, and aides are forbidden from revealing what was discussed.
Comey called for Congress to revise the law to create a “level playing field” so that Google, Apple, and Facebook have the same obligation as AT&T and Verizon to help police.
National Journal listed out those companies, by the way — Facebook, for example, did not appear in Comey’s Brookings speech, where he used the “level playing field” comment.
I was puzzled by Comey’s inclusion of Facebook here until I saw this news.
To make their experience more consistent with our goals of accessibility and security, we have begun an experiment which makes Facebook available directly over Tor network at the following URL:
[ NOTE: link will only work in Tor-enabled browsers ]
Facebook Onion Address
Facebook’s onion address provides a way to access Facebook through Tor without losing the cryptographic protections provided by the Tor cloud.
The idea is that the Facebook onion address connects you to Facebook’s Core WWW Infrastructure - check the URL again, you’ll see what we did there – and it reflects one benefit of accessing Facebook this way: that it provides end-to-end communication, from your browser directly into a Facebook datacentre.
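What makes an onion address technically interesting is that, in the Tor design of the time, the 16-character name is derived from the service’s public key, so the name itself authenticates the endpoint without any certificate authority. A hedged sketch of checking that a hostname even has the right v2 onion shape (the Tor client, not this check, does the actual cryptographic verification):

```python
import re

# A v2 onion hostname is 16 base32 characters (a-z, 2-7) plus the .onion
# suffix; those characters encode the first 80 bits of the SHA-1 hash of
# the hidden service's public key.
V2_ONION = re.compile(r"^[a-z2-7]{16}\.onion$")

def looks_like_v2_onion(hostname):
    """Shape check only; key-based verification happens inside Tor."""
    return bool(V2_ONION.match(hostname.lower()))

print(looks_like_v2_onion("facebookcorewwwi.onion"))  # True
print(looks_like_v2_onion("facebook.com"))            # False
```

Facebook’s “facebookcorewwwi” is a vanity address: the company generated keys until the resulting hash spelled something memorable, which is how the URL can encode “Core WWW Infrastructure” and still be a valid key-derived name.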
All that got me thinking about what Comey said in the classified briefing — about the real reason he wants to make us all less secure.
And I can’t help but wonder whether it’s metadata.
The government aspires to get universal potential coverage of telephony (at least) metadata under USA Freedom Act, with the ability to force cooperation. But I’m not sure that Apple, especially, would be able to provide iMessage metadata, meaning iPhone users can text without leaving metadata available to either AT&T (because iMessage bypasses the telecom network) or Apple itself (because Apple no longer keeps a guaranteed remote copy).
And without metadata, FBI and NSA would be unable to demonstrate the need to do a wiretap of such content.
Ah well, once again I reflect on what a pity it is that FBI didn’t investigate the theft of data from these same companies, providing them a very good reason to lock it all up from sophisticated online criminals like GCHQ.
As you’ve likely read, NSA’s Chief Technology Officer has so little to keep him busy he’s also planning on working 20 hours a week for Keith Alexander’s new boondoggle.
Under the arrangement, which was confirmed by Alexander and current intelligence officials, NSA’s Chief Technical Officer, Patrick Dowd, is allowed to work up to 20 hours a week at IronNet Cybersecurity Inc, the private firm led by Alexander, a retired Army general and his former boss.
The arrangement was approved by top NSA managers, current and former officials said. It does not appear to break any laws and it could not be determined whether Dowd has actually begun working for Alexander, who retired from the NSA in March.
Dowd is the guy with whom Alexander filed 7 patents for work developed at NSA.
During his time at the NSA, Alexander said he filed seven patents, four of which are still pending, that relate to an “end-to-end cybersecurity solution.” Alexander said his co-inventor on the patents was Patrick Dowd, the chief technical officer and chief architect of the NSA. Alexander said the patented solution, which he wouldn’t describe in detail given the sensitive nature of the work, involved “a line of thought about how you’d systematically do cybersecurity in a network.”
That sounds hard to distinguish from Alexander’s new venture. But, he insisted, the behavior modeling and other key characteristics represent a fundamentally new approach that will “jump” ahead of the technology that’s now being used in government and in the private sector.
Presumably, bringing Dowd on board will both make Alexander look more technologically credible and let Dowd profit off all the new patents Alexander is filing for, which he claims don’t derive from work taxpayers paid for.
Capitalism, baby! Privatizing the profits paid for by the public!
All that said, I’m wondering whether this is about something else — and not just greed.
Yesterday, as part of a bankster cybersecurity shindig, one of Alexander’s big named clients, SIFMA, rolled out its “Cybersecurity Regulatory Guidance.” It’s about what you’d expect from a bankster organization: demands that the government give what it needs, use a uniform light hand while regulating, show some flexibility in case that light hand becomes onerous, and never ever hold the financial industry accountable for its own shortcomings.
Bullet point 2 (Bullet point 1 basically says the US government has a big role to play here which may be true but also sounds like a demand for a handout) lays out the kind of public-private partnership SIFMA expects.
Principle 2: Recognize the Value of Public–Private Collaboration in the Development of Agency Guidance
Each party brings knowledge and influence that is required to be successful, and each has a role in making protections effective. Firms can assist regulators in making agency guidance better and more effective as it is in everyone’s best interests to protect the financial industry and the customers it serves.
The NIST Cybersecurity Framework is a useful model of public-private cooperation that should guide the development of agency guidance. NIST has done a tremendous job reaching out to stakeholders and strengthening collaboration with financial critical infrastructure. It is through such collaboration that voluntary standards for cybersecurity can be developed. NIST has raised awareness about the standards, encouraged its use, assisted the financial sector in refining its application to financial critical infrastructure components, and incorporated feedback from members of the financial sector.
In this vein, we suggest that an agency working group be established that can facilitate coordination across the agencies, including independent agencies and SROs, and receive industry feedback on suggested approaches to cybersecurity. SIFMA views the improvement of cybersecurity regulatory guidance and industry improvement efforts as an ongoing process.
Effective collaboration between the private and public sectors is critical today and in the future as the threat and the sector’s capabilities continue to evolve.
Again, this public-private partnership may be necessary in the case of cybersecurity for critical infrastructure, but banks have a history of treating such partnership as lucrative handouts (and the principle document’s concern about privacy has more to do with hiding their own deeds, and only secondarily discusses the trust of their customers). Moreover, experience suggests that when “firms assist regulators in making agency guidance better,” it usually has to do with socializing risk.
In any case, given that the banks are, once again, demanding socialism to protect themselves, is it any wonder NSA’s top technology officer is spending half his days at a boondoggle serving these banks?
And given the last decade of impunity the banks have enjoyed, what better place to roll out an exotic counter-attacking cybersecurity approach (except for the risk that it’ll bring down the fragile house of finance cards by mistake)?
Alexander said that his new approach is different than anything that’s been done before because it uses “behavioral models” to help predict what a hacker is likely to do. Rather than relying on analysis of malicious software to try to catch a hacker in the act, Alexander aims to spot them early on in their plots.
One of the most recent stories on the JP Morgan hack (which actually appears to be the kind of Treasuremapping NSA does of other countries’ critical infrastructure all the time) made it clear the banksters are already doing the kind of data sharing that Keith Alexander wailed he needed immunity to encourage.
The F.B.I., after being contacted by JPMorgan, took the I.P. addresses the hackers were believed to have used to breach JPMorgan’s system to other financial institutions, including Deutsche Bank and Bank of America, these people said. The purpose: to see whether the same intruders had tried to hack into their systems as well. The banks are also sharing information among themselves.
So clearly SIFMA’s call for sharing represents something more, probably akin to the kind of socialism it benefits from in its members’ core business models.
In the intelligence world, they use the term “sheep dip” to describe how they stick people subject to one authority — such as the SEALs who killed Osama bin Laden — under a more convenient authority — such as CIA’s covert status. Maybe that’s what’s really going on here: sheep dipping NSA’s top tech person into the private sector where his work will evade even the scant oversight given to NSA.
If SIFMA’s looking for the kind of socialistic sharing akin to free money, then why should we be surprised the boondoggle at the center of it plans to share actual tech personnel?
Update: Reuters reports the deal’s off. Apparently even Congress (beyond Alan Grayson, who has long had questions about Alexander’s boondoggle) had a problem with this.
I said somewhere that those wailing about Apple’s new default crypto in its handsets are either lying or are confused about the difference between a phone service and a storage device.
For the moment, I’m going to put FBI Director Jim Comey in the latter category. I’m going to do so, first, because at his Brookings talk he corrected his false statement — which I had pointed out — on 60 Minutes (what he calls insufficiently lawyered) that the FBI cannot get content without an order. But while Comey admitted that FBI can read content it has collected incidentally, he made another misleading statement. He said FBI does so during “investigations.” It also does so during “assessments,” which don’t require anywhere near the same standard of evidence or oversight.
I’m also going to assume Comey is having service/device confusion because that kind of confusion permeated his presentation more generally.
There was the confusion exhibited when he tried to suggest a “back door” into a device wasn’t one if FBI simply called it a “front door.”
We aren’t seeking a back-door approach. We want to use the front door, with clarity and transparency, and with clear guidance provided by law. We are completely comfortable with court orders and legal process—front doors that provide the evidence and information we need to investigate crime and prevent terrorist attacks.
And more specifically, when Comey called for rewriting CALEA, he called for something that would affect only a tiny bit of what Apple had made unavailable by encrypting its phones.
Current law governing the interception of communications requires telecommunication carriers and broadband providers to build interception capabilities into their networks for court-ordered surveillance. But that law, the Communications Assistance for Law Enforcement Act, or CALEA, was enacted 20 years ago—a lifetime in the Internet age. And it doesn’t cover new means of communication. Thousands of companies provide some form of communication service, and most are not required by statute to provide lawful intercept capabilities to law enforcement. [my emphasis]
As I have noted, the main thing that will become unavailable under Apple’s new operating system is iMessage chats, if the users are not using default iCloud backups (which would otherwise keep a copy of the chats).
But the rest of it — all the data that will be stored only on an iPhone if people opt out of Apple’s default iCloud backups — will be unaffected if what Comey is planning to do is require intercept ability for every message sent.
Now consider the five examples Comey uses to claim FBI needs this. I’ll return to these later, but in almost all of them, Comey seems to be overselling his case.
First, there’s the case of two phones with content on them.
In Louisiana, a known sex offender posed as a teenage girl to entice a 12-year-old boy to sneak out of his house to meet the supposed young girl. This predator, posing as a taxi driver, murdered the young boy, and tried to alter and delete evidence on both his and the victim’s cell phones to cover up his crime. Both phones were instrumental in showing that the suspect enticed this child into his taxi. He was sentenced to death in April of this year.
At first glance this sounds like a case where the phones were needed. But assuming this is the case in question, that appears wrong. The culprit, Brian Horn, was IDed by multiple witnesses as being in the neighborhood, and evidence led to his cab. There was DNA evidence. And Horn and his victim had exchanged texts. Presumably, records of those texts, and quite possibly the actual content, were available at the provider.
Then there’s another texting case.
In Los Angeles, police investigated the death of a 2-year-old girl from blunt force trauma to her head. There were no witnesses. Text messages from the parents’ cell phones to one another, and to their family members, proved the mother caused this young girl’s death, and that the father knew what was happening and failed to stop it.
Text messages also proved that the defendants failed to seek medical attention for hours while their daughter convulsed in her crib. They even went so far as to paint her tiny body with blue paint—to cover her bruises—before calling 911. Confronted with this evidence, both parents pled guilty.
This seems to be another case where the texts were probably available in other places, especially given how many people received them.
Then there’s another texting story — this is the only one where Comey mentioned warrants, and therefore the only real parallel to what he’s pitching.
In Kansas City, the DEA investigated a drug trafficking organization tied to heroin distribution, homicides, and robberies. The DEA obtained search warrants for several phones used by the group. Text messages found on the phones outlined the group’s distribution chain and tied the group to a supply of lethal heroin that had caused 12 overdoses—and five deaths—including several high school students.
Again, these texts were likely available with the providers.
Then Comey lists a case where the culprit was first found with a traffic camera.
In Sacramento, a young couple and their four dogs were walking down the street at night when a car ran a red light and struck them—killing their four dogs, severing the young man’s leg, and leaving the young woman in critical condition. The driver left the scene, and the young man died days later.
Using “red light cameras” near the scene of the accident, the California Highway Patrol identified and arrested a suspect and seized his smartphone. GPS data on his phone placed the suspect at the scene of the accident, and revealed that he had fled California shortly thereafter. He was convicted of second-degree murder and is serving a sentence of 25 years to life.
That case relied on GPS data, which would surely have been available from the provider. So: a traffic camera, then GPS. Seriously, FBI, do you think this makes your case?
Perhaps Comey’s only convincing example is an exoneration based on a video — though on Apple’s default settings that too would have been available elsewhere.
The evidence we find also helps exonerate innocent people. In Kansas, data from a cell phone was used to prove the innocence of several teens accused of rape. Without access to this phone, or the ability to recover a deleted video, several innocent young men could have been wrongly convicted.
Again, given Apple’s default settings, this video would be available on iCloud. But if it was only available on the phone, and it was the only thing that exonerated the men, then it would count.
Update: I’m not sure, but this sounds like the Daisy Coleman case, which took place outside Kansas City, MO, and did involve a phone video that (at least as far as I know) was never recovered. The man she accused of raping her pleaded guilty to misdemeanor child endangerment for dumping her, unconscious, in freezing weather outside her house.
I will keep checking into these, but none of them definitively makes Comey’s case. All of this evidence would normally, given default settings, be available from providers. Much of it would be available on the phones of people besides the culprit. In the one easily identifiable case, there was a ton of other evidence. And in two of these cases, the evidence was important in getting a guilty plea, not in solving the crime.
But underlying it all is the key point: Phones are storage devices, but they are primarily communication devices, and even as storage devices the default is that they’re just a localized copy of data also stored elsewhere. That means it is very rare that evidence is only available on a phone. Which means it is rare that such evidence will only be available in storage and not via intercept or remote storage.