There are multiple reports that President Obama is considering nominating Jeh Johnson to head DOD.
I get the attraction. Obama and Johnson get along well. Johnson only recently left DOD, so he knows it — and the legal loopholes it exploits — well. And in Johnson, Obama would have someone who would gloss his warmaking as something noble.
I even think Obama might welcome the way such a nomination would heighten the confrontation with the GOP on immigration.
Still, Johnson has served as head of DHS for less than a year. His tenure is only now marking a transition out of a period during which DHS's revolving door spun so wildly that the agency could not begin to serve its alleged mission.
An exodus of top-level officials from the Department of Homeland Security is undercutting the agency’s ability to stay ahead of a range of emerging threats, including potential terrorist strikes and cyberattacks, according to interviews with current and former officials.
Over the past four years, employees have left DHS at a rate nearly twice as fast as in the federal government overall, and the trend is accelerating, according to a review of a federal database.
The departures are a result of what employees widely describe as a dysfunctional work environment, abysmal morale, and the lure of private security companies paying top dollar that have proliferated in Washington since the Sept. 11, 2001, attacks.
And all that's on top of DHS's almost impossible mandate, which is at once too big and too poorly defined.
Look, I’m sure Johnson’s a nice guy and maybe a great manager (he hasn’t been in place long enough for us to know).
But if DHS is a necessary agency, if its domestic spying and immigration and cybersecurity and disaster recovery missions are vital to this nation, if it is going to survive as a many-headed monster, then it should have the person Obama thinks is his best Agency head leading it. If that person is Johnson — as Obama’s consideration of him to lead DOD suggests — then moving him would seem to be a concession that DHS, and its obvious failures, really isn’t all that important after all.
If Obama moves Johnson from DHS to DOD, he should, at the same time, break DHS back up into more manageable agencies, declare the whole experiment an expensive failure, and eliminate the word "Homeland" from our vocabularies. Because it is not working, and if there's no urgency to make it work, then we should break it up into parts that can function competently again.
Mieke Eoyang, the Director of Third Way’s National Security Program, has what Ben Wittes bills as a “disruptive” idea: to make US law the exclusive means to conduct all surveillance involving US companies.
But reforming these programs doesn’t address another range of problems—those that relate to allegations of overseas collection from US companies without their cooperation.
Beyond 215 and FAA, media reports have suggested that there have been collection programs that occur outside of the companies’ knowledge. American technology companies have been outraged about media stories of US government intrusions onto their networks overseas, and the spoofing of their web pages or products, all unbeknownst to the companies. These stories suggest that the government is creating and sneaking through a back door to take the data. As one tech employee said to me, “the back door makes a mockery of the front door.”
As a result of these allegations, companies are moving to encrypt their data against their own government; they are limiting their cooperation with NSA; and they are pushing for reform. Negative international reactions to media reports of certain kinds of intelligence collection abroad have resulted in a backlash against American technology companies, spurring data localization requirements, rejection or cancellation of American contracts, and raising the specter of major losses in the cloud computing industry. These allegations could dim one of the few bright spots in the American economic recovery: tech.
How about making the FAA the exclusive means for conducting electronic surveillance when the information being collected is in the custody of an American company? This could clarify that the executive branch could not play authority shell-games and claim that Executive Order 12333 allows it to obtain information on overseas non-US person targets that is in the custody of American companies, unbeknownst to those companies.
As a policy matter, it seems to me that if the information to be acquired is in the custody of an American company, the intelligence community should ask for it, rather than take it without asking. American companies should be entitled to a higher degree of forthrightness from their government than foreign companies, even when they are acting overseas.
Now, I have nothing against this proposal. It seems necessary but wholly inadequate to restore trust between the government and (some) Internet companies. Indeed, it represents what should have been the practice all along.
Let me first take a detour and mention a few difficulties with this. First, while I suspect this might be workable for content collection, remember that the government was not just collecting content from Google and Yahoo overseas — it was also using their software to hack people. NSA is still going to want the authority to hack people using weaknesses in such software, such as they exist (and other software companies probably remain amenable to sharing those weaknesses). That points to the necessity of starting to talk about a legal regime for hacking as much as anything else — one that parallels what is going on with the FBI domestically.
Also, this idea would not cover the metadata collection from telecoms, which is covered domestically by Section 215 but which will increasingly involve cloud data — data that more closely parallels what FAA providers hand over but that would be treated under EO 12333 overseas (because, thus far, metadata here is still treated under the Third Party doctrine). This extends to the Google and Yahoo metadata taken off switches overseas. So such a solution would be either limited or (if and when courts domestically embrace a mosaic theory approach to data, including for national security applications) temporary, because some of the most revealing data is being handed over willingly by telecoms overseas.
Over at Vice, I have a piece reviewing DOJ's explanation for why it turned off some alleged Asian mobsters' DSL so agents could then go in as fake DSL repairmen and collect evidence.
The whole thing has a Keystone Kops character, especially since the DSL contractor they had roped into working with them screwed up turning off the DSL lines, which is why they now claim he was on a "private frolic" when he collected information on his own ("frolic" is a technical legal term meaning roughly "freelancing," but here it is doing far more work than the evidence allows, in my opinion).
My favorite part, though, is how DOJ claims that turning off someone's DSL would not create the kind of urgency that would vitiate consent, because after all the suspects could have used the shitty hotel WiFi.
Perhaps the most disturbing claim, though, is that we all have to be satisfied with crummy hotel Wi-Fi. To dismiss the argument that by turning off the villas’ DSL, FBI had created an urgent need that obviated any kind of consent when the villa residents let in the FBI agents pretending to be DSL repairmen, the government claims that there is no legitimate need to seek better internet access than hotel Wi-Fi or personal cell phone tethers: “Defendants do not identify a single legitimate service or application that could not be adequately supported through the hotel’s WI-FI system, their personal hotspots, or personal cellphones, nor could they.”
The FBI is now claiming, the experience of travelers the world over notwithstanding, that nothing legal could require better Internet access than a hotel’s slow Wi-Fi connection. (Perhaps the Wi-Fi in high-roller villas is better than it is for average travelers, but DOJ’s brief doesn’t make that case by describing the internet speeds Caesars Palace makes available to privileged guests.) Moreover, the government admits that—as many travelers reliant on hotel Wi-Fi can attest—the Wi-Fi just wasn’t all that fast. “The DSL service was faster,” the brief reads.
I mean, I’m not a Malaysian gangster or anything, but I often find myself trying to do things in hotel rooms where neither the WiFi nor my cell phone’s tether provides remotely adequate speed. You know — simple things like posting on a blog. Apparently that’s illegitimate now.
And yes, I have called hotel technicians to help me get the hotel WiFi working and let them right into my room.
Even as I was working on that piece, Kaspersky Lab came out with a warning that hackers (possibly working out of South Korea) have been targeting businessmen through hotel WiFis for 7 years.
Business executives visiting luxury hotels in Asia have been infected with malware delivered over public Wi-Fi networks, Russian security firm Kaspersky Lab has discovered.
The so-called ‘Darkhotel’ hackers managed to tweak their code to ensure that only machines belonging to specific targets were infected, not all visitors’ PCs, and may have included state-sponsored hacking.
They also seemed to have advance knowledge of their victims’ whereabouts and which hotels they would be visiting, Kaspersky said.
CEOs, senior vice presidents, sales and marketing directors and top research and development staff were amongst those on the attackers’ hit list, though no specific names have been revealed.
As soon as they logged onto the hotel Wi-Fi, targets would be greeted with a pop-up asking them to download updates to popular software, such as GoogleToolbar, Adobe Flash and Windows Messenger. But giving permission to the download would only lead to infection and subsequent theft of data from their devices.
You think alleged Asian organized crime members might know that hotel WiFi is totally insecure (even setting aside China's habit of stealing data this way)? You think they may have heard of their peers getting hacked in luxury hotels?
Maybe that’s why they ordered up so many DSL lines.
In any case, DOJ’s argument that there’s no legitimate need for wired Internet access just went out the window.
As I laid out when he gave his speech at Brookings, Jim Comey's public explanation for needing back doors to Apple and Android phones doesn't hold up. He conflated stored communications with communications in transit, ignored the risks of a back door (which he called a front door), and presented law enforcement successes that, across the board, do not support his claim to need one.
So yesterday Comey and others had a classified briefing, where no one would be able to shred his flawed case.
FBI and Justice Department officials met with House staffers this week for a classified briefing on how encryption is hurting police investigations, according to staffers familiar with the meeting.
The briefing included Democratic and Republican aides for the House Judiciary and Intelligence Committees, the staffers said. The meeting was held in a classified room, and aides are forbidden from revealing what was discussed.
Comey called for Congress to revise the law to create a “level playing field” so that Google, Apple, and Facebook have the same obligation as AT&T and Verizon to help police.
National Journal listed out those companies, by the way. Facebook, for example, did not appear in Comey's Brookings speech, where he used the "level playing field" line.
I was puzzled by Comey’s inclusion of Facebook here until I saw this news.
To make their experience more consistent with our goals of accessibility and security, we have begun an experiment which makes Facebook available directly over Tor network at the following URL:
[ NOTE: link will only work in Tor-enabled browsers ]
Facebook Onion Address
Facebook’s onion address provides a way to access Facebook through Tor without losing the cryptographic protections provided by the Tor cloud.
The idea is that the Facebook onion address connects you to Facebook’s Core WWW Infrastructure - check the URL again, you’ll see what we did there – and it reflects one benefit of accessing Facebook this way: that it provides end-to-end communication, from your browser directly into a Facebook datacentre.
All that got me thinking about what Comey said in the classified briefing — about the real reason he wants to make us all less secure.
And I can’t help but wonder whether it’s metadata.
The government aspires to get universal potential coverage of telephony metadata (at least) under USA Freedom Act, with the ability to force cooperation. But I'm not sure that Apple, especially, would be able to provide iMessage metadata, meaning iPhone users can text without leaving metadata available either to AT&T (because iMessage bypasses the telecom network) or to Apple itself (because it no longer retains a guaranteed remote copy).
And without metadata, FBI and NSA would be unable to demonstrate the need to do a wiretap of such content.
Ah well, once again I reflect on what a pity it is that FBI didn’t investigate the theft of data from these same companies, providing them a very good reason to lock it all up from sophisticated online criminals like GCHQ.
As you’ve likely read, NSA’s Chief Technology Officer has so little to keep him busy he’s also planning on working 20 hours a week for Keith Alexander’s new boondoggle.
Under the arrangement, which was confirmed by Alexander and current intelligence officials, NSA’s Chief Technical Officer, Patrick Dowd, is allowed to work up to 20 hours a week at IronNet Cybersecurity Inc, the private firm led by Alexander, a retired Army general and his former boss.
The arrangement was approved by top NSA managers, current and former officials said. It does not appear to break any laws and it could not be determined whether Dowd has actually begun working for Alexander, who retired from the NSA in March.
Dowd is the guy with whom Alexander filed 7 patents for work developed at NSA.
During his time at the NSA, Alexander said he filed seven patents, four of which are still pending, that relate to an “end-to-end cybersecurity solution.” Alexander said his co-inventor on the patents was Patrick Dowd, the chief technical officer and chief architect of the NSA. Alexander said the patented solution, which he wouldn’t describe in detail given the sensitive nature of the work, involved “a line of thought about how you’d systematically do cybersecurity in a network.”
That sounds hard to distinguish from Alexander’s new venture. But, he insisted, the behavior modeling and other key characteristics represent a fundamentally new approach that will “jump” ahead of the technology that’s now being used in government and in the private sector.
Presumably, bringing Dowd on board will both make Alexander look more technologically credible and let Dowd profit off all the new patents Alexander is filing for, which he claims don’t derive from work taxpayers paid for.
Capitalism, baby! Privatizing the profits paid for by the public!
All that said, I’m wondering whether this is about something else — and not just greed.
Yesterday, as part of a bankster cybersecurity shindig, one of Alexander's big-name clients, SIFMA, rolled out its "Cybersecurity Regulatory Guidance." It's about what you'd expect from a bankster organization: demands that the government give the industry what it needs, use a uniform light hand while regulating, show some flexibility in case that light hand becomes onerous, and never ever hold the financial industry accountable for its own shortcomings.
Bullet point 2 (Bullet point 1 basically says the US government has a big role to play here which may be true but also sounds like a demand for a handout) lays out the kind of public-private partnership SIFMA expects.
Principle 2: Recognize the Value of Public–Private Collaboration in the Development of Agency Guidance
Each party brings knowledge and influence that is required to be successful, and each has a role in making protections effective. Firms can assist regulators in making agency guidance better and more effective as it is in everyone’s best interests to protect the financial industry and the customers it serves.
The NIST Cybersecurity Framework is a useful model of public-private cooperation that should guide the development of agency guidance. NIST has done a tremendous job reaching out to stakeholders and strengthening collaboration with financial critical infrastructure. It is through such collaboration that voluntary standards for cybersecurity can be developed. NIST has raised awareness about the standards, encouraged its use, assisted the financial sector in refining its application to financial critical infrastructure components, and incorporated feedback from members of the financial sector.
In this vein, we suggest that an agency working group be established that can facilitate coordination across the agencies, including independent agencies and SROs, and receive industry feedback on suggested approaches to cybersecurity. SIFMA views the improvement of cybersecurity regulatory guidance and industry improvement efforts as an ongoing process.
Effective collaboration between the private and public sectors is critical today and in the future as the threat and the sector’s capabilities continue to evolve.
Again, this public-private partnership may be necessary in the case of cybersecurity for critical infrastructure, but banks have a history of treating such partnerships as lucrative handouts (and the principles document's concern about privacy has more to do with hiding the banks' own deeds than with the trust of their customers, which it discusses only secondarily). Moreover, experience suggests that when "firms assist regulators in making agency guidance better," it usually has to do with socializing risk.
In any case, given that the banks are, once again, demanding socialism to protect themselves, is it any wonder NSA’s top technology officer is spending half his days at a boondoggle serving these banks?
And given the last decade of impunity the banks have enjoyed, what better place to roll out an exotic counter-attacking cybersecurity approach (except for the risk that it’ll bring down the fragile house of finance cards by mistake)?
Alexander said that his new approach is different than anything that’s been done before because it uses “behavioral models” to help predict what a hacker is likely to do. Rather than relying on analysis of malicious software to try to catch a hacker in the act, Alexander aims to spot them early on in their plots.
One of the most recent stories on the JP Morgan hack (which actually appears to be the kind of Treasuremapping NSA does of other countries' critical infrastructure all the time) made it clear the banksters are already doing the kind of data sharing that Keith Alexander wailed he needed immunity to encourage.
The F.B.I., after being contacted by JPMorgan, took the I.P. addresses the hackers were believed to have used to breach JPMorgan’s system to other financial institutions, including Deutsche Bank and Bank of America, these people said. The purpose: to see whether the same intruders had tried to hack into their systems as well. The banks are also sharing information among themselves.
So clearly SIFMA’s call for sharing represents something more, probably akin to the kind of socialism it benefits from in its members’ core business models.
In the intelligence world, they use the term “sheep dip” to describe how they stick people subject to one authority — such as the SEALs who killed Osama bin Laden — under a more convenient authority — such as CIA’s covert status. Maybe that’s what’s really going on here: sheep dipping NSA’s top tech person into the private sector where his work will evade even the scant oversight given to NSA.
If SIFMA’s looking for the kind of socialistic sharing akin to free money, then why should we be surprised the boondoggle at the center of it plans to share actual tech personnel?
Update: Reuters reports the deal’s off. Apparently even Congress (beyond Alan Grayson, who has long had questions about Alexander’s boondoggle) had a problem with this.
I said somewhere that those wailing about Apple’s new default crypto in its handsets are either lying or are confused about the difference between a phone service and a storage device.
For the moment, I'm going to put FBI Director Jim Comey in the latter category. I'm going to do so, first, because at his Brookings talk he corrected his false statement — which I had pointed out — on 60 Minutes (a statement he calls insufficiently lawyered) that the FBI cannot get content without an order. But while Comey admitted that FBI can read content it has collected incidentally, he made another misleading statement. He said FBI does so during "investigations." It also does so during "assessments," which don't require anywhere near the same standard of evidence or oversight.
I’m also going to assume Comey is having service/device confusion because that kind of confusion permeated his presentation more generally.
There was the confusion exhibited when he tried to suggest a “back door” into a device wasn’t one if FBI simply called it a “front door.”
We aren’t seeking a back-door approach. We want to use the front door, with clarity and transparency, and with clear guidance provided by law. We are completely comfortable with court orders and legal process—front doors that provide the evidence and information we need to investigate crime and prevent terrorist attacks.
And more specifically, when Comey called for rewriting CALEA, he called for something that would affect only a tiny bit of what Apple had made unavailable by encrypting its phones.
Current law governing the interception of communications requires telecommunication carriers and broadband providers to build interception capabilities into their networks for court-ordered surveillance. But that law, the Communications Assistance for Law Enforcement Act, or CALEA, was enacted 20 years ago—a lifetime in the Internet age. And it doesn’t cover new means of communication. Thousands of companies provide some form of communication service, and most are not required by statute to provide lawful intercept capabilities to law enforcement. [my emphasis]
As I have noted, the main thing that will become unavailable under Apple’s new operating system is iMessage chats if the users are not using default iCloud back-ups (which would otherwise keep a copy of the chat).
But the rest of it — all the data that will be stored only on an iPhone if people opt out of Apple’s default iCloud backups — will be unaffected if what Comey is planning to do is require intercept ability for every message sent.
Now consider the 5 examples Comey uses to claim FBI needs this. I’ll return to these later, but in almost all cases, Comey seems to be overselling his case.
First, there’s the case of two phones with content on them.
In Louisiana, a known sex offender posed as a teenage girl to entice a 12-year-old boy to sneak out of his house to meet the supposed young girl. This predator, posing as a taxi driver, murdered the young boy, and tried to alter and delete evidence on both his and the victim’s cell phones to cover up his crime. Both phones were instrumental in showing that the suspect enticed this child into his taxi. He was sentenced to death in April of this year.
On first glance this sounds like a case where the phones were needed. But assuming this is the case in question, that appears wrong. The culprit, Brian Horn, was IDed by multiple witnesses as being in the neighborhood, and evidence led to his cab. There was DNA evidence. And Horn and his victim had exchanged texts. Presumably, records of those texts, and quite possibly the actual content, were available from the provider.
Then there’s another texting case.
In Los Angeles, police investigated the death of a 2-year-old girl from blunt force trauma to her head. There were no witnesses. Text messages from the parents’ cell phones to one another, and to their family members, proved the mother caused this young girl’s death, and that the father knew what was happening and failed to stop it.
Text messages also proved that the defendants failed to seek medical attention for hours while their daughter convulsed in her crib. They even went so far as to paint her tiny body with blue paint—to cover her bruises—before calling 911. Confronted with this evidence, both parents pled guilty.
This seems to be another case where the texts were probably available in other places, especially given how many people received them.
Then there’s another texting story — this is the only one where Comey mentioned warrants, and therefore the only real parallel to what he’s pitching.
In Kansas City, the DEA investigated a drug trafficking organization tied to heroin distribution, homicides, and robberies. The DEA obtained search warrants for several phones used by the group. Text messages found on the phones outlined the group’s distribution chain and tied the group to a supply of lethal heroin that had caused 12 overdoses—and five deaths—including several high school students.
Again, these texts were likely available with the providers.
Then Comey lists a case where the culprits were first found with a traffic camera.
In Sacramento, a young couple and their four dogs were walking down the street at night when a car ran a red light and struck them—killing their four dogs, severing the young man’s leg, and leaving the young woman in critical condition. The driver left the scene, and the young man died days later.
Using “red light cameras” near the scene of the accident, the California Highway Patrol identified and arrested a suspect and seized his smartphone. GPS data on his phone placed the suspect at the scene of the accident, and revealed that he had fled California shortly thereafter. He was convicted of second-degree murder and is serving a sentence of 25 years to life.
The case relied on GPS data, which would surely have been available from the provider. So: traffic camera, GPS. Seriously, FBI, do you think this makes your case?
Perhaps Comey's only convincing example involves an exoneration based on a video — though on Apple's default settings that video, too, would have been available elsewhere.
The evidence we find also helps exonerate innocent people. In Kansas, data from a cell phone was used to prove the innocence of several teens accused of rape. Without access to this phone, or the ability to recover a deleted video, several innocent young men could have been wrongly convicted.
Again, given Apple’s default settings, this video would be available on iCloud. But if it was only available on the phone, and it was the only thing that exonerated the men, then it would count.
Update: I'm not sure, but this sounds like the Daisy Coleman case, which was outside Kansas City, MO, and did involve a phone video that (at least as far as I know) was never recovered. The guy she accused of raping her pleaded guilty to misdemeanor child endangerment — he dumped her, unconscious, in freezing weather outside her house.
I will keep checking into these, but none of them is a definitive case. All of this evidence would normally, given default settings, be available from providers. Much of it would be available on the phones of people besides the culprit. In the one easily identifiable case, there was a ton of other evidence. And in two of these cases, the evidence was important in getting a guilty plea, not in solving the crime.
But underlying it all is the key point: Phones are storage devices, but they are primarily communication devices, and even as storage devices the default is that they’re just a localized copy of data also stored elsewhere. That means it is very rare that evidence is only available on a phone. Which means it is rare that such evidence will only be available in storage and not via intercept or remote storage.
Today, Jim Comey will give what will surely be an aggressively moderated (by Ben Wittes!) talk at Brookings, arguing that Apple should not offer its customers basic privacy tools (congratulations to NYT’s Michael Schmidt for beating the rush of publishing credulous reports on this speech).
Mr. Comey will say that encryption technologies used on these devices, like the new iPhone, have become so sophisticated that crimes will go unsolved because law enforcement officers will not be able to get information from them, according to a senior F.B.I. official who provided a preview of the speech.
Never mind the numbers, which I laid out here. While Apple doesn't fully break out last year's device requests, it says the vast majority of the 3,431 it responded to came in response to lost or stolen phone reports, not law enforcement seeking data on the holder. Given that iPhones represent the better part of the estimated 3.1 million phones that will be stolen this year, that's a modest claim. Moreover, given that Apple provided content off the cloud to law enforcement only 155 times last year, it's unlikely we're talking about a common law enforcement practice.
At least not with warrants. Warrantless fishing expeditions are another issue.
As far back as 2010, CBP was conducting 4,600 device searches at the border. Given that 20% of the country will be carrying iPhones this year, and a much higher proportion of the Americans who cross international borders will be carrying one, a reasonable guess would be that CBP searches 1,000 iPhones a year (and it could be several times that). Cops used to be able to do the same at traffic stops until this year's Riley v. California decision; I've not seen numbers on how many searches they did, but given that most of those (like the border searches) were fishing expeditions, it's not clear how many will be able to continue, because law enforcement won't have probable cause to get a warrant.
So the claims law enforcement is making about needing a warrant to get content stored on, and only on, iPhones don't hold up, except for very narrow exceptions (cops may lose access to iMessage conversations if all users in question know not to store those conversations on iCloud, which is otherwise the default).
But that’s not the best argument I’ve seen for why Comey should back off this campaign.
As a number of people (including the credulous Schmidt) point out, Comey repeated his attack on Apple on the 60 Minutes show Sunday.
James Comey: The notion that we would market devices that would allow someone to place themselves beyond the law, troubles me a lot. As a country, I don’t know why we would want to put people beyond the law. That is, sell cars with trunks that couldn’t ever be opened by law enforcement with a court order, or sell an apartment that could never be entered even by law enforcement. Would you want to live in that neighborhood? This is a similar concern. The notion that people have devices, again, that with court orders, based on a showing of probable cause in a case involving kidnapping or child exploitation or terrorism, we could never open that phone? My sense is that we’ve gone too far when we’ve gone there
What no one I’ve seen points out is there was an equally charismatic FBI Director named Jim Comey on 60 Minutes a week ago Sunday (these are actually the same interview, or at least use the same clip to marvel that Comey is 6’8″, which raises interesting questions about why both these clips weren’t on the same show).
That Jim Comey made a really compelling argument about how most people don’t understand how vulnerable they are now that they live their lives online.
James Comey: I don’t think so. I think there’s something about sitting in front of your own computer working on your own banking, your own health care, your own social life that makes it hard to understand the danger. I mean, the Internet is the most dangerous parking lot imaginable. But if you were crossing a mall parking lot late at night, your entire sense of danger would be heightened. You would stand straight. You’d walk quickly. You’d know where you were going. You would look for light. Folks are wandering around that proverbial parking lot of the Internet all day long, without giving it a thought to whose attachments they’re opening, what sites they’re visiting. And that makes it easy for the bad guys.
Scott Pelley: So tell folks at home what they need to know.
James Comey: When someone sends you an email, they are knocking on your door. And when you open the attachment, without looking through the peephole to see who it is, you just opened the door and let a stranger into your life, where everything you care about is.
That Jim Comey — the guy worried about victims of computer crime — laid out the horrible things that can happen when criminals access all the data you’ve got on devices.
Scott Pelley: And what might that attachment do?
James Comey: Well, take over the computer, lock the computer, and then demand a ransom payment before it would unlock. Steal images from your system of your children or your, you know, or steal your banking information, take your entire life.
Now, victim-concerned Jim Comey seems to think we can avoid such vulnerability by educating people not to click on attachments they don’t recognize. But of course, the millions who have their cell phones stolen don’t even need to click on an attachment. The crooks will have all their victims’ data right in their hands.
Unless, of course, users have made that data inaccessible. One easy way to do that is by making easy encryption the default.
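To see why default encryption protects a stolen device, consider what encryption at rest actually does. The toy sketch below is NOT real cryptography (actual phones use hardware-backed AES); it is a minimal XOR keystream cipher, with all names invented for illustration, showing the one property that matters here: without the key, the data on the device is unreadable noise.

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream by hashing key + nonce + counter."""
    blocks = []
    counter = 0
    while sum(len(b) for b in blocks) < length:
        blocks.append(hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest())
        counter += 1
    return b"".join(blocks)[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Prepend a random nonce, then XOR the plaintext with the keystream."""
    nonce = secrets.token_bytes(16)
    ks = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(key: bytes, blob: bytes) -> bytes:
    """Reverse the XOR using the same key and the stored nonce."""
    nonce, ciphertext = blob[:16], blob[16:]
    ks = _keystream(key, nonce, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, ks))
```

The point of the sketch: a thief holding the ciphertext but not the key recovers garbage, which is exactly the protection Comey wants Apple to withhold as a default.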
Victim-concerned Jim Comey might offer 60 Minutes viewers two pieces of advice: be careful of what you click on, and encrypt those devices that you carry with you — at risk of being lost or stolen — all the time.
Of course, that would set off a pretty intense fight with fear-monger Comey, the guy showing up to Brookings today to argue Apple’s customers shouldn’t have this common sense protection.
That would be a debate I’d enjoy watching Ben Wittes try to referee.
JPMorgan’s Form 8-K filed on Thursday with the Securities and Exchange Commission advises:
On October 2, 2014, JPMorgan Chase & Co. (“JPMorgan Chase” or the “Firm”) updated information for its customers, on its Chase.com and JPMorganOnline websites and on the Chase and J.P. Morgan mobile applications, about the previously disclosed cyberattack against the Firm. The Firm disclosed that:
• User contact information – name, address, phone number and email address – and internal JPMorgan Chase information relating to such users have been compromised.
• The compromised data impacts approximately 76 million households and 7 million small businesses.
• However, there is no evidence that account information for such affected customers – account numbers, passwords, user IDs, dates of birth or Social Security numbers – was compromised during this attack.
• As of such date, the Firm continues not to have seen any unusual customer fraud related to this incident.
• JPMorgan Chase customers are not liable for unauthorized transactions on their account that they promptly alert the Firm to.
The Firm continues to vigilantly monitor the situation and is continuing to investigate the matter. In addition, the Firm is fully cooperating with government agencies in connection with their investigations.
According to ZDNet, a forensic security firm suggests the bank’s users’ accounts are now at greater risk of compromise and that password changes and two-factor authentication should be implemented to address the risk.
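The two-factor authentication the forensic firm recommends most commonly takes the form of time-based one-time passwords (TOTP, specified in RFC 6238): the server and the user’s phone share a secret, and both derive a short code from it that changes every 30 seconds, so a stolen password alone is not enough. A minimal stdlib sketch (the function name and parameters are mine, not from any particular bank’s implementation):

```python
import base64
import hashlib
import hmac
import struct

def totp(secret_b32: str, at_time: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password over HMAC-SHA1."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(at_time // step)                       # 30-second time window
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Run against the RFC 6238 SHA-1 test vector (secret `12345678901234567890`, time 59), this yields the published code 94287082, which is how you check such an implementation.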
However, the 8-K’s wording indicates a different security risk altogether, since the users’ passwords and Social Security numbers were not compromised.
The disclosure of what information was compromised, combined with earlier reporting about the breach, more closely matches a description of the data collected by the National Security Agency’s TREASURE MAP intelligence collection program. TREASURE MAP gathered information about networks, including their nodes, but not data created by users at the end nodes of the network. The application delineated the path to those ends — the physical ends, not merely the virtual ends, of the network.
Map the entire Internet — any device, anywhere, all the time. — NSA TREASUREMAP PPT
Last week, The Intercept and Spiegel broke the story of NSA’s TREASUREMAP, an effort to map cyberspace, relying on both NSA’s defensive (IAD) and offensive (TAO) faces.
As Rayne laid out, it aspires to map out cyberspace down to the device level. As all great military mapping does, this will permit the US to identify strategic weaknesses and visualize a battlefield — even before many of its adversaries realize they’re on a battlefield.
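In data terms, mapping a network "down to the device level" means building a graph of routers and end devices and being able to trace a route to any one of them. A toy sketch of that idea, with an entirely hypothetical topology:

```python
from collections import deque

# Hypothetical topology: each node lists the nodes reachable from it
topology = {
    "isp-core": ["regional-router"],
    "regional-router": ["office-gateway", "home-router"],
    "office-gateway": ["workstation-1", "printer"],
    "home-router": ["laptop", "phone"],
}

def path_to(topology, start, target):
    """Breadth-first search for the hop-by-hop route from start to a target device."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in topology.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # target not reachable from start
```

Once you hold such a graph, finding the weak link on the way to any device is a query, not an operation — which is what makes this kind of mapping strategically valuable.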
Against that background, NYT provided more details on the penetration of JP Morgan’s networks that has been blamed on Russia. The new details make it clear this was about reconnaissance, not — at least not yet — theft.
Over two months, hackers gained entry to dozens of the bank’s servers, said three people with knowledge of the bank’s investigation into the episode who spoke on the condition of anonymity. This, they said, potentially gave the hackers a window into how the bank’s individual computers work.
They said it might be difficult for the bank to find every last vulnerability and be sure that its systems were thoroughly secured against future attack.
The hackers were able to review information about a million customer accounts and gain access to a list of the software applications installed on the bank’s computers. One person briefed said more than 90 of the bank’s servers were affected, effectively giving the hackers high-level administrative privileges in the systems.
Hackers can potentially crosscheck JPMorgan programs and applications with known security weaknesses, looking for one that has not yet been patched so they can regain access.
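That crosscheck is mechanical once you hold the application list: compare each installed version against the version in which a known flaw was fixed. A sketch under invented data (the inventory is hypothetical; CVE-2014-0160 is the real Heartbleed identifier, used here purely as an example advisory):

```python
import re

# Illustrative inventory and advisories -- not JPMorgan's actual software list
inventory = {"openssl": "1.0.1e", "apache-tomcat": "8.5.50"}
advisories = [
    {"package": "openssl", "fixed_in": "1.0.1g", "id": "CVE-2014-0160"},
    {"package": "apache-tomcat", "fixed_in": "8.5.40", "id": "CVE-2019-0232"},
]

def parse_version(v: str):
    """Split '1.0.1e' into comparable ((1, ''), (0, ''), (1, 'e')) tuples."""
    return tuple((int(m.group(1)), m.group(2)) for m in re.finditer(r"(\d+)([a-z]*)", v))

def find_unpatched(inventory, advisories):
    """Report installed packages older than the advisory's fixed version."""
    hits = []
    for adv in advisories:
        installed = inventory.get(adv["package"])
        if installed and parse_version(installed) < parse_version(adv["fixed_in"]):
            hits.append((adv["package"], installed, adv["id"]))
    return hits
```

This is why a leaked application inventory is dangerous even without any stolen credentials: it tells an attacker exactly which door to try next.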
Though the infiltrators did observe metadata — which, the NSA assures us, is not really all that compromising.
A fourth person with knowledge of the matter, also speaking on condition of anonymity, said hackers had not gained access to account holders’ financial information or Social Security numbers, and may have reviewed only names, addresses and phone numbers.
I’m not trying to make light of the mapping of one of America’s most important banks. Such surveillance could well enable the same kind of sophisticated attack we launched against Iran, which was preceded by similar reconnaissance.
But we should keep in mind what the US has been doing as we consider these reports. If and when Russia or Germany catch us conducting similar reconnaissance on the networks of their private companies, they will surely make a big stink, as we are now with JP Morgan (though the response to the Spiegel story has been muted enough that I suspect Germany’s intelligence services knew about that one, particularly given NSA’s reliance on Germany for targets in Africa).
But if the US is going to treat digital reconnaissance as routine spying (and the President’s cyberwar Presidential Policy Directive makes it pretty clear we consider our own similar reconnaissance to be mere clandestine spying), then we should expect the same treatment of our most lucrative targets.
That doesn’t make it legal or acceptable. But that does make it equivalent to what we’re doing to the rest of the world.
One final point. If you’re going to map the entire Internet, any device, anywhere, by definition you need to map America’s Internet as well. Are we so sure our own Intelligence Community hasn’t been snooping in JP Morgan’s networks?
The most chilling part of this reporting is a network engineer’s reaction (see here on video) when he realizes he is marked or targeted as a subject of observation. He’s assured it’s not personal, it’s about the work he does – but his reaction still telegraphs stress. An intelligence agency can get to him, has gotten to him; he’s touchable.
The truth is that almost any of us who follow national security, cyber warfare, or information technology are potential subjects depending on our work or play.
The metadata we generate is only part of the observation process; it provides information about our individual patterns of behavior, but may not actually disclose where we are.
TREASURE MAP goes further, by providing the layout of the network on which any of us are generating metadata. But there is some other component either within TREASURE MAP, or within a complementary tool, that provides the physical address of any networked electronic device.
The NSA has the ability to track individuals not only by Internet Protocol addresses (IP addresses), but by media access control addresses (MAC addresses), according to a recent interview with Snowden by James Bamford in Wired. This little nugget was a throwaway; perhaps readers already assumed this capability existed, or didn’t understand the implications:
…But Snowden’s disenchantment would only grow. It was bad enough when spies were getting bankers drunk to recruit them; now he was learning about targeted killings and mass surveillance, all piped into monitors at the NSA facilities around the world. Snowden would watch as military and CIA drones silently turned people into body parts. And he would also begin to appreciate the enormous scope of the NSA’s surveillance capabilities, an ability to map the movement of everyone in a city by monitoring their MAC address, a unique identifier emitted by every cell phone, computer, and other electronic device.
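What makes the MAC address such a powerful tracking handle is that it is burned into the device’s network hardware rather than assigned by the network, and its first three bytes (the OUI, or organizationally unique identifier) even identify the manufacturer. A minimal sketch — the vendor table here is illustrative; real lookups use the IEEE’s OUI registry:

```python
def mac_oui(mac: str) -> str:
    """Return the 24-bit vendor prefix (OUI) of a MAC address, normalized."""
    parts = mac.lower().replace("-", ":").split(":")
    return ":".join(parts[:3])

# Illustrative vendor table only; the authoritative source is the IEEE registry
OUI_VENDORS = {"00:50:56": "VMware", "3c:15:c2": "Apple"}

def vendor_of(mac: str) -> str:
    """Best-effort manufacturer lookup from the OUI prefix."""
    return OUI_VENDORS.get(mac_oui(mac), "unknown")
```

Because the MAC stays constant as a phone moves between networks, anyone who can observe it at multiple locations can stitch those observations into a movement track — exactly the capability Snowden describes.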
In simple terms, IP addresses are like phone numbers — they are assigned. They can be static; a printer on a business network, for example, may be assigned a static address to assure it is always available to accept print orders at a stationary location. IP addresses may also be dynamic; if there’s an ongoing change in users on a network, allowing them to use a temporary address works best. Think of visits to your local coffee shop where customers use WiFi as an example. When they leave the premises, their IP address will soon revert to the pool available on the WiFi router.
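That coffee-shop arrangement is what DHCP lease pools do: hand out a temporary IP keyed to the device’s MAC, and reclaim it when the device leaves. A toy sketch of the idea (class and method names are mine, and real DHCP adds lease timers, renewal, and much more):

```python
import ipaddress

class LeasePool:
    """Toy DHCP-style pool: hands out dynamic addresses, reclaims them on release."""

    def __init__(self, network: str):
        net = ipaddress.ip_network(network)
        self.free = [str(host) for host in net.hosts()]  # addresses available to lend
        self.leases = {}                                 # MAC -> currently leased IP

    def request(self, mac: str) -> str:
        """Give a device its existing lease, or the next free address."""
        if mac in self.leases:
            return self.leases[mac]
        ip = self.free.pop(0)
        self.leases[mac] = ip
        return ip

    def release(self, mac: str) -> None:
        """Return a device's address to the pool when it leaves."""
        ip = self.leases.pop(mac, None)
        if ip:
            self.free.append(ip)
```

Note the asymmetry this creates for surveillance: the IP is transient and tells you little about the device, while the MAC in the lease table is the stable identifier that follows the device from network to network.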