As you’ve likely read, NSA’s Chief Technology Officer has so little to keep him busy he’s also planning on working 20 hours a week for Keith Alexander’s new boondoggle.
Under the arrangement, which was confirmed by Alexander and current intelligence officials, NSA’s Chief Technical Officer, Patrick Dowd, is allowed to work up to 20 hours a week at IronNet Cybersecurity Inc, the private firm led by Alexander, a retired Army general and his former boss.
The arrangement was approved by top NSA managers, current and former officials said. It does not appear to break any laws and it could not be determined whether Dowd has actually begun working for Alexander, who retired from the NSA in March.
Dowd is the guy with whom Alexander filed 7 patents for work developed at NSA.
During his time at the NSA, Alexander said he filed seven patents, four of which are still pending, that relate to an “end-to-end cybersecurity solution.” Alexander said his co-inventor on the patents was Patrick Dowd, the chief technical officer and chief architect of the NSA. Alexander said the patented solution, which he wouldn’t describe in detail given the sensitive nature of the work, involved “a line of thought about how you’d systematically do cybersecurity in a network.”
That sounds hard to distinguish from Alexander’s new venture. But, he insisted, the behavior modeling and other key characteristics represent a fundamentally new approach that will “jump” ahead of the technology that’s now being used in government and in the private sector.
Presumably, bringing Dowd on board will both make Alexander look more technologically credible and let Dowd profit off all the new patents Alexander is filing for, which he claims don’t derive from work taxpayers paid for.
Capitalism, baby! Privatizing the profits paid for by the public!
All that said, I’m wondering whether this is about something else — and not just greed.
Yesterday, as part of a bankster cybersecurity shindig, one of Alexander’s big-name clients, SIFMA, rolled out its “Cybersecurity Regulatory Guidance.” It’s about what you’d expect from a bankster organization: demands that the government give what it needs, use a uniform light hand while regulating, show some flexibility in case that light hand becomes onerous, and never ever hold the financial industry accountable for its own shortcomings.
Bullet point 2 (Bullet point 1 basically says the US government has a big role to play here which may be true but also sounds like a demand for a handout) lays out the kind of public-private partnership SIFMA expects.
Principle 2: Recognize the Value of Public–Private Collaboration in the Development of Agency Guidance
Each party brings knowledge and influence that is required to be successful, and each has a role in making protections effective. Firms can assist regulators in making agency guidance better and more effective as it is in everyone’s best interests to protect the financial industry and the customers it serves.
The NIST Cybersecurity Framework is a useful model of public-private cooperation that should guide the development of agency guidance. NIST has done a tremendous job reaching out to stakeholders and strengthening collaboration with financial critical infrastructure. It is through such collaboration that voluntary standards for cybersecurity can be developed. NIST has raised awareness about the standards, encouraged its use, assisted the financial sector in refining its application to financial critical infrastructure components, and incorporated feedback from members of the financial sector.
In this vein, we suggest that an agency working group be established that can facilitate coordination across the agencies, including independent agencies and SROs, and receive industry feedback on suggested approaches to cybersecurity. SIFMA views the improvement of cybersecurity regulatory guidance and industry improvement efforts as an ongoing process.
Effective collaboration between the private and public sectors is critical today and in the future as the threat and the sector’s capabilities continue to evolve.
Again, this public-private partnership may be necessary in the case of cybersecurity for critical infrastructure, but banks have a history of treating such partnership as lucrative handouts (and the principles document’s concern about privacy has more to do with hiding their own deeds, and only secondarily discusses the trust of their customers). Moreover, experience suggests that when “firms assist regulators in making agency guidance better,” it usually has to do with socializing risk.
In any case, given that the banks are, once again, demanding socialism to protect themselves, is it any wonder NSA’s top technology officer is spending half his days at a boondoggle serving these banks?
And given the last decade of impunity the banks have enjoyed, what better place to roll out an exotic counter-attacking cybersecurity approach (except for the risk that it’ll bring down the fragile house of finance cards by mistake)?
Alexander said that his new approach is different than anything that’s been done before because it uses “behavioral models” to help predict what a hacker is likely to do. Rather than relying on analysis of malicious software to try to catch a hacker in the act, Alexander aims to spot them early on in their plots.
One of the most recent stories on the JP Morgan hack (which actually appears to be the kind of Treasuremapping NSA does of other countries’ critical infrastructure all the time) made it clear the banksters are already doing the kind of data sharing that Keith Alexander wailed he needed immunity to encourage.
The F.B.I., after being contacted by JPMorgan, took the I.P. addresses the hackers were believed to have used to breach JPMorgan’s system to other financial institutions, including Deutsche Bank and Bank of America, these people said. The purpose: to see whether the same intruders had tried to hack into their systems as well. The banks are also sharing information among themselves.
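The kind of sharing described here is, technically, very simple: take the IP addresses believed to have been used in the breach and check them against other institutions’ access logs. A minimal sketch of that check, using invented RFC 5737 documentation addresses and an invented log format (nothing here comes from the actual investigation):

```python
# Hypothetical sketch: checking access logs against a shared list of
# suspect IP addresses (indicators of compromise). The addresses are
# RFC 5737 documentation addresses; the log format is invented.

SUSPECT_IPS = {"203.0.113.7", "198.51.100.42"}

def find_matches(log_lines, suspect_ips):
    """Return (line_number, ip) pairs whose source IP is on the suspect list."""
    hits = []
    for lineno, line in enumerate(log_lines, start=1):
        ip = line.split()[0]  # assume common log format: source IP is the first field
        if ip in suspect_ips:
            hits.append((lineno, ip))
    return hits

logs = [
    '198.51.100.42 - - [02/Oct/2014] "GET /login HTTP/1.1" 200',
    '192.0.2.10 - - [02/Oct/2014] "GET /account HTTP/1.1" 200',
]
print(find_matches(logs, SUSPECT_IPS))  # → [(1, '198.51.100.42')]
```

Which is the point: this level of sharing requires nothing more than exchanging lists of addresses, no immunity regime needed.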
So clearly SIFMA’s call for sharing represents something more, probably akin to the kind of socialism it benefits from in its members’ core business models.
In the intelligence world, they use the term “sheep dip” to describe how they stick people subject to one authority — such as the SEALs who killed Osama bin Laden — under a more convenient authority — such as CIA’s covert status. Maybe that’s what’s really going on here: sheep dipping NSA’s top tech person into the private sector where his work will evade even the scant oversight given to NSA.
If SIFMA’s looking for the kind of socialistic sharing akin to free money, then why should we be surprised the boondoggle at the center of it plans to share actual tech personnel?
Update: Reuters reports the deal’s off. Apparently even Congress (beyond Alan Grayson, who has long had questions about Alexander’s boondoggle) had a problem with this.
I said somewhere that those wailing about Apple’s new default crypto in its handsets are either lying or are confused about the difference between a phone service and a storage device.
For the moment, I’m going to put FBI Director Jim Comey in the latter category. I’m going to do so, first, because at his Brookings talk he corrected his false statement — which I had pointed out — on 60 Minutes (what he calls insufficiently lawyered) that the FBI cannot get content without an order. But while Comey admitted that FBI can read content it has collected incidentally, he made another misleading statement: he said FBI does so during “investigations.” FBI also does so during “assessments,” which don’t require anywhere near the same standard of evidence or oversight.
I’m also going to assume Comey is having service/device confusion because that kind of confusion permeated his presentation more generally.
There was the confusion exhibited when he tried to suggest a “back door” into a device wasn’t one if FBI simply called it a “front door.”
We aren’t seeking a back-door approach. We want to use the front door, with clarity and transparency, and with clear guidance provided by law. We are completely comfortable with court orders and legal process—front doors that provide the evidence and information we need to investigate crime and prevent terrorist attacks.
And more specifically, when Comey called for rewriting CALEA, he called for something that would affect only a tiny bit of what Apple had made unavailable by encrypting its phones.
Current law governing the interception of communications requires telecommunication carriers and broadband providers to build interception capabilities into their networks for court-ordered surveillance. But that law, the Communications Assistance for Law Enforcement Act, or CALEA, was enacted 20 years ago—a lifetime in the Internet age. And it doesn’t cover new means of communication. Thousands of companies provide some form of communication service, and most are not required by statute to provide lawful intercept capabilities to law enforcement. [my emphasis]
As I have noted, the main thing that will become unavailable under Apple’s new operating system is iMessage chats if the users are not using default iCloud back-ups (which would otherwise keep a copy of the chat).
But the rest of it — all the data that will be stored only on an iPhone if people opt out of Apple’s default iCloud backups — will be unaffected if what Comey is planning to do is require intercept ability for every message sent.
Now consider the 5 examples Comey uses to claim FBI needs this. I’ll return to these later, but in almost all cases, Comey seems to be overselling his case.
First, there’s the case of two phones with content on them.
In Louisiana, a known sex offender posed as a teenage girl to entice a 12-year-old boy to sneak out of his house to meet the supposed young girl. This predator, posing as a taxi driver, murdered the young boy, and tried to alter and delete evidence on both his and the victim’s cell phones to cover up his crime. Both phones were instrumental in showing that the suspect enticed this child into his taxi. He was sentenced to death in April of this year.
On first glance this sounds like a case where the phones were needed. But assuming this is the case in question, it appears wrong. The culprit, Brian Horn, was IDed by multiple witnesses as being in the neighborhood, and evidence led to his cab. There was DNA evidence. And Horn and his victim had exchanged texts. Presumably, records of those texts, and quite possibly the actual content, were available at the provider.
Then there’s another texting case.
In Los Angeles, police investigated the death of a 2-year-old girl from blunt force trauma to her head. There were no witnesses. Text messages from the parents’ cell phones to one another, and to their family members, proved the mother caused this young girl’s death, and that the father knew what was happening and failed to stop it.
Text messages also proved that the defendants failed to seek medical attention for hours while their daughter convulsed in her crib. They even went so far as to paint her tiny body with blue paint—to cover her bruises—before calling 911. Confronted with this evidence, both parents pled guilty.
This seems to be another case where the texts were probably available in other places, especially given how many people received them.
Then there’s another texting story — this is the only one where Comey mentioned warrants, and therefore the only real parallel to what he’s pitching.
In Kansas City, the DEA investigated a drug trafficking organization tied to heroin distribution, homicides, and robberies. The DEA obtained search warrants for several phones used by the group. Text messages found on the phones outlined the group’s distribution chain and tied the group to a supply of lethal heroin that had caused 12 overdoses—and five deaths—including several high school students.
Again, these texts were likely available with the providers.
Then Comey lists a case where the culprits were first found with a traffic camera.
In Sacramento, a young couple and their four dogs were walking down the street at night when a car ran a red light and struck them—killing their four dogs, severing the young man’s leg, and leaving the young woman in critical condition. The driver left the scene, and the young man died days later.
Using “red light cameras” near the scene of the accident, the California Highway Patrol identified and arrested a suspect and seized his smartphone. GPS data on his phone placed the suspect at the scene of the accident, and revealed that he had fled California shortly thereafter. He was convicted of second-degree murder and is serving a sentence of 25 years to life.
This case relied on GPS data, which would surely have been available from the provider. So traffic camera, GPS. Seriously, FBI, do you think this makes your case?
Perhaps Comey’s only convincing example involves exoneration involving a video — though that too would have been available elsewhere on Apple’s default settings.
The evidence we find also helps exonerate innocent people. In Kansas, data from a cell phone was used to prove the innocence of several teens accused of rape. Without access to this phone, or the ability to recover a deleted video, several innocent young men could have been wrongly convicted.
Again, given Apple’s default settings, this video would be available on iCloud. But if it was only available on the phone, and it was the only thing that exonerated the men, then it would count.
Update: I’m not sure, but this sounds like the Daisy Coleman case, which was outside Kansas City, MO, and did involve a phone video that (at least as far as I know) was never recovered. The guy she accused of raping her pleaded guilty to misdemeanor child endangerment — he dumped her unconscious in freezing weather outside her house.
I will keep checking into these, but none of these are definite cases. All of this evidence would normally, given default settings, be available from providers. Much of it would be available on phones of people besides the culprit. In the one easily identifiable case, there was a ton of other evidence. In two of these cases, the evidence was important in getting a guilty plea, not in solving the crime.
But underlying it all is the key point: Phones are storage devices, but they are primarily communication devices, and even as storage devices the default is that they’re just a localized copy of data also stored elsewhere. That means it is very rare that evidence is only available on a phone. Which means it is rare that such evidence will only be available in storage and not via intercept or remote storage.
Today, Jim Comey will give what will surely be an aggressively moderated (by Ben Wittes!) talk at Brookings, arguing that Apple should not offer its customers basic privacy tools (congratulations to NYT’s Michael Schmidt for beating the rush of publishing credulous reports on this speech).
Mr. Comey will say that encryption technologies used on these devices, like the new iPhone, have become so sophisticated that crimes will go unsolved because law enforcement officers will not be able to get information from them, according to a senior F.B.I. official who provided a preview of the speech.
Never mind the numbers, which I laid out here. While Apple doesn’t break out last year’s device requests by type, it says the vast majority of the 3,431 device requests it responded to were in response to a lost or stolen phone report, not law enforcement seeking data on the phone’s holder. Given that iPhones represent the better part of the estimated 3.1 million phones that will be stolen this year, that’s a modest claim. Moreover, given that Apple only provided content off the cloud to law enforcement 155 times last year, it’s unlikely we’re talking a common law enforcement practice.
At least not with warrants. Warrantless fishing expeditions are another issue.
As far back as 2010, CBP was conducting 4,600 device searches at the border. Given that 20% of the country will be carrying iPhones this year, and a much higher number of the Americans who cross international borders will be carrying one, a reasonable guess would be that CBP searches 1,000 iPhones a year (and it could be several times that). Cops used to be able to do the same at traffic stops until this year’s Riley v. California decision; I’ve not seen numbers on how many searches they did, but given that most of those were (like the border searches) fishing expeditions, it’s not clear how many will be able to continue, because law enforcement won’t have probable cause to get a warrant.
So the claims law enforcement is making about needing to get content stored on and only on iPhones with a warrant don’t hold up, except for very narrow exceptions (cops may lose access to iMessage conversations if all users in question know not to store those conversations on iCloud, which is otherwise the default).
But that’s not the best argument I’ve seen for why Comey should back off this campaign.
As a number of people (including the credulous Schmidt) point out, Comey repeated his attack on Apple on the 60 Minutes show Sunday.
James Comey: The notion that we would market devices that would allow someone to place themselves beyond the law, troubles me a lot. As a country, I don’t know why we would want to put people beyond the law. That is, sell cars with trunks that couldn’t ever be opened by law enforcement with a court order, or sell an apartment that could never be entered even by law enforcement. Would you want to live in that neighborhood? This is a similar concern. The notion that people have devices, again, that with court orders, based on a showing of probable cause in a case involving kidnapping or child exploitation or terrorism, we could never open that phone? My sense is that we’ve gone too far when we’ve gone there
What no one I’ve seen points out is there was an equally charismatic FBI Director named Jim Comey on 60 Minutes a week ago Sunday (these are actually the same interview, or at least use the same clip to marvel that Comey is 6’8″, which raises interesting questions about why both these clips weren’t on the same show).
That Jim Comey made a really compelling argument about how most people don’t understand how vulnerable they are now that they live their lives online.
James Comey: I don’t think so. I think there’s something about sitting in front of your own computer working on your own banking, your own health care, your own social life that makes it hard to understand the danger. I mean, the Internet is the most dangerous parking lot imaginable. But if you were crossing a mall parking lot late at night, your entire sense of danger would be heightened. You would stand straight. You’d walk quickly. You’d know where you were going. You would look for light. Folks are wandering around that proverbial parking lot of the Internet all day long, without giving it a thought to whose attachments they’re opening, what sites they’re visiting. And that makes it easy for the bad guys.
Scott Pelley: So tell folks at home what they need to know.
James Comey: When someone sends you an email, they are knocking on your door. And when you open the attachment, without looking through the peephole to see who it is, you just opened the door and let a stranger into your life, where everything you care about is.
That Jim Comey — the guy worried about victims of computer crime — laid out the horrible things that can happen when criminals access all the data you’ve got on devices.
Scott Pelley: And what might that attachment do?
James Comey: Well, take over the computer, lock the computer, and then demand a ransom payment before it would unlock. Steal images from your system of your children or your, you know, or steal your banking information, take your entire life.
Now, victim-concerned Jim Comey seems to think we can avoid such vulnerability by educating people not to click on suspicious attachments. But of course, the millions who have their cell phones stolen don’t even need to click on an attachment. The crooks will have all their victims’ data available in their hand.
Unless, of course, users have made that data inaccessible. One easy way to do that is by making easy encryption the default.
Victim-concerned Jim Comey might offer 60 Minute viewers two pieces of advice: be careful of what you click on, and encrypt those devices that you carry with you — at risk of being lost or stolen — all the time.
Of course, that would set off a pretty intense fight with fear-monger Comey, the guy showing up to Brookings today to argue Apple’s customers shouldn’t have this common sense protection.
That would be a debate I’d enjoy watching Ben Wittes try to moderate.
JPMorgan’s Form 8-K filed on Thursday with the Securities and Exchange Commission advises:
On October 2, 2014, JPMorgan Chase & Co. (“JPMorgan Chase” or the “Firm”) updated information for its customers, on its Chase.com and JPMorganOnline websites and on the Chase and J.P. Morgan mobile applications, about the previously disclosed cyberattack against the Firm. The Firm disclosed that:
• User contact information – name, address, phone number and email address – and internal JPMorgan Chase information relating to such users have been compromised.
• The compromised data impacts approximately 76 million households and 7 million small businesses.
• However, there is no evidence that account information for such affected customers – account numbers, passwords, user IDs, dates of birth or Social Security numbers – was compromised during this attack.
• As of such date, the Firm continues not to have seen any unusual customer fraud related to this incident.
• JPMorgan Chase customers are not liable for unauthorized transactions on their account that they promptly alert the Firm to.
The Firm continues to vigilantly monitor the situation and is continuing to investigate the matter. In addition, the Firm is fully cooperating with government agencies in connection with their investigations.
According to ZDNet, a forensic security firm suggests the bank’s users’ accounts are now at greater risk of compromise and that password changes and two-factor authentication should be implemented to address the risk.
However, the 8-K’s wording indicates a different security risk altogether, as the users’ passwords and Social Security numbers were not compromised.
The disclosure of the information compromised, combined with earlier reporting about the breach, more closely matches a description of the data collected by the National Security Agency’s TREASURE MAP intelligence collection program. TREASURE MAP gathered information about networks, including nodes, but not data created by users at the end nodes of the network. The application delineated the path to the network’s ends, its physical ends and not merely its virtual ones.
Map the entire Internet — any device, anywhere, all the time. — NSA TREASUREMAP PPT
Last week, The Intercept and Spiegel broke the story of NSA’s TREASUREMAP, an effort to map cyberspace, relying on both NSA’s defensive (IAD) and offensive (TAO) faces.
As Rayne laid out, it aspires to map out cyberspace down to the device level. As all great military mapping does, this will permit the US to identify strategic weaknesses and visualize a battlefield — even before many of its adversaries realize they’re on a battlefield.
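At its core, this kind of mapping is graph-building: stitch together observed hop sequences (the sort of data a traceroute-style tool collects) into a topology of routers and end devices. A toy illustration with invented hostnames, nothing resembling actual TREASUREMAP internals:

```python
# Toy sketch of network topology mapping: each trace is an ordered list
# of hops from a probe to a destination; adjacent hops become edges in
# a directed graph. All hostnames are invented.

from collections import defaultdict

def build_topology(traces):
    """Build an adjacency map from observed hop sequences."""
    graph = defaultdict(set)
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            graph[a].add(b)
    return graph

traces = [
    ["gw.example.net", "core1.example.net", "host-a"],
    ["gw.example.net", "core1.example.net", "host-b"],
]
topo = build_topology(traces)
print(sorted(topo["core1.example.net"]))  # → ['host-a', 'host-b']
```

Accumulate enough traces from enough vantage points and the graph converges on the structure of the network — which is why the program’s ambition is measured in vantage points, not cleverness.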
Against that background, NYT provided more details on the penetration of JP Morgan’s networks that has been blamed on Russia. The new details make it clear this was about reconnaissance, not — at least not yet — theft.
Over two months, hackers gained entry to dozens of the bank’s servers, said three people with knowledge of the bank’s investigation into the episode who spoke on the condition of anonymity. This, they said, potentially gave the hackers a window into how the bank’s individual computers work.
They said it might be difficult for the bank to find every last vulnerability and be sure that its systems were thoroughly secured against future attack.
The hackers were able to review information about a million customer accounts and gain access to a list of the software applications installed on the bank’s computers. One person briefed said more than 90 of the bank’s servers were affected, effectively giving the hackers high-level administrative privileges in the systems.
Hackers can potentially crosscheck JPMorgan programs and applications with known security weaknesses, looking for one that has not yet been patched so they can regain access.
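That crosschecking step is mechanically simple once you have the application list: match the inventory of installed software and versions against a table of known, unpatched vulnerabilities. A hypothetical sketch, with every package name and CVE identifier invented:

```python
# Hypothetical sketch of crosschecking an installed-software inventory
# against known vulnerabilities. All package names, versions, and CVE
# identifiers below are invented for illustration.

KNOWN_VULNS = {
    ("webapp-framework", "1.2"): "CVE-XXXX-0001 (remote code execution)",
    ("ssl-library", "0.9"): "CVE-XXXX-0002 (key disclosure)",
}

def vulnerable(installed):
    """Return the subset of installed (name, version) pairs with known flaws."""
    return {pkg: KNOWN_VULNS[pkg] for pkg in installed if pkg in KNOWN_VULNS}

installed = [("webapp-framework", "1.2"), ("database", "5.6")]
print(vulnerable(installed))
```

Which is why exfiltrating the application list matters so much: it converts the attacker’s problem from probing blind to looking up answers.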
Though the infiltrators did observe metadata — which, the NSA assures us, is not really all that compromising.
A fourth person with knowledge of the matter, also speaking on condition of anonymity, said hackers had not gained access to account holders’ financial information or Social Security numbers, and may have reviewed only names, addresses and phone numbers.
I’m not trying to make light of the mapping of one of America’s most important banks. Surely, such surveillance may enable the same kind of sophisticated attack we launched against Iran, having done similar kind of preparation.
But we should keep in mind what the US has been doing as we consider these reports. If and when Russia or Germany catch us conducting similar reconnaissance on the networks of their private companies, they will surely make a big stink, as we have been with JP Morgan (though the response to the Spiegel story has been muted enough I suspect Germany’s intelligence services knew about that one, particularly given NSA’s reliance on Germany for targets in Africa).
But if the US is going to treat digital reconnaissance as routine spying (and the President’s cyberwar Presidential Policy Directive makes it pretty clear we consider our own similar reconnaissance to be mere clandestine spying), then we should expect the same treatment of our most lucrative targets.
That doesn’t make it legal or acceptable. But that does make it equivalent to what we’re doing to the rest of the world.
One final point. If you’re going to map the entire Internet, any device, anywhere, by definition you need to map America’s Internet as well. Are we so sure our own Intelligence Community hasn’t been snooping in JP Morgan’s networks?
The most chilling part of this reporting is a network engineer’s reaction (see here on video) when he realizes he is marked or targeted as a subject of observation. He’s assured it’s not personal, it’s about the work he does – but his reaction still telegraphs stress. An intelligence agency can get to him, has gotten to him; he’s touchable.
The truth is that almost any of us who follow national security, cyber warfare, or information technology are potential subjects depending on our work or play.
The metadata we generate is only part of the observation process; it provides information about our individual patterns of behavior, but may not actually disclose where we are.
TREASURE MAP goes further, by providing the layout of the network on which any of us are generating metadata. But there is some other component either within TREASURE MAP, or within a complementary tool, that provides the physical address of any networked electronic device.
The NSA has the ability to track individuals not only by Internet Protocol addresses (IP addresses), but by media access control addresses (MAC addresses), according to a recent interview with Snowden by James Bamford in Wired. This little nugget was a throwaway; perhaps readers already assumed this capability has existed, or didn’t understand the implications:
…But Snowden’s disenchantment would only grow. It was bad enough when spies were getting bankers drunk to recruit them; now he was learning about targeted killings and mass surveillance, all piped into monitors at the NSA facilities around the world. Snowden would watch as military and CIA drones silently turned people into body parts. And he would also begin to appreciate the enormous scope of the NSA’s surveillance capabilities, an ability to map the movement of everyone in a city by monitoring their MAC address, a unique identifier emitted by every cell phone, computer, and other electronic device.
In simple terms, IP addresses are like phone numbers — they are assigned. They can be static; a printer on a business network, for example, may be assigned a static address to assure it is always available to accept print orders at a stationary location. IP addresses may also be dynamic; if there’s an ongoing change in users on a network, allowing them to use a temporary address works best. Think of visits to your local coffee shop where customers use WiFi as an example. When they leave the premises, their IP address will soon revert to the pool available on the WiFi router.
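The coffee-shop behavior can be modeled in a few lines: the router leases addresses out of a pool keyed by the device’s MAC address, and reclaims them when the customer leaves. A toy model (the addresses and MAC are invented), which also shows why the MAC matters for tracking — it persists across every lease, while the IP does not:

```python
# Toy model of dynamic address assignment: a router leases IPs from a
# pool, keyed by the device's (persistent) MAC address, and returns
# them to the pool on disconnect. Addresses here are invented.

class AddressPool:
    def __init__(self, addresses):
        self.free = list(addresses)
        self.leases = {}          # MAC address -> currently leased IP

    def connect(self, mac):
        ip = self.free.pop(0)     # hand out the next available address
        self.leases[mac] = ip
        return ip

    def disconnect(self, mac):
        self.free.append(self.leases.pop(mac))  # IP reverts to the pool

pool = AddressPool(["192.168.0.10", "192.168.0.11"])
ip = pool.connect("aa:bb:cc:dd:ee:ff")   # device identified by its MAC
pool.disconnect("aa:bb:cc:dd:ee:ff")
print(ip in pool.free)  # → True: the IP is reusable, but the MAC never changed
```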
Have you noticed that every time someone covers all the patents Keith Alexander is getting for his cybersecurity boondoggle, the number of patents grows?
In this installment, it is 10.
IronNet is working with lawyers to draft as many as 10 patent applications in which the NSA would have no stake. Alexander said the “real key” to the patents was a person who never worked for the agency.
In addition to dispensing advice, IronNet is working with lawyers to draft as many as 10 patent applications that will include Alexander as co-inventor on one and “maybe a few others,” he said.
Of course, no matter how many patents it will be, Alexander is still left with the problem of explaining either why this isn’t stuff taxpayers paid for at NSA, or why Alexander didn’t implement these whiz-bang solutions while in charge of NSA.
So he’s inching closer and closer to one that might work: he’s going to patent having no knowledge.
Current cybersecurity strategies assume the defender knows what threats are present, and can quickly identify them by their digital profile, known as their signature. Alexander said IronNet’s approach is to counter those attacks as quickly as possible, without that prior knowledge.
“All the patents and stuff that people work on today assume knowledge of the threat,” he said. “What it means is a new approach. Something that’s never been used.”
It’s surely a novel approach — attacking perceived threats before you’re sure what that threat is. I’m just not sure how well it’s going to work.
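For contrast, the conventional signature-based approach Alexander says he’s moving beyond really does assume prior knowledge of the threat: a file is flagged only if its fingerprint matches a known-malware signature. A minimal sketch (the “signature” below is invented, not a real malware hash):

```python
# Minimal sketch of signature-based detection, the conventional approach
# that assumes prior knowledge of the threat: flag a file only if its
# hash matches a known signature. The signature below is invented.

import hashlib

KNOWN_MALWARE_HASHES = {
    hashlib.sha256(b"malicious payload").hexdigest(),  # invented example
}

def is_known_threat(file_bytes):
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_MALWARE_HASHES

print(is_known_threat(b"malicious payload"))      # → True
print(is_known_threat(b"novel, unseen payload"))  # → False
```

The second result is the whole weakness Alexander is gesturing at: a signature scheme is blind to anything it hasn’t seen before. Whether “countering attacks without prior knowledge” can do better, without also flagging things that aren’t attacks, is exactly the question.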
While Alexander is busy shoring up his 10, 11, 12 patents, I think I’ll rush to copyright my new novel, in which a hubristic cybersecurity profiteer takes down the entire banking system by attacking core finance functions he identifies as attacks.
The other day, I noted the dodginess of the evidence behind claims that Russia had launched a sophisticated cyberattack on JP Morgan. I suggested one reason people like Mike Rogers might be crying wolf was to support a plan to reimburse the banks in case of a massive attack.
But there’s another, even more obvious explanation.
NATO just added cyberattacks to its definition of attacks that would merit a unified response. Citing Russia’s Special Forces tactics (the same ones we’re using in something like 80 places around the world), including its cyberattacks, General Philip Breedlove today ratcheted up the fear of Russia. (h/t Joanne Leon)
Russia’s utilization of troops without national uniforms — the so-called “little green men” — and perhaps “the most amazing information warfare blitzkrieg we have ever seen in the history of information warfare” were part of the first Russian push in Ukraine, Breedlove said.
NATO members, especially the Baltic states that border Russia, must take into account such tactics as allies prepare for future threats, he said. That means steps should be taken to help build the capacity of other arms of government, such as interior ministries and police forces, to counter unconventional attacks, including propaganda campaigns, cyberassaults or homegrown separatist militias.
So go back to the alleged JP Morgan attack no one seems to have any evidence to substantiate. It was often attributed to somewhere in Eastern Europe. Which could be Russia — or Ukraine. Both countries, in fact, have significant numbers of organized criminals that launch fairly sophisticated cyberattacks.
How convenient, then, to ratchet up the cyberfear when unattributable attacks from the general region have been made a casus belli for the entire alliance.
On Sunday I asked who was crying wolf — JP Morgan itself, or Mike Rogers — about the claimed JP Morgan attack that might not be a serious attack at all and had been attributed to Russia without, as yet, any proof.
So who should crawl out of his sinecure but Keith Alexander?
Keith Alexander, the NSA director from 2005 until last March, said he had no direct knowledge of the attack though it could have been backed by the Russian government in response to sanctions imposed by the U.S. and EU over the crisis in Ukraine.
“How would you shake the United States back? Attack a bank in cyberspace,” said Alexander, a retired U.S. Army general who has started his own cybersecurity company to sell services to U.S. banks. “If it was them, they just sent a real message: ‘You’re vulnerable.’”
The hackers who attacked JPMorgan, the biggest U.S. bank, were “a group with exceptional skills or a nation-state backed group,” Alexander said in an interview yesterday at Bloomberg’s Washington bureau.
“If you wanted to send a message, do you think that was significant enough for the U.S. government to say one of the best banks that we have from a cybersecurity perspective was infiltrated by somebody?” Alexander asked. “And if they could get in to do that, even if they never use it, they could get in and collapse it. Does that cause you concern?”
Note how Alexander admits he has no personal knowledge of the attack but then opines about the skills of the hackers and goes from there to hypothesize how this was a response from Russia?
So maybe it wasn’t JP Morgan or Mike Rogers crying wolf. It sure looks like Alexander is willingly feeding the poorly evidenced claims about this hack.
But don’t worry, Keith Alexander doesn’t have a conflict of interest at all.
But as Jeremy Scahill tweeted last evening, read this piece by WaPo’s Barton Gellman on malicious code insertion. This news explains recent changes Google made to YouTube after it was disclosed to the company that exploits could be embedded in video content, as CitizenLab.org explains:
“… the appliance exploits YouTube users by injecting malicious HTML-FLASH into the video stream. …”
“… the user (watching a cute cat video) is represented by the laptop, and YouTube is represented by the server farm full of digital cats. You can observe our attacker using a network injection appliance and subverting the beloved pastime of watching cute animal videos on YouTube. …”
The questions this piece shakes loose are legion, but just as numerous are the holes. Why holes? Because the answers are ugly and complex enough that one might struggle with them. Gellman’s done the best he can with nebulous material.
An interesting datapoint in the first graf of the story is timing — fall 2009.
You’ll recall that Google revealed the existence of a cyber attack code named Operation Aurora in January 2010, which Google said began in mid-December 2009.
You may also recall news of a large batch of cyber attacks in July of 2009 on South Korean targets.
The U.S. military had already experienced a massive uptick in cyber attacks in 1H2009, more than double the rate of the entire previous year.
And neatly sandwiched between these waves and events is a visit by an engineer from California defense contractor CloudShield Technologies to Munich, Germany, to meet with British-owned Gamma Group.