
Twitter Only Had SMS 2FA When Hal Martin’s Twitter Account DMed Kaspersky

In a post late last month, I suggested that the genesis of FBI’s interest in Hal Martin may have stemmed from a panicked misunderstanding of DMs Martin sent.

What appears to have happened is that the FBI totally misunderstood what it was looking at (assuming, as the context seems to suggest, that this is a DM, it would be an account they were already monitoring closely), and panicked, thinking they had to stop Martin before he dropped more NSA files.

Kim Zetter provides the back story — or at least part of one. The FBI didn’t find the DMs on their own. Amazingly, Kaspersky Lab, which the government has spent much of the last four years demonizing, alerted NSA to them.

As Zetter describes, the DMs were cryptic, seemingly breaking in mid-conversation. The second set of DMs referenced the closing scenes of both the 2016 version of Jason Bourne and Inception.

The case unfolded after someone who U.S. prosecutors believe was Martin used an anonymous Twitter account with the name “HAL999999999” to send five cryptic, private messages to two researchers at the Moscow-based security firm. The messages, which POLITICO has obtained, are brief, and the communication ended altogether as abruptly as it began. After each researcher responded to the confusing messages, HAL999999999 blocked their Twitter accounts, preventing them from sending further communication, according to sources.

The first message sent on Aug. 13, 2016, asked for him to arrange a conversation with “Yevgeny” — presumably Kaspersky Lab CEO Eugene Kaspersky, whose given name is Yevgeny Kaspersky. The message didn’t indicate the reason for the conversation or the topic, but a second message following right afterward said, “Shelf life, three weeks,” suggesting the request, or the reason for it, would be relevant for a limited time.

The timing was remarkable — the two messages arrived just 30 minutes before an anonymous group known as Shadow Brokers began dumping classified NSA tools online and announced an auction to sell more of the agency’s stolen code for the price of $1 million Bitcoin. Shadow Brokers, which is believed to be connected to Russian intelligence, said it had stolen the material from an NSA hacking unit that the cybersecurity community has dubbed the Equation Group.

[snip]

The sender’s Twitter handle was not familiar to the Kaspersky recipient, and the account had only 104 followers. But the profile picture showed a silhouette illustration of a man sitting in a chair, his back to the viewer, and a CD-ROM with the word TAO2 on it, using the acronym of the NSA’s Tailored Access Operations. The larger background picture on the profile page showed various guns and military vehicles in silhouette.

The Kaspersky researcher asked the sender, in a reply message, if he had an email address and PGP encryption key they could use to communicate. But instead of responding, the sender blocked the researcher’s account.

Two days later, the same account sent three private messages to a different Kaspersky researcher.

“Still considering it..,” the first message said. When the researcher asked, “What are you considering?” the sender replied: “Understanding of what we are all fighting for … and that goes beyond you and me. Same dilemma as last 10 min of latest Bourne.” Four minutes later he sent the final message: “Actually, this is probably more accurate” and included a link to a YouTube video showing the finale of the film “Inception.”

As it is, it’s an important story. As Zetter lays out, it makes it clear the NSA didn’t — couldn’t — find Martin on its own, and the government kept beating up Kaspersky even after they helped find Martin.

But, especially given the allusions to the two movies, I wonder whether these DMs actually came from Martin at all. There’s good reason to wonder whether they actually came from Shadow Brokers directly.

Certainly, that’d be technically doable, even though court filings suggest Martin had far better operational security than your average target. It would take another 16 months before Twitter offered authenticator-app two-factor authentication. For anyone with the profile of Shadow Brokers, it would be child’s play to break SMS 2FA, assuming Martin used it.
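To make the distinction concrete, here is a minimal sketch (using the pyotp library; the secret handling and names are my own illustration, not Twitter’s implementation) of the authenticator-style TOTP scheme Twitter would later add. The one-time code is derived on the device from a shared secret and the clock, so, unlike an SMS code, nothing transits the phone network for an attacker to intercept or redirect.

```python
# Illustrative TOTP (RFC 6238) sketch using pyotp -- an assumption for
# demonstration, not Twitter's actual 2FA implementation.
import pyotp

# Secret provisioned once at enrollment (e.g. via a QR code); value is hypothetical.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()  # six-digit code, valid for roughly 30 seconds
print("current code:", code)

# Server side: recompute from the same shared secret and compare,
# tolerating one 30-second step of clock drift. Nothing is ever texted.
assert totp.verify(code, valid_window=1)
```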

Moreover, the message of the two allusions fits solidly within both Shadow Brokers’ practice of making cultural allusions and the themes it employed over the course of the operation, allusions that have gotten far too little notice.

Finally, that Kaspersky would get DMs from someone hijacking Martin’s account would be consistent with other parts of the operation. From start to finish, Shadow Brokers used Kaspersky as a foil, just like it used Jake Williams. With Kaspersky, Shadow Brokers repeatedly provided reason to think that the security company had a role in the leak. In both cases, the government clearly chased the chum Shadow Brokers threw out, hunting innocent people as suspects, rather than looking more closely at what the evidence really suggested. And (as Zetter lays out), Martin would be a second case where Kaspersky was implicated in the identification of such chum, the other being Nghia Pho (the example of whom might explain why the government responded to Kaspersky’s help in 2016 with such suspicion).

Mind you, there’s nothing in the public record — not Martin’s letter asking for fully rendered versions of his social media so he could prove the context, and not Richard Bennett’s opinion ruling the warrants based off Kaspersky’s tip were reasonable, even if the premise behind them proved wrong — that suggests Martin is contesting that he sent those DMs. That said, virtually the entire case is sealed, so we wouldn’t know (and the government really wouldn’t want us to know if it were the case).

As Zetter also lays out, Martin had a BDSM profile that might have elicited attention from hostile entities looking for such chum.

A Google search on the Twitter handle found someone using the same Hal999999999 username on a personal ad seeking female sex partners. The anonymous ad, on a site for people interested in bondage and sado-masochism, included a real picture of Martin and identified him as a 6-foot-4-inch 50-year-old male living in Annapolis, Md. A different search led them to a LinkedIn profile for Hal Martin, described as a researcher in Annapolis Junction and “technical advisor and investigator on offensive cyber issues.” The LinkedIn profile didn’t mention the NSA, but said Martin worked as a consultant or contractor “for various cyber related initiatives” across the Defense Department and intelligence community.

And when Kaspersky’s researchers responded to Martin’s DM, he blocked their accounts, suggesting he treated the communications unfavorably (or, if someone had taken over the account, they wanted to limit any back-and-forth, though Martin would presumably have noted that).

After each researcher responded to the confusing messages, HAL999999999 blocked their Twitter accounts, preventing them from sending further communication, according to sources.

Martin’s attorneys claim he has a mental illness that leads him to hoard things, which is the excuse they give for his theft of so many government files. That’s different than suggesting he’d send strangers out-of-context DMs that, at the very least, might make him lose his clearance.

So I’d like to suggest it’s possible that Martin didn’t send those DMs.

Kaspersky’s Carrot-and-Stick TAO Compromise Incident Report

Last week, Kaspersky released its investigation into the reported collection of NSA hacking tools off an employee’s computer. Kim Zetter did an excellent story on it, so read that for analysis of what the report said.

The short version, though, is that Kaspersky identified a computer in the Baltimore, MD area that was sending a whole slew of alerts in response to a silent signature for Equation Group software from September to November 2014 — a year earlier than the leaked reports about the incident claimed the compromise had happened. Kaspersky pulled in an archive containing files that triggered those signatures, as well as some associated files, in the normal course of collecting samples for analysis (and, according to Zetter, did not pull other archives of malware also associated with the machine). Kaspersky IDed it as irregular, and — so they’re claiming — the analyst who found it told Eugene Kaspersky (referred to throughout the report in the third person as “the CEO”), who told the analyst to destroy the source code and related documents immediately. The report claims Kaspersky subsequently instituted a policy mandating such destruction going forward.

As Zetter notes, the timing of events gets awfully murky about when the file got destroyed and the new destruction policy was instituted.

The company didn’t respond to questions about when precisely it instituted this policy, nor did it provide a written copy of the distributed policy before publication of this article.

Meanwhile, during the same period this machine was sending out all the Equation Group alerts, someone hacked it.

It appears the system was actually compromised by a malicious actor on October 4, 2014 at 23:38 local time,

The report explains this compromise at length, providing (in addition to the precise time) the C&C server URL and a list of 121 other virus signatures found on the machine during the period the Equation Group signatures were alerting. It also links to Kaspersky’s analysis of the backdoor in question, which was developed by Russian criminal hackers.

“It looks like a huge disaster the way it happened with running all this malware on his machine. It’s almost unbelievable,” [Zetter quotes Costin Raiu, director of Kaspersky’s Global Research and Analysis Team].

Thus far, consider what this report does: it makes it clear that Kaspersky has far more detail about the compromise than the anonymous sources leaking to the press are willing to share (all the time with Eugene Kaspersky inviting them to provide more details). It elaborates on the story it had already shared about who likely stole and used the files. And it suggests (though I’m not sure I believe it) that it’s entirely the fault of the hacker who turned off Kaspersky’s AV in order to run a pirated copy of Microsoft Office.

That’s the carrot. Here, Kaspersky is saying, we’ve figured out who stole those files your idiot developer loaded onto his malware-riddled computer. Go get them. Free incident response, three years after the fact!

But it’s the stick I’m just as interested in.

First, as part of its explanation of the process Kaspersky used to home in on the incident, the report includes a list of hits and false positives on NSA signatures just from September 2014 — effectively providing a list of (dated) malware signatures. While the report notes many of these alerts are false positives, Kaspersky is nevertheless saying, here’s a list of all the victims of your spying we identified for just one month out of the 40 months we just analyzed. Presumably, the hits after September 2014 would have come to include far more true victims.

Then, the report provides a list of all the Equation Group signatures found on the TAO engineer’s computer, providing a snapshot of what one person might work on, a snapshot that would prove useful for those trying to understand NSA’s work patterns.

Even while it provides lists of signatures that will provide others some insight into NSA activity, the report makes a grand show of concern for privacy, redacting the name of the archive as [undisclosed] and including a discussion about how it could have — but chose not to — include the complete file paths of the archive.

Looking at this metadata during current investigation we were tempted to include the full list of detected files and file paths into current report, however, according to our ethical standards, as well as internal policies, we cannot violate our users’ privacy. This was a hard decision, but should we make an exception once, even for the sake of protecting our own company’s reputation, that would be a step on the route of giving up privacy and freedom of all people who rely on our products. Unless we receive a legitimate request originating from the owner of that system or a higher legal authority, we cannot release such information.

Mind you, FSB is the “higher legal authority” in Russia for such things.

Then, in the guise of claiming how little information Kaspersky has on the individual behind all this, the report makes it clear it retains his IP, from which they could reconstitute his identity.

Q3 – Who was this person?

A3 – Because our software anonymizes certain aspects of users’ information, we are unable to pinpoint specifically who the user was. Even if we could, disclosing such information is against our policies and ethical standards. What we can determine is that the user was originating from an IP address that is supposedly assigned to a Verizon FiOS address pool for the Baltimore, MD and surrounding area.

In short, along with providing a detailed description of what likely happened — the hacker got pwned by someone else — Kaspersky lays out all the information on NSA’s hacking activities that it could, if it so chose, make public: who NSA hacked when, who the developer in question is, and more details on how the NSA develops its tools.

But (in the interest of privacy, you understand?) Kaspersky’s not going to do that unless some higher authority forces it to.

Of course, Kaspersky’s collection of all that data on NSA’s hacking is undoubtedly one of the reasons the NSA would prefer it not exist.

A carrot, and a stick.

At the end of her piece, Zetter quotes Rob Joyce laying out the more modest attack on Kaspersky (this stuff shouldn’t be run on sensitive government computers, which it shouldn’t), even while admitting that other AV products have the same privileged access to collect such information on users.

Asked about Kaspersky’s discovery of multiple malware samples on the NSA worker’s home computer, Rob Joyce, the Trump administration’s top cybersecurity adviser who was head of the NSA’s elite hacking division when the TAO worker took the NSA files home and put them on his work computer, declined to respond to Kaspersky’s findings but reiterated the government’s contention that Kaspersky software should be banned from government computers.

“Kaspersky as an entity is a rootkit you run on a computer,” he told Motherboard, using the technical term for stealth and persistent malware that has privileged access to all files on a machine.

He acknowledged that software made by other antivirus companies has the same potential for misuse Kaspersky has but said, Kaspersky is “a Russian company subjected to FSB control and law, and the US government is not comfortable accepting that risk on our networks.”

We shall see if this report serves to halt all the (inaccurate at least with respect to timing, if this report is to be believed) leaks to the press or even the other attacks on Kaspersky.

All that said, there are two parts of this story that still don’t make sense.

First, I share Zetter’s apparent skepticism about the timing of the decision to destroy the source code, which the report describes this way:

Upon further inquiring about this event and missing files, it was later discovered that at the direction of the CEO, the archive file, named “[undisclosed].7z” was removed from storage. Based on description from the analyst working on that archive, it contained a collection of executable modules, four documents bearing classification markings, and other files related to the same project. The reason we deleted those files and will delete similar ones in the future is two-fold; We don’t need anything other than malware binaries to improve protection of our customers and secondly, because of concerns regarding the handling of potential classified materials. Assuming that the markings were real, such information cannot and will not [note this typo] consumed even to produce detection signatures based on descriptions.

This concern was later translated into a policy for all malware analysts which are required to delete any potential classified materials that have been accidentally collected during anti-malware research or received from a third party. Again to restate: to the best of our knowledge, it appears the archive files and documents were removed from our storage, and only individual executable files (malware) that were already detected by our signatures were left in storage.

The key sentence — “it was later discovered … the archive file … was removed” — is a masterful use of the passive voice. And unlike all the other things for which the report offers affirmative data, the data offered here is the absence of data. “It appears” that the archive is no longer in storage, without any details about when it got removed. The report is also silent about whether any of these events — the removal and claimed destruction, and the institution of a new policy to destroy such things going forward — were a response to the Duqu 2 hack discovering such files on Kaspersky’s servers, as well as the one silent signature incorporating the word “secret” described elsewhere in the report.

Then there’s the implausibility of an NSA developer 1) running Kaspersky then 2) turning it off 3) to load a bunch of malware onto his computer in the guise of loading a pirated copy of Office 4) only to have a bunch of other malware infect the computer in the same window of time, finally 5) turning the Kaspersky back on to discover what happened after the fact.

Really? I mean, maybe this guy is that dumb, or maybe there’s another explanation for these forensic details.

In any case, the entire report is a cheeky chess move. I eagerly wait to see if the US’ anonymous leakers respond.

 

Shorter Kaspersky: Our Home AV Found NSA’s Lost Tools Six Months Before NSA Did

Kaspersky has what it calls a preliminary investigation into the allegations that it obtained NSA tools by taking them from an NSA hacker who loaded them onto his home computer. It follows by just a few days and directly refutes the silly accusations made by Rick Ledgett the other day in Lawfare, most notably that Kaspersky found the tools by searching on “TS/SCI,” much less on “proprietary,” as Ledgett claimed. I assume the word “preliminary” here means, “Okay, you’ve made your public accusation, now Imma badly discredit you, but I’m holding other details back for your next accusation.”

Instead of finding the hacking tools in early 2015, Kaspersky says, they found the GrayFish tool back on September 11, 2014, probably six months before the anonymous government sources have been saying it was discovered.

And they found it with their home AV.

  • The incident where the new Equation samples were detected used our line of products for home users, with KSN enabled and automatic sample submission of new and unknown malware turned on.
  • The first detection of Equation malware in this incident was on September 11 2014. The following sample was detected:
    • 44006165AABF2C39063A419BC73D790D
    • mpdkg32.dll
    • Verdict: HEUR:Trojan.Win32.GrayFish.gen

After that, what Kaspersky describes as “the user” disabled the AV and downloaded a pirated copy of Microsoft Office onto his computer, which installed a backdoor that could have been used by anyone.

  • After being infected with the Backdoor.Win32.Mokes.hvl malware, the user scanned the computer multiple times which resulted in detections of new and unknown variants of Equation APT malware.

Once that backdoor was loaded, “the user” scanned the computer and found other Equation Group tools.

What Kaspersky is not saying is that this probably wasn’t the TAO hacker, but probably was someone pretending to be the user (perhaps using NSA’s own tools?!), who then stole a slew of files.

Two other points: Kaspersky claims to have called the cops (or probably the FBI, which would have been the appropriate authority), and claims to call the cops whenever it finds malware in the US.

  • Some of these infections have been observed in the USA.
  • As a routine procedure, Kaspersky Lab has been informing the relevant U.S. Government institutions about active APT infections in the USA.

It’s possible that Kaspersky did inform the FBI, and that FBI routinely gets such notice, but that FBI routinely ignores such notice because they don’t care if NSA is hacking people in the US (who, given what we know, are at least sometimes, and during this period would have been, Americans approved for 705(b) surveillance that doesn’t get turned off, as is legally required, when they return to the US).

In other words, it’s possible that FBI learned about this, but ignored it because they ignore NSA’s illegal hacking in the US. Only this time it wasn’t NSA’s illegal hacking, but NSA’s incompetence, which in turn led an NSA hacker to get hacked by … someone else.

Finally, there’s this bit, which is the least credible thing in this announcement. The Kaspersky statement says Eugene himself was informed of the discovery, and ordered the tool (in a kind of one-man Vulnerabilities Equities Process) to be destroyed.

  • After discovering the suspected Equation malware source code, the analyst reported the incident to the CEO. Following a request from the CEO, the archive was deleted from all our systems. The archive was not shared with any third parties.

I don’t so much doubt that Eugene ordered the malware to be destroyed. Once Kaspersky finished its analysis of the tool, they would have no use for it, and it would add to risk for Kaspersky itself. I just find it remarkable that he would have made the personal decision to destroy this malware at some point after its discovery, but not have raised it until now.

Unless, of course, he was just waiting for someone like Rick Ledgett to go on the record.

Though note how Kaspersky gets conspicuously silent about the timing of that part of the story.

One final point: this new timeline doesn’t explain how Israel (possibly with the involvement of the US) would have found this tool by hacking Kaspersky (unless the decision to destroy the tool came after Kaspersky discovered the hack). But it does suggest the Duqu chicken was chasing the TAO hacker egg, and not vice versa as anonymous sources have been claiming.

That is, the scenario laid out by this timeline (which of course, with the notable exceptions of the Duqu hack and the destruction date for GrayFish, comes with dates and file names and so at least looks more credible than Rick Ledgett’s farcical “proprietary” claims) is that Kaspersky found the file, reported it as an infection to the cops, who likely told NSA about it, leading to the attack on Kaspersky to try to retrieve it or discover how much else they obtained. That is, Duqu didn’t hack Kaspersky and then find the file. They hacked Kaspersky to find the file that some dopey TAO hacker had made available by running Kaspersky home AV on his computer.

Update: Changed “probable” involvement of US in Duqu hack to “possible.”

Update: Changed “stolen” in title to “lost.”

[Photo: National Security Agency, Ft. Meade, MD via Wikimedia]

Rick Ledgett Claims NSA’s Malware Isn’t Malware

I was beginning to be persuaded by all the coverage of Kaspersky Lab that they did something unethical with their virus scans.

Until I read this piece from former NSA Deputy Director Rick Ledgett. In it, he defines the current scandal as Kaspersky being accused of obtaining NSA hacking tools via its anti-virus.

Kaspersky Lab has been under intense fire recently for allegedly using, or allowing Russian government agents to use, its signature anti-virus software to retrieve supposed National Security Agency tools from the home computer of an NSA employee.

He then describes both Jeanne Shaheen’s efforts to prohibit KAV use on government computers, and Eugene Kaspersky’s efforts to defend his company. Ledgett then describes how anti-virus works, ending with the possibility that an AV company can use its filters to search on words like “secret” or “confidential” or “proprietary” (as if NSA’s hacking tools were only classified proprietary).

This all makes perfect sense for legitimate anti-virus companies, but it’s also a potential gold mine if misused. Instead of looking for signatures of malware, the software can be instructed to look for things like “secret” or “confidential” or “proprietary”—literally anything the vendor desires. Any files of interest can be pulled back to headquarters under the pretext of analyzing potential malware.

He then claims that’s what Kaspersky is accused of doing.

So that is what Kaspersky has been accused of doing: using (or allowing to be used) its legitimate, privileged access to a customer’s computer to identify and retrieve files that were not malware.

Except, no, it’s not.

The only things Kaspersky is accused of having retrieved are actual hacking tools. Which, if anyone besides the NSA were to use them, would obviously be called malware. As Kim Zetter explains, Kaspersky and other AV firms use silent signatures to search for malware.

Silent signatures can lead to the discovery of new attack operations and have been used by Kaspersky to great success to hunt state-sponsored threats, sometimes referred to as advanced persistent threats, or APTs. If a Kaspersky analyst suspects a file is just one component in a suite of attack tools created by a hacking group, they will create silent signatures to see if they can find other components related to it. It’s believed to be the method Kaspersky used to discover the Equation Group — a complex and sophisticated NSA spy kit that Kaspersky first discovered on a machine in the Middle East in 2014.
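For readers unfamiliar with the mechanism Zetter describes, here is a toy sketch of the silent-signature idea. The byte patterns, rule names, and file handling are hypothetical, purely to illustrate the distinction: an ordinary signature match alerts the user and quarantines the file, while a silent one only records the hit internally (and, per Zetter, can queue the sample for analyst review).

```python
# Toy illustration of "silent" vs. ordinary AV signatures -- the patterns and
# names below are made up for demonstration, not Kaspersky's actual rules.
from pathlib import Path

SIGNATURES = {
    # rule name: (byte pattern, silent?)
    "Generic.Trojan.A":     (b"\xde\xad\xbe\xef", False),
    "APT.EquationLike.Gen": (b"EXAMPLE_REUSED_CODE_STRING", True),
}

def scan(path: Path) -> None:
    data = path.read_bytes()
    for name, (pattern, silent) in SIGNATURES.items():
        if pattern in data:
            if silent:
                # No user-facing alert: the hit is only logged for researchers,
                # who may pull the sample to look for related components.
                print(f"[silent hit] {name}: {path}")
            else:
                print(f"[ALERT] {name}: {path} (quarantined)")

for sample in Path("samples").glob("*"):
    if sample.is_file():
        scan(sample)
```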

It’s unclear whether Kaspersky found the malware by searching on “TS/SCI,” actual tool names (which NSA stupidly uses in its code), or code strings that NSA reuses from one program to another.

“[D]ocuments can contain malware — when you have things like macros and zero-days inside documents, that is relevant to a cybersecurity firm,” said Tait, who is currently a cybersecurity fellow at the Robert S. Strauss Center for International Security and Law at the University of Texas at Austin. “What’s not clear from these stories is what precisely it was that they were looking for. Are they looking for a thing that is tied to NSA malware, or something that clearly has no security relevance, but intelligence relevance?”

If Kaspersky was searching for “top secret” documents that contained no malicious code, then Tait said the company’s actions become indefensible.

“In the event they’re looking for names of individuals or classification markings, that’s not them hunting malware but conducting foreign intelligence. In the event that the U.S. intelligence community has reason to believe that is going on, then they should … make a statement to that effect,” he said, not leak anonymously to reporters information that is confusing to readers.

Kaspersky said in a statement to The Intercept that it “has never created any detection in its products based on keywords like ‘top secret’, or ‘classified.’”

One thing no one has discussed is whether Kaspersky could have searched on NSA’s encryption, because that’s how Kaspersky has always characterized NSA’s tools, by their developers’ enthusiasm for encryption.

In any case, what’s clear is no one would ever find a piece of NSA malware by searching on the word “proprietary,” so we can be sure that’s a bogus accusation.

I asked Susan Hennessey on Twitter, and she confirms that NSA did a prepublication review of this, so any “new” news in this is either bullshit (as the claim Kaspersky searched on the word “proprietary” surely is) or “no[t] inadvertent declassification,” meaning NSA wanted Ledgett to break new news.

Which I take to mean that Ledgett is pretending that NSA’s malware is not malware but … Democracy Ponies or something like that. American exceptionalism, operating at the level of code.

Anyway, Ledgett goes on to suggest that Kaspersky can get beyond this taint by agreeing to let others spy on its malware detection to make sure it’s all legit. Except that is precisely what we’re all worried Russia did to Kaspersky: finding malware as it transited from the TAO guy back to Kaspersky’s servers!

If Eugene Kaspersky really wanted to assuage the fears of customers and potential customers, he would instead have all communications between the company’s servers and the 400 million or so installations on client machines go through an independent monitoring center. That way evaluators could see what commands and software updates were going from Kaspersky headquarters to those clients and what was being sent back in response. Of course, the evaluators would need to sign non-disclosure agreements to protect Kaspersky’s intellectual property, but they would be expected to reveal any actual misuse of the software. It’s a bold idea, but it’s the only way anyone can be sure of what the company is actually doing, and the only real way to regain trust in the marketplace. Let’s see if he does it.

What are the chances that NSA would have this “independent monitoring center” pwned within 6 hours, if it really even operated independently of NSA?

Like I said, I was beginning to be persuaded that Kaspersky did something wrong. But this Ledgett piece leads me to believe this is just about American exceptionalism, just an attempt to protect NSA’s spying from one of the few AV companies that will dare to spy on it.

Kaspersky and the Third Major Breach of NSA’s Hacking Tools

The WSJ has a huge scoop that many are taking to explain why the US has banned Kaspersky software.

Some NSA contractor took some files home in (the story says) 2015 and put them on his home computer, where he was running Kaspersky AV. That led Kaspersky to discover the files. That somehow (the story doesn’t say) led hackers working for the Russian state to identify and steal the documents.

Hackers working for the Russian government stole details of how the U.S. penetrates foreign computer networks and defends against cyberattacks after a National Security Agency contractor removed the highly classified material and put it on his home computer, according to multiple people with knowledge of the matter.

The hackers appear to have targeted the contractor after identifying the files through the contractor’s use of a popular antivirus software made by Russia-based Kaspersky Lab, these people said.

The theft, which hasn’t been disclosed, is considered by experts to be one of the most significant security breaches in recent years. It offers a rare glimpse into how the intelligence community thinks Russian intelligence exploits a widely available commercial software product to spy on the U.S.

The incident occurred in 2015 but wasn’t discovered until spring of last year, said the people familiar with the matter.

The stolen material included details about how the NSA penetrates foreign computer networks, the computer code it uses for such spying and how it defends networks inside the U.S., these people said.

Having such information could give the Russian government information on how to protect its own networks, making it more difficult for the NSA to conduct its work. It also could give the Russians methods to infiltrate the networks of the U.S. and other nations, these people said.

Way down in the story, however, is this disclosure: US investigators believe Kaspersky’s AV identified the files, but aren’t sure whether Kaspersky told the Russian government.

U.S. investigators believe the contractor’s use of the software alerted Russian hackers to the presence of files that may have been taken from the NSA, according to people with knowledge of the investigation. Experts said the software, in searching for malicious code, may have found samples of it in the data the contractor removed from the NSA.

But how the antivirus system made that determination is unclear, such as whether Kaspersky technicians programed the software to look for specific parameters that indicated NSA material. Also unclear is whether Kaspersky employees alerted the Russian government to the finding.

Given the timing, it’s worth considering several other details about the dispute between the US and Kaspersky. (This was all written for another post that I’ll return to.)

The roots of Kaspersky’s troubles in 2015

Amid the reporting on Eugene Kaspersky’s potential visit to testify to Congress, Reuters reported the visit would be Kaspersky’s first visit to the US since spring 2015.

Kaspersky told NBC News in July that he was not currently traveling to the United States because he was “worried about some unexpected problems” if he did, citing the “ruined relationship” between Moscow and Washington.

Kaspersky Lab did not immediately respond when asked when its chief executive was last in the United States. A source familiar with U.S. inquiries into the company said he had not been to the United States since spring of 2015.

A link in that Reuters piece suggests Kaspersky’s concern dates back to August 2015 Reuters reporting, based off leaked emails and interviews with former Kaspersky employees, which claimed the anti-virus firm used fake files to trick its competitors into blocking legitimate files, all in an effort to expose their theft of Kaspersky’s work. A more recent reporting strand, again based on leaked emails, dates to the same 2009 time period and accuses Kaspersky of working with FSB (which in Russia handles both spying and cybersecurity — though again, that’s ostensibly how the FBI works here).

But two events precede that reporting. In June 2015, Kaspersky revealed that it (and a bunch of locales where negotiations over the Iran deal took place) had been infected by Duqu 2.0, a threat related to Stuxnet.

Kaspersky says the attackers became entrenched in its networks some time last year. For what purpose? To siphon intelligence about nation-state attacks the company is investigating—a case of the watchers watching the watchers who are watching them. They also wanted to learn how Kaspersky’s detection software works so they could devise ways to avoid getting caught. Too late, however: Kaspersky found them recently while testing a new product designed to uncover exactly the kind of attack the intruders had launched.

[snip]

Kaspersky is still trying to determine how much data the attackers stole. The thieves, as with the previous Duqu 2011 attack, embedded the purloined data inside blank image files to slip it out, which Raiu says “makes it difficult to estimate the volume of information that was actually transferred.” But at least, he says, it doesn’t appear that the attackers were out to infect Kaspersky customers through its networks or products. Kaspersky claims to have more than 400 million users worldwide.

Which brings us to what the presumed NSA hackers were looking for:

The attackers were primarily interested in Kaspersky’s work on APT nation-state attacks–especially with the Equation Group and Regin campaigns. Regin was a sophisticated spy tool Kaspersky found in the wild last year that was used to hack the Belgian telecom Belgacom and the European Commission. It’s believed to have been developed by the UK’s intelligence agency GCHQ.

The Equation Group is the name Kaspersky gave an attack team behind a suite of different surveillance tools it exposed earlier this year. These tools are believed to be the same ones disclosed in the so-called NSA ANT catalogue published in 2013 by journalists in Germany. The interest in attacks attributed to the NSA and GCHQ is not surprising if indeed the nation behind Duqu 2.0 is Israel.

Kaspersky released its Equation Group whitepaper in February 2015. It released its Regin whitepaper in November 2014.

One thing that I found particularly interesting in the Equation Group whitepaper — in re-reading it after Shadow Brokers released a bunch of Equation Group tools — is that the report offers very little explanation of how Kaspersky was able to find so many samples of the NSA malware that the report makes clear is almost impossible to find. The only explanation is this CD attack.

One such incident involved targeting participants at a scientific conference in Houston. Upon returning home, some of the participants received by mail a copy of the conference proceedings, together with a slideshow including various conference materials. The compromised CD-ROM used “autorun.inf” to execute an installer that began by attempting to escalate privileges using two known EQUATION group exploits. Next, it attempted to run the group’s DOUBLEFANTASY implant and install it onto the victim’s machine. The exact method by which these CDs were interdicted is unknown. We do not believe the conference organizers did this on purpose. At the same time, the super-rare DOUBLEFANTASY malware, together with its installer with two zero-day exploits, don’t end up on a CD by accident.
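For context on the delivery mechanism the whitepaper describes: autorun.inf is a small configuration file that Windows versions of that era would read from a mounted CD-ROM and act on automatically, so the installer runs with no user action beyond inserting the disc. A generic example (the filenames and label are hypothetical, not the actual contents of the interdicted discs) looks like this:

```ini
; Generic illustration of the Windows autorun mechanism; filenames are hypothetical.
[autorun]
; program launched automatically when the disc mounts
open=setup.exe
; icon Explorer shows for the drive
icon=setup.exe,0
label=Conference Proceedings
```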

But none of the rest of the report explains how Kaspersky could have learned so much about NSA’s tools.

We now may have our answer: the initial discovery of NSA tools led to further discovery, with Kaspersky’s AV doing precisely what it’s supposed to do. If some NSA contractor delivered all that up to Kaspersky, it would explain the breadth of Kaspersky’s knowledge.

It would also explain why NSA would counter-hack Kaspersky using Duqu 2.0, which led to Kaspersky learning more about NSA’s tools.

So to sum up, Eugene Kaspersky’s reluctance to visit the US dates back to a period when 1) Kaspersky’s researchers released detailed analysis of some of NSA and GCHQ’s key tools, which seems to have led to 2) an NSA hack of Kaspersky, which in turn shortly preceded 3) some reporting based off unexplained emails floating accusations of unfair competition dating back to 2009 and earlier.

We now know all that came after Kaspersky found at least some of these tools sitting on some NSA contractor’s home laptop.

This still doesn’t explain how Russian hackers figured out precisely where Kaspersky was getting this information from — which is a real question, but not one the WSJ piece answers.

But reading those reports again, especially the Equation Group one, should make it clear how the Russian government could have discovered that Kaspersky had discovered these tools.

Vaporous Voids: Questions Remain About Duqu 2.0 Malware

The use of stolen Foxconn digital certificates in Duqu 2.0 gnaws at me, but I can’t put my finger on what exactly disturbs me. As detailed as reporting has been, there’s not enough information about this malware’s creation. Nor is there enough detail about its targeting of Kaspersky Lab and the P5+1 talks with Iran.

Kaspersky Lab carefully managed the release of the Duqu 2.0 news — from the information security firm’s initial post and an op-ed, through the first wave of media reports. There’s surely information withheld from the public, known to no other entities besides Kaspersky Lab and the hackers.

Is it withheld information that nags, leaving vaporous voids in the story’s context? Possibly.

But there are other puzzle pieces floating around without a home, parts that fit into a multi-dimensional image. They may fit into this story if enough information emerges.

Putting aside how much Duqu 2.0 hurts trust in certificates, how did hackers steal any from Foxconn? Did the hackers break into Foxconn’s network? Did they intercept communications to/from Foxconn? Did they hack another certificate authority?

If they broke into Foxconn, did they use the same approach the NSA used to hack Syria — with success this time? You may recall the NSA tried to hack Syria’s communications in 2012, by inserting an exploit into a router. But in doing so, the NSA bricked the router. Because the device was DOA, the NSA could not undo its work and left evidence of hacking behind. The router’s crash took out Syria’s internet. Rapid recovery of service preoccupied the Syrians so much that they didn’t investigate the cause of the crash.

The NSA was ready to deny the operation, though, should the Syrians discover the hack:

…Back at TAO’s operations center, the tension was broken with a joke that contained more than a little truth: “If we get caught, we can always point the finger at Israel.”

Did the NSA’s attempted hack of Syria in 2012 provide direction along with added incentive for Duqu 2.0? The failed Syria hack demonstrated that evidence must disappear with loss of power should an attempt crash a device — but the malware must have adequate persistence in the targeted network. NSA’s readiness to blame Israel for the failed Syria hack may also have encouraged a fuck-you approach to hacking the P5+1 Iran talks.

You Were Warned: Cybersecurity Expert Edition — Now with Space Stations

Over the last handful of days breathless reports may have crossed your media streams about Stuxnet infecting the International Space Station.

The reports were conflations or misinterpretations of cybersecurity expert Eugene Kaspersky’s recent comments before the Australian Press Club in Canberra. Here’s an excerpt from his remarks, which you can enjoy in full in the video embedded above:

[26:03] “…[government] departments which are responsible for the national security for national defense, they’re scared to death. They don’t know what to do. They do understand the scenarios. They do understand it is possible to shut down power plants, power grids, space stations. They don’t know what to do. Uh, departments which are responsible for offense, they see it as an opportunity. They don’t understand that in cyberspace, everything you do is [a] boomerang. It will get back to you.

[26:39] Stuxnet, which was, I don’t know, if you believe American media, it was written, it was developed by American and Israel secret services, Stuxnet, against Iran to damage Iranian nuclear program. How many computers, how many enterprises were hit by Stuxnet in the United States, do you know? I don’t know, but many.

Last year for example, Chevron, they agreed that they were badly infected by Stuxnet. A friend of mine, work in Russian nuclear power plant, once during this Stuxnet time, sent a message that their nuclear plant network, which is disconnected from the internet, in Russia there’s all that this [cutting gestures, garbled], so the man sent the message that their internal network is badly infected with Stuxnet.

[27:50] Unfortunately these people who are responsible for offensive technologies, they recognize cyber weapons as an opportunity. And a third category of the politicians of the government, they don’t care. So there are three types of people: scared to death, opportunity, don’t care.”

He didn’t actually say the ISS was infected with Stuxnet; he only suggested it’s possible Stuxnet could infect devices on board. Malware infection has happened before, when a Russian astronaut brought an infected device with her to the station, where it was used on WinXP machines.

But the Chevron example is accurate, and we’ll have to take the anecdote about a Russian nuclear power plant as fact. We don’t know how many facilities here in the U.S. or abroad have been infected and negatively impacted, as only Chevron to date has openly admitted exposure. It’s not a stretch to assume Stuxnet could exist in every manner of facility using SCADA equipment combined with Windows PCs; even the air-gapped Russian nuclear plant, cut off from the internet as Kaspersky indicates, was infected.

The only thing that may have kept Stuxnet from inflicting damage upon infection is the specificity of the encrypted payload contained in the versions released in order to take out Iran’s Natanz nuclear facility. Were the payload(s) injected with modified code to adapt to their host environs, there surely would have been more obvious enterprise disruptions.
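To illustrate the payload-specificity point in the abstract (the profile fields and values below are hypothetical, not Stuxnet’s actual checks), the logic amounts to a worm that spreads indiscriminately but gates its destructive routine behind a fingerprint of the industrial equipment attached to the host:

```python
# Abstract sketch of payload gating by environment fingerprint -- all field
# names and values are hypothetical, for illustration only.
EXPECTED_PROFILE = {
    "plc_vendor": "ExampleCorp",   # hypothetical controller vendor
    "drive_count": 42,             # hypothetical number of attached drives
    "drive_frequency_hz": (900, 1100),
}

def matches_target(observed: dict) -> bool:
    lo, hi = EXPECTED_PROFILE["drive_frequency_hz"]
    return (
        observed.get("plc_vendor") == EXPECTED_PROFILE["plc_vendor"]
        and observed.get("drive_count") == EXPECTED_PROFILE["drive_count"]
        and lo <= observed.get("drive_frequency_hz", 0) <= hi
    )

def run(observed: dict) -> None:
    if matches_target(observed):
        print("profile matched: the payload would activate on this host")
    else:
        # On the vast majority of infected machines nothing visible happens;
        # the infection just sits there and keeps spreading.
        print("profile not matched: stay dormant")

run({"plc_vendor": "OtherCorp", "drive_count": 3, "drive_frequency_hz": 50})
```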

In other words, Stuxnet remains a ticking time bomb, threatening energy and manufacturing production at a minimum, and other systems like those of the ISS in the worst case.