StuxNet: Covert Op-Exposing Code In, Covert Op-Exposing Code Out

In this interview between David Sanger and Jake Tapper, Sanger makes a striking claim: that he doesn’t know who leaked StuxNet.

I’ll tell you a deep secret. Who leaked the fact? Whoever it was who programmed this thing and made a mistake in it in 2010 so that the bug made it out of the Natanz nuclear plant, got replicated around the world so the entire world could go see this code and figure out that there was some kind of cyberattack underway. I have no idea who that person was. It wasn’t a person, it wasn’t a person, it was a technological error.

At one level, Sanger is just making the point I made here: the age of cyberwar may erode even very disciplined Administration attempts to cloak their covert operations in secrecy. Once StuxNet got out, it didn’t take Administration (or Israeli) sources leaking to expose the program.

But I’m amused that Sanger claims he doesn’t know who leaked the information because he doesn’t know who committed the “technological error” that allowed the code to escape Natanz. I find it particularly amusing given that Dianne Feinstein recently suggested Sanger misled her about what he would publish (while not denying she might call for jailing journalists who report such secrets).

What you have are very sophisticated journalists. David Sanger is one of the best. I spoke–he came into my office, he saw me, we’ve worked together at the Aspen Strategy Institute. He assured me that what he was publishing he had worked out with various agencies and he didn’t believe that anything was revealed that wasn’t known already. Well, I read the NY Times article and my heart dropped because he wove a tapestry which has an impact that’s beyond any single one thing. And he’s very good at what he does and he spent a year figuring it all out.

Now that DiFi has attacked him, Sanger claims he doesn’t know who made this “technological error.”

But that’s not what he said in his article, as I noted here. His article clearly reported two sources–one of them a quote from Joe Biden–blaming the Israelis.

An error in the code, they said, had led it to spread to an engineer’s computer when it was hooked up to the centrifuges. When the engineer left Natanz and connected the computer to the Internet, the American- and Israeli-made bug failed to recognize that its environment had changed. It began replicating itself all around the world. Suddenly, the code was exposed, though its intent would not be clear, at least to ordinary computer users.

“We think there was a modification done by the Israelis,” one of the briefers told the president, “and we don’t know if we were part of that activity.”

Mr. Obama, according to officials in the room, asked a series of questions, fearful that the code could do damage outside the plant. The answers came back in hedged terms. Mr. Biden fumed. “It’s got to be the Israelis,” he said. “They went too far.”

And even though Sanger calls this code an “error,” the quotations he includes show that the President’s briefer and Joe Biden believe it was not an error at all.

In this post, I suggested that the Israelis coded StuxNet to escape, without telling the Americans, so as to undermine American attempts to occupy them with cyberwar to prevent hot war. That is, the implication of Sanger’s article (which he now seems to be trying to retract) is that the Israelis deliberately exposed our cyberwar attack so as to make it more likely they could start a war with Iran.

But there is a far more ominous possibility. The Russians, based on analysis they did at Iran’s Bushehr nuclear plant, have claimed StuxNet might have caused, and still might cause, Bushehr to explode, effectively setting off a nuclear bomb using code.

Is DiFi so angry at Sanger because he ham-handedly revealed that the Israelis deliberately turned StuxNet into a potential WMD?

87 replies
  1. MadDog says:

    …Is DiFi so angry at Sanger because he ham-handedly revealed that the Israelis deliberately turned StuxNet into a potential WMD?

    Or is DiFi so angry because Sanger easily conned her into confirming and leaking that Stuxnet was ours and the Israelis’?


  2. The Raven says:

    I think it’s likely to be an unintended error, though one can’t yet rule out malice. It’s incredibly easy to make mistakes in software development. Finger-pointing once they are made is common too: “The Israelis did it,” “The Americans did it.” No-one wants the bug that made the disaster to be their fault, after all, and since it’s so easy to make errors which you don’t notice, it’s easy to believe that the error was Someone Else’s Fault. There is also a similarity to biological warfare: it is easy for a computer virus to escape, and attack systems it was not aimed at.

    I don’t think the Russian analysis is plausible here: it seems to me fear-mongering. The much more likely result with most software bugs, and likely with this one, is the result that we have actually seen: systems failure.

  3. emptywheel says:

    @MadDog: No, it is assuredly not the latter. He and William Broad did that over a year ago and no one cared in the least.

  4. emptywheel says:

    @The Raven: There are two separate questions. 1) Did the Israelis intentionally recode StuxNet to set it free? The language here suggests we believe they did. 2) Did they do it to blow up Bushehr? I agree that’s less likely. But I also suspect that’s what the Russians think they did, and they happen to be pretty good at understanding code.

  5. Frank33 says:

    StuxNet might have caused, and still might cause, Bushehr to explode, effectively setting off a nuclear bomb using code.

    Future Computer Games are so exciting, and now Netbots can take over Nuclear Facilities. Or the future is HERE. Let us hope all the nuclear power plants are using Microsoft products and their networks are connected to the public internets. And hospitals too.

    At least we do not have to worry about the super malware Flame. It has been terminated with extreme prejudice.

  6. please says:

    @The Raven

    This wasn’t an app being released on the App store that you’d patch later as bugs were ironed out. It’s highly suspect to assume that it was just a mistake; it’s a glaring error considering the significance of the operation.

  7. please says:

    @The Raven: We aren’t dealing with an app released on the App store where it’s expected that bugs will be ironed out in the future. Human error is always possible, but it’s highly suspect to assume that an error allowed such a significant operation to propagate.

  8. Arbusto says:

    Yep—had to be the Israelis because the US doesn’t make mistakes when ginning up an extremely time sensitive bit of code. The second bit of bullshit is the continued belief that Israeli intel has the same objective as the US, i.e. the enemy of my enemy is my friend. I think the Mossad runs circles around our people from the NSA and CIA.

  9. Cheryl Rofer says:

    Nuclear reactors cannot “explode with the force of a small nuclear bomb.” If someone in Israel believes this and added stuff into Stuxnet to try to make it happen, then the US attempt to distract Israel with Stuxnet succeeded.

  10. orionATL says:

    there’s a fair amount of software “machinery”, a simple example is a compiler, used in creating a finished computer program. similarly, a programming language is required. not every line nor every logical clause is done only by human hand.

    i suppose it is at least conceivable, and quite frightening, that the “error” that caused stuxnet to run off the reservation might have been of this sort.

  11. Duncan Hare says:


    Bullshit. That code fragment just loads parameters. There is no evidence it “terminates” or “deletes” flame.

    The only function call in the highlighted fragment is “getFlameId”

  12. Duncan Hare says:


    Code is full of human errors, and no other form of error.

    There is a probability of error for each and every “if” statement.
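    That point compounds quickly. A minimal sketch (the per-branch error rate here is an invented illustration, not a measured figure) of why even tiny per-branch error probabilities make a bug in a large codebase near-certain:

    ```python
    def p_at_least_one_bug(n_branches: int, p_per_branch: float) -> float:
        """Probability that at least one branch decision is wrong,
        assuming independent errors with probability p_per_branch each."""
        return 1.0 - (1.0 - p_per_branch) ** n_branches

    # Even at one error per 1,000 branch decisions, a codebase with
    # 5,000 "if" statements is almost certain to contain a bug somewhere.
    print(round(p_at_least_one_bug(5000, 0.001), 3))  # 0.993
    ```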

  13. Rayne says:

    Earliest news reference I can find–without digging too deeply into technical sites–is (gods help me) WashingtonTimes on 10-OCT-2010. See this graf:

    He [German security researcher Ralph Langner] said that Stuxnet propagates itself across a corporate network, concealing its presence and looking for the special kind of ICS software it is programmed to attack. When it finds the software, a package made by the German industrial giant Siemens AG, it uploads blocks of encrypted code, effectively taking over the machinery the system is running.


    Technical news outlets say Stuxnet first appeared June 2010. That’s a little less than 4 months between first reported sighting and WT’s report, which means it’s been known almost since Stuxnet’s discovery that it spread–this was not a bug but a feature, one built for search-and-destroy missions. It didn’t go rogue, it was designed to run free.

    Should also point out the little blurb from a presentation at Langner’s company’s site: “not a Pentagon job”

    (Ugh, can’t believe I had to cite the MoonyTimes.)

  14. Rayne says:

    And now as to opinion on whether Israel deliberately released Stuxnet:

    I think we are looking at kabuki theater.

    Either the code was built to run and nobody wants to take the blame for the damage it may have caused or will cause — or — one of the contributing partners was stupid/negligent about the code’s capability and is quite unhappy about exposure on many fronts.

    Or maybe both.

    There are also at least two extremely large multinational corporations whose products have been named in the mix; they don’t want this mess to adversely impact them, either. Their interests and the interests of equally large or larger corporations that use their products are also at risk. Pressure from these corporations may factor into the noise generated.

    EDIT: Should also add another bit noted in that Langner presentation: Zero chance of Stuxnet working without a test facility

    Yeah. They knew this would move, kids.

  15. cregan says:

    No, DiFi was upset because Sanger didn’t reveal that an inconsequential CIA employee worked at the CIA. Thereby, not making it possible to have a big hullabaloo about it.

  16. Rayne says:

    Unf. Am totally enjoying the hell out of Langner’s January 2012 presentation on Stuxnet.

    So sexy to hear “frigging big [array]” in German accent. Gets a geek girl all hot under the tablet.

    EDIT: Do take note of comments at link above. A commenter believes the target may have been North Korea, which I thought was a possibility when Stuxnet was discovered and its likely intended use first announced. Back in 2006-2008, when Stuxnet-and/or-Flame coding may originally have begun, the US had many concerns about NK’s Yongbyon nuclear facility; NK was in bed with Syria on development (and Kim Jong Il’s leadership/government was highly questionable).

    In other words, the code for Stuxnet et al had multiple uses. I would put money on the code having remote access for targeting other sites on demand–including Yongbyon if things didn’t work out diplomatically. China may well have had a vested interest in Stuxnet’s success if this code was intended to target NK as necessary.

  17. The Raven says:

    I can only stress that errors in software development are near-certain, so much so that half of software development time is typically devoted to validating the software and correcting errors. I would want to see some evidence in the code before seriously arguing that the escape of the virus was deliberate, let alone ascribing it to one member of the development team. I’d be surprised if the developers even considered the possibility of escape, or were concerned with it. The developers had a program that did its job and didn’t seem to do other harm; that was probably enough for them.

    To the person who argues that “This wasn’t an app being released on the App store”: that means there is a larger chance of unnoticed bugs, not a lesser one! Software developers don’t plan on developing broken software; it is a result of unnoticed errors. The scope of testing of an App store app is probably far more extensive than was undertaken–or perhaps even possible–for Stuxnet.

  18. Rayne says:

    @The Raven: You really need to watch Langner’s presentation. They had to have tested the crap out of this software, and on a test bed system. Coders knew exactly what this software was going to do; the question is whether ALL the coders, or ALL the authorizing entities knew as well.

    At about 39:35 into the presentation:

    “Don’t ask me what this means for the actuators, I don’t know, it’s just some detail that gives you an idea that these guys know the centrifuges better than the Iranians. I mean, they know everything.”

    Really effing impressive how much work went into this–this project had to have been launched much earlier in the Bush administration. I’m hoping to hear an estimate on man-hours for coding, testing somewhere in my digs soon.

  19. Rayne says:

    Frick. This Langner presentation is a motherlode. I will have to watch it several more times, it’s so chock full of goodies.

    “Vendor” is implicated — and I think this means Siemens, unless it’s a software vendor for Siemens equipment — because the original system relies on read/write instead of read-only. This is a deliberate, fundamental “design flaw” which has not been fixed–by choice.

    Pointedly says this looks like CIA and not DOD.

    EDIT: OMG, he’s actually pointing out the flaws and talking into the camera during Q&A about the improvements for 2.0. Jeebus. I think this shoots the excuse upthread of coding errors in the ass with a double-aught buckshot load at point blank range.

    EDIT2: In reply to Q&A about software QA, he says the QA on Stuxnet far surpassed the original manufacturer’s QA. Um, again, another double-aught blast to the ass re: coding error. Granted, much of the research is on PLC code, but the coding must have been comparable across the project since downside would be enormous if this code was intended for production targets other than Natanz (like Bushehr or Yongbyon).

    EDIT3: Big take-aways from presentation–

    Big Lesson number 1: “Sophisticated attackers go after your engineering systems.” Has not been addressed in many industries. After Stuxnet, your engineering systems should be security priority number one.

    Big Lesson number 2: “…sophisticated attacker does not need online access. so every time you still hear, ‘But our facility is not connected to the internet,’ you know that this person has no clue what they are talking about.” In other words, this may have spread via network, but this malware came in on two feet and could come in on two feet anywhere else.

  20. The Raven says:

    Rayne, the primary goal of the Stuxnet QA effort was not to keep Stuxnet from spreading. Do you remember the computer problem that scrubbed the launch of the very first space shuttle mission? I remember the report on that. That code was heavily reviewed and tested to a fare-thee-well. And still, it failed on the pad. So coding errors are still possible with the best engineering effort.

    (You can read a draft of that report here. A similar report ended up in CACM, I think, but if it’s there it’s behind a paywall.)

  21. Rayne says:

    @The Raven: As I noted in EDIT2 of my preceding comment:

    “…Granted, much of the research is on PLC code, but the coding must have been comparable across the project since downside would be enormous if this code was intended for production targets other than Natanz (like Bushehr or Yongbyon).”

    And in EDIT3:

    “…this may have spread via network, but this malware came in on two feet and could come in on two feet anywhere else.”

    Watch the Langner presentation video. Whoever/whatever wrote the package did so with obsessive attention to detail.

    Ugh. Can’t even begin to use the Shuttle for comparison. By the time of the catastrophic failure, there were far too many corners being cut all over the place. It’s a wonder so many missions even made it off the ground in the 1980s.

  22. emptywheel says:

    @Rayne: Sanger’s original story said that we went to Siemens and said we had cyber concerns (the same ones that we keep talking about for nuke sites, backbones of various types, and the like).

    Meaning every time we go to a vendor with that claim, we may well be doing research.

  23. joanneleon says:

    Is there any chance that Stuxnet could get into one of our nuclear facilities and cause the same kind of damage?

  24. The Raven says:

    Rayne, that shuttle software failure was at the first launch. Corners were not cut then.

    If I can find an hour or so, I’ll listen to the whole video, but the bits I’ve listened to refer to the centrifuges, rather than the escape of the virus. I do note that at 51 minutes or so, Langner discusses a few errors on the part of the developers: there were things they did not care about. At about 57:40, there’s a discussion of the developers’ QA effort. At 59:20, he comments “There were parts of the attack that were thoroughly tested, very well designed […] But there are other parts of the attack that were written in a rush, and nobody cared to really evaluate this […]”

    Which supports what I keep pointing out: the best efforts go towards the project goals. Keeping it from escaping wasn’t one of them.

  25. emptywheel says:

    @Rayne: One thing I’m cognizant of is that Richard Clarke acts like he knows what he’s talking about. He left in 2003.

    That seems early. Particularly given that that’s when Libby was busy outing Plame, presumably to fuck up our efforts to thwart Iran’s program.

  26. joanneleon says:

    The change that was made could have been as simple as commenting out (disabling) a section of code that checked to make sure what kind of environment the code was running in.

    When testing code, software developers often comment out certain sections because it’s hard to set up a test environment that is exactly like the real one.

    So while it might have been an intentional error, I can easily see how it could have been a mistake too.
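    A hypothetical sketch of that scenario (all names and markers here are invented for illustration; none of this is from the actual Stuxnet code). A propagation guard that checks the runtime environment gets commented out for a test run and never restored:

    ```python
    def looks_like_target_environment(host: dict) -> bool:
        """Hypothetical guard: only act inside the intended facility.
        The marker fields are invented placeholders."""
        return (host.get("plc_vendor") == "expected_vendor"
                and host.get("air_gapped", False))

    def maybe_propagate(host: dict) -> bool:
        # During testing it is hard to replicate the real environment,
        # so a developer might disable the guard "temporarily":
        #
        # if not looks_like_target_environment(host):
        #     return False          # stay dormant outside the target
        #
        # With the check commented out, the payload spreads anywhere.
        return True

    # An ordinary internet-connected machine now gets infected:
    print(maybe_propagate({"plc_vendor": "other", "air_gapped": False}))  # True
    ```

    With the guard disabled, the same build behaves correctly on the test bed and spreads indiscriminately in the field.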

  27. joanneleon says:

    @please: Well it was an incredibly sensitive operation, yes. But the project was also expedited, or at least that was part of the story, that our government fast tracked it. I doubt that there were many developers working on this because of the high level of secrecy. And if it was a super critical project and the developers were being pushed, the chances of mistakes being made get higher and higher.

  28. Rayne says:

    @The Raven:

    — Not going any further with the shuttle stuff. Period.

    — You insist there are errors–bugs. Prove they aren’t features. Which raises the question: do you know with 100% certainty that Stuxnet was intended for Natanz alone? Do you actually know what the project goal was/goals were?

    Langner says between 59:20 and 1:00:00 there was rushed code, but he doesn’t say it’s buggy or error-filled. He suggests this is an indicator the code wasn’t critical to the specific mission as well as an indicator that multiple teams were at work.

  29. Rayne says:

    @emptywheel: Yet as of Jan 2012, no patch to modify read/write to read-only had been issued by Siemens for what is a very simple, straightforward design flaw, simpler than the bulk of Microsoft vulnerabilities.

    “Oh look, we haz concerns!” and no follow-up effort made to address the concerns by either vendors or gov’t? Fishy.

    Even if some portion of the US gov’t expressed concerns, there was no effort to plug the identified holes. Some of these attempts to research are just window dressing–they provide plausible deniability.

    As Langner says in his presentation, design flaws are how the pros do it–not vulnerabilities. Given this, pros would likely insist the flaws remain until other flaws can be developed.

  30. orionATL says:

    so the u.s. has now been shown to have been crying wolf about cyberwarfare while conducting it – against a sovereign non-belligerent.

    maybe it’s the potential nasty publicity that got senator frankenstein’s panties in a twist.

    we just don’t play by the rule of law, or any other rule, unless it’s a rule we made up.

  31. joanneleon says:

    @Rayne: Rushed code still indicates a big risk. And everyone in software development knows that in the transition from a test environment to a production environment, when it goes live, the risk of things going wrong is really high.

  32. Rayne says:


    Stuxnet attacked PLCs — in this case, Siemens-made PLCs.

    PLCs are fairly simple computers used to operate all kinds of equipment, from power control to pharmaceutical manufacturing and a host of functions in between. These devices generally have network ports and often communicate with a master control device. See for an explanation.

    Could the exact same code targeting Natanz PLCs damage nuclear energy facilities? Yes and no; it really depends on what Stuxnet’s code tells an infected device to do. Likely Stuxnet would detect it was not in a targeted PLC and render itself inert. But if infected, there’s a risk that a Flame-type app could detect Stuxnet and call for other malicious code to insert itself into the original Stuxnet. Hackers apart from a nation-state could make use of Stuxnet on an infected system to launch an attack; I could see this happening rather easily in chemical plants’ PLCs, for example.

    Possible? Yes. Probable? Somewhat iffy, but better odds than playing the lottery for a multi-million dollar prize.
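    The “yes and no” comes down to target fingerprinting. A minimal sketch of the idea (the fingerprint fields and values below are invented placeholders; the real code checked specific Siemens PLC configurations):

    ```python
    # Invented fingerprint of the intended installation.
    TARGET_FINGERPRINT = {
        "plc_model": "S7-315",        # illustrative model designation
        "frequency_converters": 33,   # illustrative cascade layout
    }

    def decide_action(plc_config: dict) -> str:
        """Attack only when every fingerprint field matches; otherwise go inert."""
        for key, expected in TARGET_FINGERPRINT.items():
            if plc_config.get(key) != expected:
                return "inert"   # wrong facility: do nothing, stay hidden
        return "payload"         # matched facility: deliver sabotage logic

    print(decide_action({"plc_model": "S7-315", "frequency_converters": 33}))  # payload
    print(decide_action({"plc_model": "S7-414", "frequency_converters": 2}))   # inert
    ```

    The risk described above is that someone swaps in a new fingerprint and payload, reusing the delivery and concealment machinery against a different facility.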

  33. Rayne says:

    @joanneleon: Again, I’ll point to Langner’s presentation. There’s no indication that the rushed code portions affected the project payload at all.

    Still comes down to knowing exactly what the scope of the project — its mission(s) and goal(s) — might have been.

    I’ll point to my reply to emptywheel above; if dispersion of Stuxnet was NOT a part of mission/goal, then why hasn’t the gov’t demanded the Siemens design flaw get patched ASAP? The actions tell a story the missing words don’t.

    EDIT: In re: testing > production — should point out again that Langner said Stuxnet coders knew more about the systems than the Iranians did, and the code was definitely trialed on a test bed.

  34. spanishinquisition says:

    Actually the Israelis are claiming it was their cyberwar and Obama is only claiming credit to win an election:
    “The Israeli officials actually told me a different version. They said that it was Israeli intelligence that began, a few years earlier, a cyberspace campaign to damage and slow down Iran’s nuclear intentions. And only later they managed to convince the USA to consider a joint operation — which, at the time, was unheard of.”

    This is the problem with leaks: they aren’t transparency, and they can be self-serving as well as leave things out. Maybe it was the US’s project where we let the Israelis join in and maybe it was the other way around, but we don’t know. Relying upon the one leaking to give you the true and correct story is a mistake.

  35. spanishinquisition says:

    @Arbusto: “Yep—had to be the Israeli’s because the US doesn’t make mistakes when ginning up an extremely time sensitive bit of code”

    Yes, I take issue with that–just because Joe Biden allegedly says something, it doesn’t mean it is true. First of all, we don’t even know if Joe Biden actually said it, and even if he did say it, we don’t know if he’s right. To me the quote from Joe seems quite convenient in establishing the “Obama is responsible for all the good things and none of the bad things” narrative.

  36. The Raven says:

    Siemens is acting like most software vendors when a problem is found. Still, it is possible they are deliberately leaving that equipment vulnerable. The Siemens WinCC system is a general purpose industrial process control system, and a future version of Stuxnet could target other industrial processes.

    From other sources, Stuxnet appears to have been targeted at Iran. The code can, of course, be reused.

    Rayne, are you seriously claiming that Stuxnet has no bugs?

  37. Rayne says:

    @The Raven:

    I didn’t say there were no bugs. Go back and point to where I said there were no bugs. Langner (and numerous others) went through the code, line by line; I’m very skeptical about the existence of bugs given the detailed review of the code to date. I’m convinced the dispersion capability was a feature because of gov’t response to its spread.

    (Jeebus, if it wasn’t, the DHS should be hair-on-fire inspecting every at-risk system and demanding a purge of the offending code as well as a permanent patch. But I don’t smell burnt hair, do you?)

    You believe there are bugs? Then point them out. Seriously. I’ll bet Langner and Kaspersky will call you, as will a number of journalists.

  38. Kathleen says:

    Sanger sure tries to distract from the who leaked issue.

    Sanger: “I have no idea who that person was, it was not a person, it was a technological error.” What?

    NPR all things considered reported tonight that Holder and team going after…leaker

    So interesting that Axelrod was making it into those Tuesday meetings

  39. MadDog says:

    @The Raven: Let’s not just limit this to vulnerabilities in “civilian engineering systems”.

    Consider places like Pantex where the US makes its nuclear weaponry. I wouldn’t be surprised if they have a good many machining toolsets run by systems not unlike the Siemens controllers.

  40. Rayne says:

    Don’t know if you caught the Langner presentation, but at about 43:50 he points out the risk is not only industrial controls security at risk, but SAFETY systems.

    Made me think automatically of a Fortune 100 company’s manufacturing plant gate access control system that was run on a PLC, for which I used to have monitoring/IT repair responsibility. ~shudder~ It’s still a form of security, but I can see where this was a safety system, too, for both workers and the production system.

    What other safety systems are at risk?

    And jeepers, why isn’t DHS panicked?

  41. MadDog says:

    @Rayne: I’ve been following your commentary on this post with great interest. As a now reformed techie of 30 years, I’ll gladly leave the code review to you. *g*

    As to DHS not panicking, they still evidently think of themselves as the Fire Department: their job is to put out the fires after they happen, not before.

  42. The Raven says:

    Rayne, you are arguing like a wingnut who can’t concede a point. And, damnit, it’s not a big point.

    I can’t prove that the escape of Stuxnet was not deliberate; you can’t prove that it was. It’s just–when software does something unexpected, the thing to bet on is human error, not design. Edsger Dijkstra’s maxim, “Testing shows the presence, but not the absence of bugs,” haunts the software engineering profession; no amount of testing is proof that the software is correct.
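    A toy example of Dijkstra’s point (contrived for illustration): every test the developer thought to write passes, yet a boundary case nobody tried is still wrong.

    ```python
    def in_operating_band(rpm: int) -> bool:
        """Intended contract: True for 1064 <= rpm <= 63000.
        Off-by-one bug: '<' should be '<=', so 63000 is wrongly rejected."""
        return 1064 <= rpm < 63000

    # Every test the developer thought to write passes...
    assert in_operating_band(1064)
    assert in_operating_band(30000)
    assert not in_operating_band(500)

    # ...yet the boundary case nobody tried is wrong:
    print(in_operating_band(63000))  # False, though the contract says True
    ```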

    Most people who don’t know software underestimate how easy it is to leave bugs in programs. I have no trouble believing that Joe Biden believes this was deliberate, but I have trouble believing it. Honestly, I can’t imagine any intelligence or covert ops agency deliberately letting such a valuable tool of sabotage loose. Half its value is its secrecy. It’s more of a “lone crazy” sort of thing: someone who just wants to do harm, and doesn’t care who gets hurt.

    I suppose there could be a lone crazy at that. Ummmm…

    Anyhow, peace.

  43. orionATL says:


    this is very interesting, specifically, the slow-walk toward fixes.

    wonder if they were intending to apply this elsewhere, for example, n. korea?

  44. The Raven says:

    @Rayne: “And jeepers, why isn’t DHS panicked?”

    I wasn’t going to respond further, but I find I have something to say on the subject after all. Who knows? You may even agree.

    Software professionals have been screaming about this for decades. Peter G. Neumann has been editing The RISKS Digest since 1985. You probably know Bruce Schneier’s work on security as well. No-one listens…except for the banksters, who paid to have their Y2K problems fixed. And every now and again, I meet someone who thinks there was no Y2K problem, that it was all a scam. Rather like climate change denial, come to think of it. Most people don’t like to think about these things, and they don’t.

  45. MadDog says:

    @The Raven: Ahhh my friend, I think I can put this in a perspective that might benefit both your and Rayne’s positions.

    I am less concerned with whether this was an inadvertent coding error or an intentional malicious act of some programmer.

    To me, and again, I have a 30+ year background in computer stuff, the bigger concern is with the decision-making of our National Security State leadership.

    You see, these folks are either clueless non-techie users (I rather doubt this is the case), or they are malevolent “ends justify the means” ideologues.

    The reason I say this is that anyone who understands computers knows that computer code can live forever. Code, like Stuxnet, that is used to sabotage an adversary’s systems, can be turned around and used to do the very same thing to ours.

    I can’t help but think that these fools made a deliberate “ends justify the means” decision to employ Stuxnet against the Iranian nuclear program with the full understanding that such potentially devastating software could easily be re-used to target a zillion relatively unprotected systems in the West.

    I have to believe they knew this, and nevertheless, deliberately chose to ignore any possible, and likely, blowback.

    Whether the escape of Stuxnet or Flame was inadvertent or intentional, the bigger problem is simply that it was foreordained. And the decision-makers knew this yet still chose to proceed.

    This then to me is the crux of the issue. We have people who run our government who are completely amoral and without a scintilla of ethical restraint who would do anything to anyone without blinking an eye to further their objectives.

  46. The Raven says:

    @MadDog: That’s a version of the “lone crazy” theory, only of course it isn’t a lone crazy; it’s a group of them. I can’t prove you’re wrong; it’s best not to build doomsday weapons–this is common sense. Unfortunately, common sense hasn’t prevented us from doing that. Personally, I am more inclined to the “giant f*ck-up” theory, which pretty well explains a great deal of history.

  47. MadDog says:

    @The Raven: I’m inclined to agree with you to a certain extent. There is truth in the theory that “giant fookups” have indeed played a big role in human history.

    But there is also truth in the fact that deliberate human malevolency too has had an outsized role.

    Given that Stuxnet’s creation began during the Bush/Cheney regime, and that Iran was high up on Cheney’s hit list, I have zero doubt that Darth personally gave the thumbs up on Stuxnet.

  48. Rayne says:


    Yeah…doesn’t the lack of mea culpas and rousing of competitive or licensed vendor products by Siemens look an awful lot like cooperation? Hmm. But then I think the frequency of so-called vulnerabilities on the part of Microsoft products–including here in the Stuxnet scenario–looks an awful lot like opportunity.

    In re: slow walk — Doesn’t actually look like so much as a shoelace has been tied after a year-plus.

    EDIT: Hey, don’t know if you saw this about Siemens… — interesting.

  49. Rayne says:


    This bit:

    This then to me is the crux of the issue. We have people who run our government who are completely amoral and without a scintilla of ethical restraint who would do anything to anyone without blinking an eye to further their objectives.

    Sounds like you’re talking about Deadeye Cheney and his closest peeps. LOL

  50. MadDog says:

    @Rayne: I am, and I did. *g*

    I’ve just been re-reading the Vanity Fair piece on Stuxnet by Michael Gross from April last year. Amazing how much was pieced together back then, before David Sanger’s latest in the NYT caused it to really reach critical mass.

  51. The Raven says:

    @MadDog: “Dick, you’re looking better than ever. I mean that. I think you’re going to live another 254 years, Mr. Vice President.” –Link.

    You’ve got a point there. I suspect that Cheney didn’t understand the consequences but, still. I suppose we must be grateful he never had the opportunity to deploy a biological weapon.

    @emptywheel: Thanks. Clarke also says the self-kill failed by accident.

    The whole issue of infrastructure software security and reliability is a huge and important one, and I agree with Clarke: it is not being addressed.

  52. Rayne says:


    Oh, all the techie mags as well as the researchers themselves have been completely open about their findings. VF was late to the game, but did a nice job with that piece you linked from the Kaspersky POV. I liked this bit in particular, a quote from a Russian Kaspersky engineer:

    “Eugene, you don’t believe, something very frightening, frightening, frightening bad.”

    Um…yeah…if I were in DHS or another US gov’t function, I’d just ignore this as hyperbole and slow-walk response.


  53. Ken_Muldrew says:

    @Rayne: “Could the exact same code targeting Natanz PLCs damage nuclear energy facilities?”

    You have to remember that Stuxnet was targeted to a highly specific bit of engineering. The way it worked was to slow down the centrifuge motors almost to a dead stop and then spin them up to a very high speed. This was either to cycle the centrifuges through the critical speed where resonance might break them or to poison the stream (gas centrifuge purification is a cascade process without check valves–if you stop one centrifuge, then you quickly get unenriched uranium mixing with the enriched stream and you go back to square one (of a several months-long process of enrichment)). Or both. So obviously this can’t have any effect on Bushehr or Yongbyon (they don’t do gaseous enrichment of uranium). Nor can the code (as written) really do anything to any other facility that isn’t almost a clone of Natanz. Most industrial motors won’t take slowing down to 2 Hz without blowing their overloads. Then it is a simple matter to find out that the PLC is telling the motor to slow down to this speed (industrial plants are full of workers, including electricians, who diagnose problems like this constantly). The danger here is not Stuxnet in the wild, it is Stuxnet in the hands of malevolent forces who wish to target another *specific* industrial facility. It provides a template for hijacking a PLC while getting the SCADA to tell the operator that everything is just fine. But it is still hard to do (even with the template), and targeting an industrial process without having human operators notice the problem requires an exceptional understanding of the engineering of the process. This can’t really be done from a bedroom in Latvia, no matter how brilliant the codehead is.

    If DHS was really about security of the Fatherland, then they would be very concerned about such threats. But there are detention facilities to be built, and so many other priorities. I’m sure this is on their list, but perhaps not at the top.

  54. orionATL says:

    well, i’m getting the feeling from reading these many interesting comments that programs like stuxnet and flame are going to have to be treated like biological and chemical warfare.

    ask yourself why any of us would be more afraid of biological warfare, e.g., a smallpox virus weapon, than an atomic weapon?

    more afraid of “nerve gas” than atomic weaponry.

    more afraid of a stuxnet than atomic weaponry?

    the atomic we can see, measure, and control; the others we cannot.

    imagine a stuxnet, accidentally or deliberately, influencing the flood gates that protect the netherlands from the sea,

    the supply of water thru aqueducts to nycity or la,

    the u.s. telecommunications grid or electrical grid?

    a nation’s air traffic control radar?

    then ask yourself what immense “exogenous” harm was done to future international efforts to co-operate to control threats to all humans when the u.s. ignored the international convention on torture.

  55. Ken_Muldrew says:

    @The Raven: “You’ve got a point there. I suspect that Cheney didn’t understand the consequences but, still. I suppose we must be grateful he never had the opportunity to deploy a biological weapon.”

    Are you giving Cheney a pass on Amerithrax?

  56. emptywheel says:

    @The Raven: But you’re ASSUMING it was “unexpected.”

    You have no evidence that it was. The point of this post is that the report–the only new news in Sanger, and thus the news that everyone’s squawking about–suggests it was expected, at least by the Israelis. I’ve provided two potential explanations for why they would do that.

    We don’t know which, if either, is true, but you’re making an assumption contrary to what the reporting that has set DC off on a warpath says. It may all be kabuki, sure. But the report does, in fact, suggest Israel expected this to escape.

  57. emptywheel says:

    @The Raven: Ah, but I can assure you they knew the Y2K problem was a problem when they coded it. So that was not unexpected in the least.

  58. Rayne says:

    Believe I said “Yes and no” to question re: Stuxnet hurting nuclear energy facilities. Really depends on what the code said re: detection of location and the subsequent response. Based on what I saw in Langner’s presentation, Stuxnet code checked to see if it was installed in a PLC meeting criteria matching that of Natanz; if it matched, then it proceeded to next step. If no match, it halted. Whether there were multiple checks, I couldn’t tell.

    What I cannot tell from what I’ve seen so far is whether alternative responses or secondary checks in the system launched different actions; it’s possible that Stuxnet has damaged systems by way of these alt/Plan B actions and we’ve simply not been told. (There was a theory that Stuxnet messed up the power system on India’s INSAT-4B satellite, which failed the first week of July 2010; there are conflicting stories and rebuttals. Who knows for sure?)

    I disagree with you–believe that Stuxnet remains a danger in the wild; the problem is that the code is now deposited, and with a Flame-like app, could be modified to fit a specific situation. Would it be successful–like blowing up a nuclear energy facility? Maybe not, but perhaps simply sticking a software wrench in the works is enough.

    And now that the nature of the PLCs targeted is clear, it may actually be possible to make use of this temporarily-inert Stuxnet infection–not from a bedroom in Latvia, but a campus in China, a la Operation Aurora.

    But jeepers, this stuff is so arcane and abstruse. No wonder DHS reacts as if no threat exists. /s

  59. Ken_Muldrew says:


    Stuxnet won’t do anything to a nuclear energy facility. It requires a particular Siemens PLC, at least 33 VFDs made by either Vacon or Fararo Paya, and those VFDs have to be running at very high speeds (around 1000 Hz). There is only one place where Stuxnet will cause mischief.
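[Ed. note: public analyses (notably Symantec’s Stuxnet dossier) describe exactly this kind of gating: the payload fingerprints the plant configuration and stays dormant unless everything matches. A minimal sketch of such a check, using the criteria from the comment above; the field names and data layout here are illustrative, not the real in-payload structures.]

```python
# Minimal sketch of the target-fingerprinting logic described above.
# The payload activates only if the plant profile matches a very
# specific configuration; field names here are illustrative.

TARGET_VENDORS = {"Vacon", "Fararo Paya"}
MIN_DRIVES = 33          # reported minimum count of frequency converters
MIN_FREQUENCY_HZ = 807   # reported lower bound of the frequency band checked

def is_target(plc):
    """Return True only if the PLC profile matches the intended plant."""
    drives = plc.get("drives", [])
    if len(drives) < MIN_DRIVES:
        return False     # too few variable-frequency drives
    if not all(d["vendor"] in TARGET_VENDORS for d in drives):
        return False     # wrong drive manufacturer
    if not all(d["hz"] >= MIN_FREQUENCY_HZ for d in drives):
        return False     # not spinning in the expected high-speed band
    return True          # full match: payload would activate

# A generic factory floor (e.g. Allen-Bradley drives at 60 Hz) never matches,
# while a Natanz-like cascade does:
generic = {"drives": [{"vendor": "Allen-Bradley", "hz": 60}] * 40}
natanz_like = {"drives": [{"vendor": "Vacon", "hz": 1064}] * 164}
print(is_target(generic), is_target(natanz_like))
```

This is why "only one place where Stuxnet will cause mischief" holds: any ordinary plant fails at least one of the three gates and the payload simply never fires.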

    I’m not arguing against the threat of cyber warfare; we are obviously living in that era. But Stuxnet itself isn’t a threat, it’s more of a recipe for a different threat.

    You might also be interested to know that Siemens PLCs are very popular in Europe but not at all in North America. DHS may feel that anything that interferes with oil ever being sold for Euros may be a net benefit even if it causes great harm to people who don’t sell oil.

  60. Rayne says:


    I’m really struggling with the idea that the US greenlighted/underwrote/ordered a cyberwarfare project making use of four zero-day exploits in Microsoft products, yet failed to understand that this app would spread. It’s a Microsoft product, loaded on a PC; how many of these devices are networked in some way, and/or have ports for storage devices? Short of being a workstation node on a closed system with no ports, the very nature of a Microsoft-loaded PC means software goes on/comes off the system. It’s infectible and it’s infectious; it must be in order to receive proprietary software. This had to be an intentional deliverable.

    Even the argument about disclosure re: threat to Bushehr is lame; there was press about Stuxnet at that site in early September 2010, only weeks after it was publicly announced the malware was on the loose.

    So what’s all the whining and puling really about? Was it that the software was intended to spread and target all the NK content resulting from dismantling part of their nuclear program, and dispersed to Iran, Pakistan, Taiwan, Republic of Korea, and a few other facilities–possibly intended for the al Kibar facility in Syria as well? And this action being acceptable only as a program labeled covert, must therefore have defenders enforcing the labeling as covert–else this is an overt act of war against multiple sovereign nations, including some considered friendly?

  61. Rayne says:

    Write once, use many times. Being a techie, you’re familiar with this concept. As I understand it, a Flame-like app only needs to deploy a find/swap function to replace specs within the Stuxnet code, telling it to check for a different model and run if it finds certain other parameters instead. The underlying architecture of Stuxnet remains in place, deposited and inert, until this find/swap is saved via Siemens’ read/write function. Ostensibly, any Siemens PLC could be affected.

    I’m familiar with the PLC/PAC market. I’ve got background in manufacturing from automotive to chemical, where PLCs are found all over the place. Allen Bradley (div. of Rockwell Automation) is one big name commonly found here as well as ABB, but many manufacturers rely on Siemens, too–like the chemical company I worked for, which had branches across Europe and sister/clone plants in the US.

    Once written for Siemens, it wouldn’t take much for coders to write similar viral malware for A-B, ABB or any other brand device. The core is already written; it just needs customization for a specific situation. I don’t think this is a recipe–this is more like skipping the blueberries in muffins and adding cranberries instead. The hard stuff’s been done; it’s still a muffin payload.
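[Ed. note: the "swap the blueberries for cranberries" point above — keep the delivery and concealment machinery, replace only the target description — can be illustrated as a configuration swap. Purely conceptual: the real payload is compiled code that would have to be reverse-engineered, not a dictionary to be edited; all names below are hypothetical.]

```python
# Conceptual illustration of "write once, retarget many times": the
# propagation/concealment layers stay fixed, and only a small
# target-description block changes. All names here are hypothetical.

STUXNET_LIKE_TEMPLATE = {
    "propagation": ["usb", "network-shares", "print-spooler"],  # delivery layer
    "concealment": "replay-normal-telemetry-to-operator",       # SCADA shows "all fine"
    "target": {                    # the only block a retarget must change
        "plc_family": "Siemens S7-315",
        "drive_vendors": {"Vacon", "Fararo Paya"},
        "min_hz": 807,
    },
}

def retarget(template, new_target):
    """Return a copy of the template with only the target block swapped."""
    variant = dict(template)       # shallow copy: shared machinery reused as-is
    variant["target"] = new_target
    return variant

# A hypothetical variant aimed at a different vendor's controllers:
variant = retarget(STUXNET_LIKE_TEMPLATE, {
    "plc_family": "Allen-Bradley ControlLogix",
    "drive_vendors": {"Rockwell"},
    "min_hz": 60,
})
print(variant["target"]["plc_family"])
```

The design point stands on its own: once the hard parts (zero-day delivery, PLC hijacking, operator deception) exist as a reusable layer, the marginal cost of a new target is writing the new fingerprint and payload parameters.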

  62. Rayne says:

    Hey EW — got a good old-fashioned timeline for you that might be of interest.

    October 9, 2006 — North Korea tests a nuclear device using plutonium technology; an underground blast is detected.

    March 17, 2007 — In international talks, North Korea says it is “preparing to shut down its main nuclear facility.”

    Spring 2007 — Suspicious-looking building in Syria noted, appears to be a “twin” of Yongbyon nuclear facility in North Korea used for production of weapons-grade plutonium, believed to be result of 10-year cooperative program between Syria and NK.

    Late Spring/early June 2007 — Series of meetings, including NSC meeting in June 2007, tackle issue of Syrian facility al Kibar and flesh out a “no core/no war” policy. “No core” meant prevention of nuclear reactors from going online to production; “no war” represented belief that destruction of nuclear capabilities would reduce the risk of war across Mid-East.

    August 1, 2007 — President Bush issues Executive Order 13441, “Blocking Property of Persons Undermining the Sovereignty of Lebanon or its Democratic Processes and Institutions,” issued ostensibly to address Syrian interference in Lebanon’s politics.

    August 31, 2007 — A driver used by the malware agent named Duqu is compiled on a device running a Windows O/S on this date; Duqu is believed to be one of four “cousin” cyberweapon apps built upon the same platform–including Stuxnet.

    September 6, 2007 — Alleged nuclear facility al Kibar in Syria bombed by Israel. Likely legal basis for attack is UN Charter Chapter VII, Article 51.

    April 24, 2008 — During DNI briefing on “Syria’s Covert Nuclear Reactor and North Korea’s Involvement,” official says they had evidence from 2006 of cargo being transferred from North Korea to what is most likely the al Kibar Syrian facility (begun in 2001 and completed in summer of 2007). Officials indicate the deal between NK-Syria was driven by NK’s desire for cash.

    Okay, seriously, I’m done. I’m going to go crawl back under my politics/foreign policy/natsec-free rock.

  63. The Raven says:

    @emptywheel: “Ah, but I can assure you they knew the Y2K problem was a problem when they coded it. So that was not unexpected in the least.”

    Actually, no. The coders knew; the bosses refused to believe. It was hard to persuade the managers to spend money to prevent a problem 20-30 years in the future. And almost everyone thought that the software would be replaced long before Y2K; there wasn’t, when that code was written, much experience that told people how long code and, especially, database designs can persist.

    I think you are giving too much weight to what Joe Biden says and too little to the realities of software engineering. It is hard to overestimate the poisonous combination of managerial ignorance and the difficulty of software development as a source of software failures.

    In any event, I am so out of this discussion.

  64. Rayne says:

    Adder to @Rayne comment above —

    Accidentally lost a bit I meant to add to that timeline entry — Iran and North Korea both received A.Q. Khan’s nuclear technology, which may have included elements of centrifuge design (see North Korea and Iran). Khan has more recently denied such technology transfers. It’s possible that NK’s dismantled components may have been transferred to Iran for cash or energy resources; NK and Iran have continued to collaborate on missile technology (see CSMonitor).

    It occurred to me that perhaps NK’s program offered an opportunity for development of a highly detailed testbed. If there has been access to the NK program for some time, perhaps long before the 2005 6-party talks with NK, AND assuming the program’s technology is substantially similar to that transferred to Iran (if not the same technology), developers would have had plentiful resources for accurate code development.

    Perhaps the squealing about leaks is in part to stem any more speculation before both Syria and NK are mentioned WRT to nuke technology transfers, and before NK–now led by relatively unknown/untried Kim Jong Un and a twitchy military–is cited in any way as relevant to the cyber-attack on Iran?

    Surely just a coincidence, too, that a high concentration of countries benefiting from AQ Khan’s technology expertise–Libya, Syria, Iran–have been subject to revolutionary forces. And including Pakistan, the same have received intense scrutiny from US military.

    We’re all so preoccupied watching the kabuki theater that we’ll probably miss the real show.

  65. Kathleen says:

    “This then to me is the crux of the issue. We have people who run our government who are completely amoral and without a scintilla of ethical restraint who would do anything to anyone without blinking an eye to further their objectives”

    Agree. Iraq..dead..injured…displaced. Not a whisper anymore. Not much of a whisper when U.S. forces were there. Moved to the next PNAC targets

  66. orionATL says:

    much appreciation to

    rayne, raven, and ken muldrew

    for one of the most informative and interesting technical discussions we have had here in a long time.

    each contributed new information for my understanding that i consider well worth storing away.

  67. Kathleen says:

    Flynt and Hillary Mann Leverett
    In his Op Ed, Ambassador Hua states his bottom line up front, with commendable clarity: “It is unrealistic for the US to expect China to act in a way that is harmful to its interests and against its diplomatic principles.” After succinctly reviewing why, contrary to Western stereotypes, “Iran is neither rogue nor fundamentalist,” he gets to the core of Sino-American disagreements over dealing with the Islamic Republic:
    “The US is not willing to let its dominance in the Middle East be challenged by a regional power like Iran; so the hostility and antagonism between the two countries has grown. In contrast, Sino-Iranian relations are one of the oldest bilateral relations in the world and valued by both sides…The foundations for their friendship are that China has never intervened in Iran’s domestic affairs and their economies are complementary, offering huge potential for cooperation.

    The US hopes to enlist China’s help in dealing with Iran. But that’s impossible because China will never join the zero-sum game between the US and Iran…The disagreement between the US and China has become especially serious with the US imposing sanctions to restrict Iran’s oil exports as China is a big importer of Iranian oil. But maintaining relations with Iran is a matter concerning China’s vital interests and China’s fundamental diplomatic principles. The US should respect China’s friendly relations with Iran, as well as its interests.”

  68. Kathleen says:

    Sanger: “I have no idea who that person was. It wasn’t a person, it was a technological error.” How quickly can an individual contradict themselves?

  69. Gitcheegumee says:

    “Whoever it was who programmed this thing and made a mistake in it in 2010 so that the bug made it out of the Natanz nuclear plant, got replicated around the world so the entire world could go see this code and figure out ……”

    Just an observation, and merely for the record, in reference to timelines: summer 2010 was when Gareth Williams was found dead, locked inside his own duffel bag in his London flat.

    Jus’ sayin’.

  70. emptywheel says:

    @Rayne: These are the folks that let their DIA computers in Iraq remain open to removable media in the middle of a war, as well as their drone platforms.

  71. emptywheel says:

    @The Raven: Bullshit.

    I’m the daughter of someone who developed computers for a very major manufacturer until 1984. I can assure you they knew. I can remember starting a computer in 1981 and being told about Y2K.

    But I appreciate that you have never explained why they’re doing a leak investigation into something you claim is bogus, and your continued assumption that I know nothing about software.

  72. Rayne says:

    Several things came to mind overnight–this is nearly a post.

    1) One key word could have heightened the leak-freak-out: accelerating.

    Why was the White House so concerned about timing? Tapper even picks up on this, and may have emphasized the word “accelerating” when he introduced Sanger in the video clip. Is there a broader awareness of a timing sensitivity the public doesn’t know about, one that intended cyber-targets should not share?

    “Mr. Obama decided to accelerate the attacks — begun in the Bush administration and code-named Olympic Games — even after an element of the program accidentally became public in the summer of 2010 because of a programming error that allowed it to escape Iran’s Natanz plant and sent it around the world on the Internet. […] If Olympic Games failed, he told aides, there would be no time for sanctions and diplomacy with Iran to work….” — David Sanger, NYT 01-Jun-2012

    We haven’t seen much feedback from the White House with regard to urgency. We’ve only seen some finger-pointing and non-denial denials that narrowly preserve covert status of the cyber-op–what we have heard could obscure heightened concern about timing.

    See comment regarding development of a “no core/no war” policy. Everyone in the media assumes that Natanz was the only core, and that preventing war in the Mid-East could be achieved by terminating the core’s development. But what if this policy referred to ANY core, including a similar core in North Korea (possibly used to help Iran’s program development), and war in the Korean peninsular region?

    Would this change in scope collapse estimated timing to Iran’s program completion?

    2) Sanger calls the movement of Stuxnet a “mistake” or a “technological error.”

    It’s been argued up-thread that this software wasn’t/was intended to spread. But Sanger and others in the media haven’t pointed out a dissenting opinion by a security expert, Ralph Langner, who is arguably one of the most knowledgeable persons on Stuxnet.

    Langner’s 02-JUN-2012 response to Sanger’s 01-JUN-2012 NYT article:

    …One technical detail that makes little sense is the theory that Stuxnet broke out of Natanz rather than into due to a software bug introduced by the Israelis; this sounds like an attempt (of one of the sources) to put the blame for a non-anticipated side effect of a design feature on somebody else.

    Ignoring the remark about “non-anticipated side effect,” Langner’s opinion diverges sharply from Sanger’s with regard to the direction Stuxnet moved. Did Stuxnet break OUT of Natanz, or did it break IN? Which begs the question, given the amount of dispersion, was Stuxnet also supposed to break into other facilities as well? (Think about the possible originating source of Iranian program equipment at this point.) Was a possible break IN conducted via network, or by human asset(s) at risk if the infection’s direction of movement was disclosed?

    3) Let’s go a bit further in regards to speed and direction.

    If Stuxnet was moving INTO Natanz, where did it move from? Did it come directly from the testbed, the location of which most definitely should remain classified and highly protected?

    As for speed: would the origin of Stuxnet if released from the testbed tell the targets anything about the actual fears of the White House and any other nation-states involved in this program? Would it tell the targets how much of a hurry the White House/nation-state actors were in to get this job done? Would it signal the events that tripped a decision for acceleration? Would it pinpoint another target besides the patently obvious one, Iran?

    For the last 6-8 years, conjecture about the timing of Iran’s nuclear program completion has been shifting. Depending on whether a neocon or a more liberal official were asked, the timing could be 18 months to 10 years. It’s generally assumed that timing was based on Iran keeping all development in house.

    BUT…what if North Korea was doing all the heavy lifting? The reclusive, hermitic nation with similar aims, a need for cash, and very similar nuclear technology might be an optimum partner in development. What if the nuclear blast in May 2009 and subsequent missile tests were really for both North Korea’s and Iran’s programs? Would that suggest Iran’s program is nascent, parallel with North Korea’s program?

    Is that why the hurry, and all the kabuki to redirect attention from near-panic?

    4) The testbed must be secure at all costs if speed is of the essence.

    If another Stuxnet variant must be coded, tested, and launched in a very tight time-frame to defer possible traditional military reaction, the testbed must remain intact, accessible. The location of the testbed might also disclose the nature/identity of assets/resources/sponsors, too. All the conjecture talks about everything but Stuxnet’s testing, as if to avoid it. Was there something in Sanger’s writings to date which may have gotten too close to the testing?

    5) Langner wrote in NYT on 04-JUN-2012: “Almost two years ago, I wrote that Iran seemed to be begging for a cyberattack.”

    If Iran looked like a target, was there another entity trying hard not to look like a target? (read: reclusive, hermitic country) The discovery of Stuxnet infection at Natanz happened within a rather narrow timeframe; was it because the obvious target expected it, allowing a less-than-obvious target to continue its work uninterrupted?

    In closing I should point out two little stray details.

    — Sanger wrote an interesting article about US efforts at containment of North Korea in August 2009, only 8 weeks before North Korea’s underground bomb test–note the boat and its probable intent;

    — North Korea’s early April 2012 missile test failed. Any chance it was a cousin of Stuxnet at work? Iranians were present at this test, by the way. Ahem.

  73. Rayne says:


    Yeah. It’s really hard to assign responsibility for sophisticated cyber-warfare tools to any US intelligence program given the preponderance of evidence for US non-intelligence.


  74. emptywheel says:

    @Rayne: You’re going to enjoy this post:

    The Israelis want credit. And they say it started earlier than we know.

    The Israeli officials actually told me a different version. They said that it was Israeli intelligence that began, a few years earlier, a cyberspace campaign to damage and slow down Iran’s nuclear intentions. And only later they managed to convince the USA to consider a joint operation — which, at the time, was unheard of. Even friendly nations are hesitant to share their technological and intelligence resources against a common enemy.
