Stuxnet: The Curious Incident of the Second Certificate

“Is there any point to which you would wish to draw my attention?”

“To the curious incident of the dog in the night-time.”

“The dog did nothing in the night-time.”

“That was the curious incident,” remarked Sherlock Holmes.

Arthur Conan Doyle (Silver Blaze)

[From ew: William Ockham, who knows a whole lot more about coding than I, shared some interesting thoughts with me about the Stuxnet virus. I asked him to turn those thoughts into a post. Thanks to him for doing so!]

The key to unraveling the mystery of Stuxnet is understanding the meaning of a seemingly purposeless act by the attackers behind the malware. Stuxnet was first reported on June 17, 2010 by VirusBlokAda, an anti-virus company in Belarus. On June 24, VirusBlokAda noticed that two of the Stuxnet components, Windows drivers named MrxCls.sys and MrxNet.sys, were signed with a digital certificate issued to Realtek Semiconductor. VirusBlokAda immediately notified Realtek, and on July 16, VeriSign revoked the Realtek certificate. The very next day, a new Stuxnet driver named jmidebs.sys appeared, but this one was signed with a certificate from JMicron Technology. This new Stuxnet driver had been compiled on July 14. On July 22, five days after the new driver was first reported, VeriSign revoked the JMicron certificate.

The question I want to explore is why the attackers rolled out a new version of their driver signed with the second certificate. This is a key question because this is the one action that we know the attackers took deliberately after the malware became public. It’s an action that they took at a time when there was a lot of information asymmetry in their favor. They knew exactly what they were up to and the rest of us had no clue. They knew that Stuxnet had been in the wild for more than a year, that it had already achieved its primary goal, and that it wasn’t a direct threat to any of the computers it was infecting in July 2010. Rolling out the new driver incurred a substantial cost, and not just in monetary terms. Taking this action gave away a lot of information. Understanding why they released a driver signed with a second certificate will help explain a lot of other curious things in the Stuxnet saga.

It’s easy to see why they signed their drivers the first time. Code signing is designed to prove that a piece of software comes from a known entity (using public key infrastructure) and that the software hasn’t been altered. A software developer obtains a digital certificate from a “trusted authority”. When the software is built, the developer’s private key (the counterpart of the public key embedded in the certificate) is used to “sign” the code, which attaches a cryptographic hash to the software. When the code is executed, that signed hash can be used to verify with great certainty that the code was signed by the holder of that particular certificate and hasn’t changed since it was signed. Because drivers have very privileged access to the host operating system, the most recent releases of Microsoft Windows (Vista, Win7, Win2008, and Win2008 R2) won’t allow the silent installation of unsigned drivers. The Stuxnet attackers put a lot of effort into developing a completely silent infection process. Stuxnet checked which Windows version it was running on and which anti-virus software (if any) was running, and tailored its infection process accordingly. The entire purpose of the Windows components of Stuxnet was to seek out installations of a specific industrial control system and infect them. To achieve that purpose, the Windows components were carefully designed to give infected users no sign that they were under attack.
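To make the mechanics concrete, here is a minimal sketch of that sign-and-verify cycle in Python, using the third-party cryptography library. It illustrates the general public-key idea only, not Microsoft’s Authenticode driver-signing format, and the key pair and data are placeholders.

```python
# Sketch of code signing: hash the code, sign the hash with the developer's
# private key, and let anyone holding the certificate's public key verify it.
# Illustrative only -- real driver signing (Authenticode) layers certificate
# chains, timestamping, and PE-specific hashing on top of this idea.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in for the key pair behind a certificate like Realtek's.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

code = b"contents of the driver being signed"  # stand-in for e.g. mrxnet.sys

# Signing: the private key produces a signature over the code's hash.
signature = private_key.sign(code, padding.PKCS1v15(), hashes.SHA256())

# Verification: proves the code is unmodified and was signed by the
# holder of the private key matching the certificate.
try:
    public_key.verify(signature, code, padding.PKCS1v15(), hashes.SHA256())
    print("signature valid: code unchanged since signing")
except InvalidSignature:
    print("signature invalid: code altered or wrong key")
```

Steal the private key behind a trusted certificate and you can produce signatures that pass this check for any code you like, which is exactly what made the Realtek and JMicron certificates so valuable to the attackers.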

The revocation of the first certificate by VeriSign didn’t change any of that. Windows will happily and silently install drivers with revoked signatures. Believe it or not, there are actually good reasons for Windows to install drivers with revoked signatures. For example, Realtek is an important manufacturer of various components for PCs. If Windows refused to install their drivers after the certificate was withdrawn, there would be a whole lot of unhappy customers.
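One way a verifier can implement that kind of leniency is to keep honoring signatures that were demonstrably made before the certificate was revoked (timestamped signatures make that checkable). The toy policy below sketches the trade-off; it is not Windows’ actual driver-verification logic, and the dates are illustrative.

```python
# Toy revocation policy: a validly signed driver is still accepted as long as
# its (timestamped) signature predates the certificate's revocation. A sketch
# of the trade-off described above, not Windows' real behavior.
from datetime import datetime, timezone
from typing import Optional

def driver_install_allowed(signature_valid: bool,
                           signing_time: datetime,
                           revocation_time: Optional[datetime]) -> bool:
    if not signature_valid:
        return False                        # tampered or unsigned: always refuse
    if revocation_time is None:
        return True                         # certificate never revoked
    return signing_time < revocation_time   # older signatures keep working

# The signed Stuxnet drivers predate VeriSign's July 16, 2010 revocation of the
# Realtek certificate, so a policy like this still installs them silently.
signed = datetime(2010, 1, 25, tzinfo=timezone.utc)   # illustrative signing date
revoked = datetime(2010, 7, 16, tzinfo=timezone.utc)  # revocation date from above
print(driver_install_allowed(True, signed, revoked))  # -> True
```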

The release of a Stuxnet driver signed with a new certificate was very curious for several reasons. As Symantec recently reported [link to large pdf], no one has recovered the delivery mechanism (the Trojan dropper, in antivirus lingo) for this driver. We don’t actually know how the driver showed up on the two machines (one in Kazakhstan and one in Russia) where it was found on July 17, 2010. This is significant because the driver is compiled into the Trojan dropper as a resource. Without a new dropper, there’s no way for that version of the virus to have infected additional computers. And there is no evidence that I’m aware of that Stuxnet with the new driver ever spread to any other machines.
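For readers wondering what “compiled into the Trojan dropper as a resource” means: a Windows executable can carry arbitrary binary blobs in its resource section, and the dropper extracted its driver from there at infection time. Here is a minimal sketch using the third-party pefile library; the file name is a placeholder, not a real sample.

```python
# Sketch: walk the resource directory of a Windows PE file. Stuxnet's dropper
# carried its drivers and payload DLLs as resources like these.
import pefile  # third-party: pip install pefile

pe = pefile.PE("suspected_dropper.exe")  # placeholder file name

if hasattr(pe, "DIRECTORY_ENTRY_RESOURCE"):
    for entry in pe.DIRECTORY_ENTRY_RESOURCE.entries:
        # Resources are identified either by a name or by a numeric id.
        label = entry.name if entry.name is not None else entry.id
        print("resource:", label)
else:
    print("no resource section present")
```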

The release of the newly signed driver did exactly one thing: Increase publicity about Stuxnet. The inescapable conclusion is that the Stuxnet attackers wanted to make headlines in July 2010. As Holmes says in Silver Blaze, “one true inference invariably suggests others”. From this one inference, we can begin to understand the most puzzling parts of the Stuxnet project. Who would publicize their secret cyber attack on an enemy? Why were there clues to the identity of the attackers left in the code? Why did the last version of Stuxnet use multiple 0-day exploits? Why did the attackers only take minimal steps to hide the true nature of the code? The answer to these questions is relatively simple. The Stuxnet project was never intended to stay secret forever. If it had been, there would never have been a new Stuxnet driver in July 2010. That driver helps put all the other pieces in context:  the clues left inside the code (“myrtus”, “guava”, and using May 9, 1979 as a magic value); the aspects of the code that have led various experts to label Stuxnet as amateurish, lame, and low quality; even the leak campaign by the U.S. and Israeli governments to unofficially take credit for Stuxnet. Rather than being mistakes, these were elements of the larger Stuxnet project.

Stuxnet was more than a cyber attack. It was a multi-pronged project. The design of the code supports the overall mission. The mission included a publicity campaign, or as the military and intelligence folks style it, a PSYchological OPeration (PSYOP). Unlike a typical malware attack, Stuxnet had (at least) two distinct phases. Phase 1 required a stealthy cyber attack against the Iranian nuclear program. Phase 2 required that the effects of that cyber attack become widely known while giving the perpetrators plausible deniability. That may seem a little strange at first, but if you put yourself in the shoes of the attackers, the strategy is more than plausible.

In fact, the attackers have explained it all. Take a look back at the story told in the New York Times article on January 15, 2011. According to the NYT, the Stuxnet project started as an alternative to an Israeli airstrike:

Two years ago, when Israel still thought its only solution was a military one and approached Mr. Bush for the bunker-busting bombs and other equipment it believed it would need for an air attack, its officials told the White House that such a strike would set back Iran’s programs by roughly three years. Its request was turned down.

Couple that statement with the reason the article appeared when it did:

In recent days, American officials who spoke on the condition of anonymity have said in interviews that they believe Iran’s setbacks have been underreported.

Imagine that you’re an American policymaker who has to choose between launching a cyber attack and allowing a close ally to launch an actual military attack. If you choose the cyber attack option, how will anyone know that you’ve succeeded? If no one knows that you’ve successfully delayed the Iranian nuclear program, you’ll be vulnerable to right-wing attacks for not doing enough to stop Iran and the pressure to bomb-bomb-bomb Iran will grow. There’s another reason to publicize the attack. If you’re a superpower who starts a cyber war, you have to realize that your country contains a lot of very soft targets. You would want to make a big splash with this malware so that your industrial base starts to take the cyber war seriously. So, from the very beginning, the project included planning for the inevitable discovery and understanding of the Stuxnet malware. Just like the spread of the malware itself, the psyop will be impossible to directly control, but easy enough to steer in the appropriate direction. The attackers likely didn’t know it would be Symantec and Ralph Langner who would start to unravel the exact nature of the Stuxnet malware, but they knew someone would. And they knew they would be able to get the New York Times to print the story they wanted to get out (I’m not demeaning the work of the reporters on this story, but I would hope they realize that there is a reason they aren’t being investigated for publishing a story about our efforts to undermine Iran’s nuclear program and James Risen was).

  1. JTMinIA says:

    This is great…. With that said: PSYOPs, not PYSOPs, although I am often pyssed off at my colleague.

    One question. I’ve read that Aaron Barr had a copy of the Stuxnet code. (Kayla found it when she hacked HBGary Federal.) Was he involved or just trying to learn?

    • WilliamOckham says:

      Almost everybody in the antivirus, code security, and spook communities had a copy. HBGary was apparently doing a little competitor research…

      • JTMinIA says:

        Thanks. That’s what I thought. I mean, think about it: do you really think Aaron Barr could have anything to do with any malware that actually worked?

        • WilliamOckham says:

          I’ve seen that. It’s really hard to figure out how to take some of that stuff. HBGary proper (as opposed to HBGary Federal) seems to have had at least a few pretty good tech folks. What Hoglund proposed to do was interesting, but could they pull something like that off? I have no idea.

          • NMvoiceofreason says:

            Isn’t the work that they were proposing a federal crime?

            Cybercrime

            Federal Criminal Code Related to Computer Intrusions

            * 18 U.S.C. § 1029. Fraud and Related Activity in Connection with Access Devices
            * 18 U.S.C. § 1030. Fraud and Related Activity in Connection with Computers
            * 18 U.S.C. § 1362. Communication Lines, Stations, or Systems
            * 18 U.S.C. § 2510 et seq. Wire and Electronic Communications Interception and Interception of Oral Communications
            * 18 U.S.C. § 2701 et seq. Stored Wire and Electronic Communications and Transactional Records Access
            * 18 U.S.C. § 3121 et seq. Recording of Dialing, Routing, Addressing, and Signaling Information

            Isn’t hiring people to do this a criminal conspiracy?

            • WilliamOckham says:

              It wouldn’t be a crime unless they used it in the U.S. (AFAIK, there are lawyers here who can correct me if necessary). I don’t think writing software in and of itself can be a crime (unless it involves certain kinds of porn). I could be wrong.

          • Rayne says:

            Yeah, no idea — although it’s interesting to note that less than a month after the researcher emailed Hoglund about the rootkit request, the White House email system crashed (Feb. 3, 2011). Quel coinky-dinky.

            Have you also seen this piece about the Finnish-Chinese connection?

            While the author makes a good case for China’s motivation, there are two other motives. Iran is sitting on top of one of the largest deposits of natural gas in the world; the nuclear energy program is intended to allow Iran to sell its fossil fuel deposits for cash while providing electricity to the country. As supplies of oil tighten, natural gas will become far more important as an alternative. Secondly, the Russians have been supplying a substantive part of the nuclear program to Iran for over a decade; tweaking this program would tweak Russia as well.

            And we don’t know what kind of equipment North Korea had in their program or the possible impact Stuxnet might have had on their nuclear development work, right in China’s backyard.

            • WilliamOckham says:

              I think that guy’s theories are nonsense. Why would the U.S. and Israel be taking credit for Stuxnet (anonymously to the NYT) if this was a Chinese operation? We know a lot about Chinese network attacks and this one looks nothing like what we’ve seen before.

              One of the things that rarely gets mentioned is that the U.S. has very little dependence on Siemens PLCs, outside of a bit of oil and gas industry stuff. We are much less vulnerable to the unsophisticated Stuxnet copycats than Europe or China. It was much safer for us to do this than for almost any other nation-state with the capability.

              • Rayne says:

                Are you sure about Siemens’ sales volume of PLC equipment in the U.S.? Sure, we have Allen Bradley and other suppliers to rely on, but I can buy both, and other PLC equipment, through the same distribution networks in the U.S.

                • WilliamOckham says:

                  Sales figures I saw had Siemens at 31% of the global market (largest share), but only about 10% in the U.S. Rockwell (Allen-Bradley) dominates the U.S. market.

                  • Rayne says:

                    Thanks, please post a link if you have something public on that. Rockwell does some distribution of brands other than AB, if I recall correctly.

                    Should point out that Vacon started construction of a plant here in U.S. in Dec. 2008. Many of these firms have numerous points of overlap. Vacon was also founded by former ABB employees — and I’m sure you’ll see ABB’s name come up in that market share report. At one point during the Bush admin, Goldman Sachs held a sizable chunk of Vacon stock, not clear how much they may hold now. GS also issued a Buy statement on Siemens last September; not clear if they took a stake, but if they’re going to make a market why not buy into it?

  2. eCAHNomics says:

    Let me restate your argument in my own words to see if I understand it.

    Phase I was to set back Iran’s nuke program, which it was successful at.

    Phase II was to publicize that success so that Israel wouldn’t launch a military attack, it would shut up U.S. Iran hawks, and it would alert soft U.S. software producers that they could be targets so they’d do something about it.

    Will it succeed at the last part of Phase II?

    • WilliamOckham says:

      The only clarification I would make to that is that the real target in the U.S. was the critical infrastructure folks (pipelines, chemical plants, power plants, etc.) who have tons of SCADA/PLC systems that are highly vulnerable to this sort of attack.

        • WilliamOckham says:

          I don’t think the problem is that they don’t take this stuff seriously. The folks responsible for the security of these types of systems have so few resources and the problems are so large that I think they are overwhelmed.

          • eCAHNomics says:

            Aha. So my speculation at 13 is more generic.

            I’m not surprised to hear that the security folks don’t get the resources they need.

            Part of the problem is the low-probability-high-consequence event, which is usually not handled well.

            If that’s the case, then hackers will turn it into a high-probability-high-consequence event.

            Then we’ll see.

            • marksb says:

              Thanks WO, great post!

              In my experience when the IT or engineers go into executive meetings and ask for a large chunk of change for enhanced security they get a kneejerk reaction of “NO!”. When the engineers go on about the need, the execs fall back to a combination of budget issues and the lack of a history of being hacked.

              No matter that the engineer can paint a vivid picture of the disaster that certainly awaits the unprepared; it ain’t happened yet, so it’s not something to worry about now.

              It’s all about the money, and IT is already seen as a huge cost center. Adding to that budget is counter to what most execs are working toward: slashing IT costs. They really hate IT costs, drives ’em up the wall.

              They or someone else in their industry have to get hit–hard–for them to wake up and smell the coffee.

              (On the other hand, when an engineer is running the company, the security is usually built in to the company’s core code.)

              • eCAHNomics says:

                Thanks for the flavor. I am not surprised at what you say.

                So why are there not more attacks? A small supply of talented hackers?

                • PierceNichols says:

                  Stuxnet was created by a team whose core members are very, very good. The core half a dozen members were drawn from a talent level that probably has less than a hundred members, worldwide. Most of these people are white hats — while most tend toward the anarchist side of things, they aren’t, as a rule, terrorists.

                  Further, the Stuxnet team required extensive intelligence and technical support of excellent quality. That’s what really makes it clear this was a gov’t level job. Private actors don’t have the support required to be so surgically precise.

                  • WilliamOckham says:

                    Good points. My impression of the structure of the code was that it looks a lot more like the output of hard core systems devs than typical hackers.

              • eCAHNomics says:

                BTW, I only ever had one personal interaction on this subject. It had to have been in the early 80s, when I was at Goldman Sachs, but as dated as that experience was, it does illustrate your point. I was an economist. I knew nothing about computer security. We were on remote terminals connected to a CPU, typing 10CPS.

                Some computer security salesman came to the firm to talk about his product. He was foisted off on me. The only way I would have gotten that assignment is if my higher ups thought the whole subject was a waste of time.

                • Larue says:

                  Love yer stories from back in the day, thanks and PLEASE continue to share them . . . they amuse, but enlighten also, to no end . . . and validate. That validate is important to me at times . . . ;-)

                  “I was right, they ARE out to get us!”

                  lol

              • PierceNichols says:

                This was why Eric Butler created Firesheep — to force big internet companies to act on a vulnerability that has been common knowledge for a decade. Suddenly, everyone’s rolling out https everywhere, because that kills Firesheep dead.

                Stuxnet may have a similar effect on industrial plant security. Sniffing 2.4 GHz and 900 MHz in the vicinity of a large chemical plant or refinery can be an eye-opening and terrifying experience. And those guys hate to roll their systems — to the point where DOS is going to outlive Windows b/c of it. It’s completely understandable — downtime at a big industrial operation can cost a megabuck per hour, there’s no sufficiently accurate model system on which to test, and the potential failure modes can be catastrophic.

            • Larue says:

              Part of the problem is the low-probability-high-consequence event, which is usually not handled well.

              If that’s the case, then hackers will turn it into a high-probability-high-consequence event.

              Then we’ll see.

              Yes!

              Same thing permeates most business . . . lack of planning for the worst (while hoping for the best) brings down lots of folks, in the most unsuspected of ways.

              Ya gotta have doomsday dreamers as part of yer team if ya wanna stave off doom . . . PR folks used to fulfill this, as did the communications folks, but they all got hacked up budget wise long ago and only report and plan what’s ordered to be reported and planned.

              LeSigh.

              • eCAHNomics says:

                That’s the other reason why I think Wall St. is so vulnerable. They never plan for the worst case. Only do what they did yesterday, as long as it made money yesterday. Paid NO attention to economic biz cycles (my area).

                Of course, Wall St no longer has any reason to fear from bad biz practices or biz cycles, as they own the USG which will bail them out.

                Computer hackers, mebbe not so much possibility of a bailout.

                • Larue says:

                  Wall St. owns it all I would speculate (in a sense) . . . because the REALLY rich elites use it as a tool to game THEIR rapacious global looting and they let Wall St. skim its share . . . without The Wall, game over for elites.

                  Really a sick n twisted n corrupted system this planet has let itself get into.

                  But I digress . . . my bad.

      • shekissesfrogs says:

        eCahn’s laid out the points, and your answer weakens your theory, IMHO, and reddog says these SCADA/PLC systems are not “highly vulnerable” as you say.

        I can’t see the US Gov. taking the chance of sabotaging our own infrastructure to assuage Israel, and with this theory the goal has turned into a scattershot. I can’t see these risks as side benefits.

        “Rolling out the new driver incurred a substantial cost, and not just in monetary terms”
        How did you come to this conclusion?

        I can see publicizing an attack. The Mossad’s reputation is in the toilet; PR and murder are what they do (not so well).

  3. JTMinIA says:

    If the goal of Phase II is to raise awareness of US vulnerabilities, isn’t it amusing that what Anonymous did to HBGary was probably ten times as effective? Or do we now go for our level-17 tin-foil hats and start asking if Anonymous is really a gov’t operation?

      • NMvoiceofreason says:

        The question, if not NSA, is how the public key was broken – twice.

        Does PKI have a hole we don’t know about?

        • WilliamOckham says:

          The most likely explanation is that an intelligence agency pulled off a black bag job against Realtek and JMicron and stole the two certificates. If I were to run this sort of operation, that’s what I would have done. Most companies keep their digital certificates on removable media and locked up when not in use.

          • jdmckay0 says:

            black bag job against Realtek and JMicron and stole the two certificates.

            Could be… who knows.

            I can also envision other scenarios. For one, Realtek & JMicron both heavy weights, integral to cooperation w/M$ in OS design, procedures etc. Given how interwoven these companies have become w/gov snooping foibles, not so hard for me to imagine a call to Realtek from HOMELAND SECURITY:

            guys, we need a couple of your certificates to protect the American people & prevent a mushroom cloud over NY.

            When I was integrally involved in MS OS’ security, I became convinced they’d built in backdoors for just this kind of stuff.

            Most companies keep their digital certificates on removable media and locked up when not in use.

            The money quote being “when not in use”, which is not often.

            I saw over & over certificate storage strategies which we broke in our own little labs… common place & oft implemented “stuff” done by competent code monkeys who needed to “get the job done”, but really didn’t understand the vulnerabilities at all. Can’t tell u how many times I saw this.

            Schneier’s Applied Cryptography was the beginning point bible for all this, and explicitly discussed this very issue… designing/implementing storage of certificate info, hashes, matching pairs etc. etc. Yet, huge majority of guys in my sector never read the damn thing, let alone followed it to its logical conclusion in order to really secure these things.

            This endeavor was about an 18 mos. sidetrack for me… far far more time/effort than I planned. To do it right, however, there was no choice. Essentially, we had to build all our key storage from scratch.

            I used to tell others in encryption forums who needed/wanted to get something out the door quick, that this particular area was one where a little bit of knowledge can get you in a lot of trouble.
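            [A small illustration of the at-rest protection being discussed here, using Python’s third-party cryptography library: the same signing key serialized in the clear versus wrapped with a passphrase. The key, passphrase, and file name are placeholders; how Realtek or JMicron actually stored their keys is not publicly known.]

```python
# Sketch: a signing key stored in the clear versus one wrapped with a
# passphrase before it ever touches disk or removable media. Key, passphrase,
# and file name are placeholders; hardware tokens/HSMs are better still.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# What too many shops effectively ship: an unencrypted PEM anyone can copy.
plaintext_pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.NoEncryption(),
)

# The same key wrapped with a passphrase before being written out.
encrypted_pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.BestAvailableEncryption(b"long random passphrase"),
)

with open("signing_key.pem", "wb") as f:
    f.write(encrypted_pem)
```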

            • WilliamOckham says:

              I agree with you on the cryptography stuff. I tried to put myself in the shoes of an American policymaker. If you have an experienced black bag team and no real fear of consequences, sending them in is pretty easy. If you are a typical hacker or corporate spy, a network attack is safer and easier.

        • PierceNichols says:

          Breaking PKI would be much harder than just going ahead and stealing the certificates. I bet the reason those two companies got hit is b/c they were careless and someone snarfed the certificates remotely.

    • earlofhuntingdon says:

      The publication would certainly elicit scarce budget resources to deal with a newly publicized problem, resources that might otherwise remain unavailable to the group of contractors that would rush to utilize them. If those vendors are publicly held, or their shares are traded privately, it would be relatively easy to add a stock manipulation scheme to this, and make money in what is essentially a rigged or insider’s game.

      • eCAHNomics says:

        I speculated on a HBGary thread yesterday that one could probably do what anonymous did to any Wall St. firm & bring it down just as easily. My guess (really a guess as I have NO real knowledge) is that all their ‘proprietary’ and generic software has very poor security. I’m guessing that bc I observed for all the years I was there that they have very little interest about anything that won’t make money today, and farm out routine chores to people who do them mechanically.

        • one_outer says:

          I speculated on a HBGary thread yesterday that one could probably do what anonymous did to any Wall St. firm & bring it down just as easily. My guess (really a guess as I have NO real knowledge) is that all their ‘proprietary’ and generic software has very poor security.

          I’ll second that. Last year I temped for some of the MOTU in a brokerage call center. I signed things, so I can’t go into detail. Let’s just say that your suggestion that it would be fairly simple is extraordinarily plausible. They have huge, gaping security holes.

          And it’s not just the computer systems. Since they have no respect for labor, they employ tons of temps at all times. Where I was I’d guess about 3 in 10 of a staff of about 300 are contractors/temps. They come in through a low level headhunting firm. They do a criminal background check, but that’s it. The kind of information available to even the lowest level temp is just unbelievable. Their HR security practices aren’t exactly effective.

          I behaved myself very well while I was there but it’s only a matter of time before there’s a Wall Street/big bank Bradley Manning.

          • eCAHNomics says:

            Good points, all of them. They had fewer temps when I was there (11 years ago), but I’m not surprised to hear that there are more & more temps over that period.

            And your observations square with my intuition.

      • Larue says:

        THAT sums up in toto the depth of the level of corruption in our lives, country, world.

        Love it, thanks . . . . anything can, and is, be/ing gamed.

        The Wall Street has to go . . . it unto itself is the tool for the looting to be done by the elite.

        Thanks again EOH . . .

        N thanks WO, for a GREAT diary full of facts, intrigue, and the little things of life most of us plebian folks never dream of . . .

        *bows*

  4. earlofhuntingdon says:

    (I’m not demeaning the work of the reporters on this story, but I would hope they realize that there is a reason they aren’t being investigated for publishing a story about our efforts to undermine Iran’s nuclear program and James Risen was).

    Officially planted leaks are not leaks, but domestic PsyOps and possibly illegal. Mr. Risen was operating with actual leaks and leakers, relatively powerless actors mightily concerned about the possibly illegal, probably immoral and unethical behavior they witness.

  5. JTMinIA says:

    My understanding is that some parts of the malware world are even worse than stock-manipulators, as in people making malware and then selling the patch to protect you from it. Like McDonalds opening a fat-farm health spa. That sort of thing.

  6. Arbusto says:

    Next step; Iran suing Siemens for merchantability and the US indemnifying them (if indemnify is word I’m looking for). Still cheaper overall than an air strike.

  7. PeasantParty says:

    Mr. Ockham, thank you so much for accepting EW’s invitation and giving us so much info.

    I have one question:

    Do you think the Pentagon security person who was found murdered and thrown in the dump knew of this?

  8. pdaly says:

    WO, nice read.

    Could you elaborate more on the magical date aspect?
    How was the date written? In international order (day/month/year) or in the conventional American order (month/day/year)?
    For example, 5/9/1979: is that May 9 or September 5? How do we know?

    Do American programmers routinely use one or the other date writing convention? How about non-American programmers?

    (BTW, on Sept 5, 1979 The Grateful Dead performed at Madison Square Garden; otherwise not much that I could find happened on this date)

    • WilliamOckham says:

      The malware checked for the existence of a particular key/value pair in the Windows Registry to know if it had already infected a machine. I’ll outsource the description to the Symantec dossier that I linked to earlier:

      If this value is equal to 19790509 the threat will exit. This is thought to be an infection marker or a “do not infect” marker. If this is set correctly infection will not occur. The value may be a random string and represent nothing, but also appears to match the format of date markers used in the threat. As a date, the value may be May 9, 1979. This date could be an arbitrary date, a birth date, or some other significant date. While on May 9, 1979 a variety of historical events occured, according to Wikipedia “Habib Elghanian was executed by a firing squad in Tehran sending shock waves through the closely knit Iranian Jewish community. He was the first Jew and one of the first civilians to be executed by the new Islamic government. This prompted the mass exodus of the once 100,000 member strong Jewish community of Iran which continues to this day.” Symantec cautions readers on drawing any attribution conclusions. Attackers would have the natural desire to implicate another party
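      [A sketch of how an infection marker like that works in practice, Windows-only, using Python’s standard winreg module. The registry key and value names below are placeholders, not the location Stuxnet actually used.]

```python
# Sketch of a "do not infect" marker: if a particular registry value equals
# 19790509, exit without doing anything. The key and value names here are
# placeholders, not Stuxnet's actual location. Windows-only.
import winreg

MARKER_KEY = r"SOFTWARE\ExampleVendor\ExampleComponent"  # hypothetical path
MARKER_NAME = "ConfigFlags"                              # hypothetical value name
DO_NOT_INFECT = 19790509                                 # May 9, 1979 as yyyymmdd

def already_marked() -> bool:
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, MARKER_KEY) as key:
            value, _ = winreg.QueryValueEx(key, MARKER_NAME)
            return value == DO_NOT_INFECT
    except OSError:  # key or value not present
        return False

if already_marked():
    print("marker found: exit and leave the machine alone")
else:
    print("no marker: this is where the infection logic would run")
```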

  9. Mauimom says:

    Is there a link to a discussion providing background on all this?

    Even as a faithful reader of FDL, I don’t know what’s leading up to this.

    • NMvoiceofreason says:

      The previous HBGary items would be helpful: Damn It Feels Good to Be a Gangsta, Protect BOFA from Wikileaks, Protect Chamber of Commerce, etc., etc., etc.

      HBGary | Firedoglake
      Feb 16, 2011 … Reader DL found a very interesting email among the HBGary emails: Chet Uber emailed — after having tried to call — HBGary CEO Gary Hoglund …
      firedoglake.com/tag/hbgary/

      The HBGary Scandal: Using Counterterrorism Tactics on Citizen …
      Feb 14, 2011 … As I described on the Mike Malloy show on Friday and as Brad Friedman discusses in his post on being targeted by the Chamber of Commerce, …
      emptywheel.firedoglake.com/…/the-hbgary-scandal-using-counterterrorism-tactics-on-citizen-activism/

      HB Gary Federal | Firedoglake
      Feb 9, 2011 … On Saturday, private security firm HBGary Federal bragged to the FT that it had discovered who key members of the hacking group Anonymous …
      firedoglake.com/tag/hb-gary-federal/

      HBGary | Emptywheel
      Feb 14, 2011 … One of the more interesting documents on HBGary et al’s partnership with the Chamber of Commerce details the prices they wanted to charge. …
      emptywheel.firedoglake.com/tag/hbgary/

      HBGary Federal | Firedoglake
      Feb 10, 2011 … I noted yesterday the mind-numbingly ignorant analysis of Glenn Greenwald’s motivation as a careerist hack that was provided by HBGary. …
      firedoglake.com/tag/hbgary-federal/

      Chet Uber Contacted HBGary before He Publicized His Role in …
      Feb 16, 2011 … A reader found a very interesting email among the HBGary emails: Chet Uber emailed–after having tried to call–HBGary CEO Greg Hoglund on …
      emptywheel.firedoglake.com/…/chet-uber-contacted-hbgary-before-he-publicized-his-role-in-turning-in-bradley-manning/

      HB Gary | FDL News Desk
      Feb 15, 2011 … Tags: traditional media, Wikileaks, Chamber of Commerce, Bank of America, HB Gary, dirty tricks, social media …
      news.firedoglake.com/tag/hb-gary/

      Security Contractor HBGary Tries to Protect US from Anonymous …
      HBGary Federal, provider of classified cybersecurity services to the Department of Defense, Intelligence Community and other US government agencies, …
      my.firedoglake.com/kgosztola/tag/julian-assange/

      WikiLeaks Documents Show Chamber Paid HBGary to Spy on Unions …
      firedoglake.com/…/wikileaks-documents-show-chamber-paid-hbgary-to-spy-on-unions/

      Emptywheel
      A reader found a very interesting email among the HBGary emails: Chet Uber emailed–after having tried to call–HBGary CEO Greg Hoglund on June 23, 2010. …
      emptywheel.firedoglake.com/

      HB Gary Email about US Chamber and Change to Win | Firedoglake
      firedoglake.com/…/hb-gary-email-about-us-chamber-and-change-to-win/

      Previous Entries – Emptywheel
      Feb 11, 2011 … HBGary Fees: “Dam It Feels Good to Be a Gangsta” … One of the more interesting documents on HBGary et al’s partnership with the Chamber of …
      emptywheel.firedoglake.com/page/2/

      CIA Asks HB Gary About “Interesting and Timely Capabilities …
      firedoglake.com/…/cia-asks-hb-gary-about-interesting-and-timely-capabilities/

      My FDL | kgb999 | Activity
      HBGary said they couldn’t/wouldn’t turn all the information collected on Anon … Did you see the dustup between HB Gary and Anon regarding the former’s …
      my.firedoglake.com/members/kgb999

      HB Gary Email: “Dam It Feels Good to Be a Gangsta” | Firedoglake
      firedoglake.com/…/hb-gary-email-dam-it-feels-good-to-be-a-gangsta/

      My FDL | pajarito | Activity
      pajarito commented on the blog post The HBGary Scandal: Using Counterterrorism Tactics … Little wonder that several agencies flocked to this HBGary hack, …
      my.firedoglake.com/members/pajarito

      HBGary’s Aaron Barr Discusses Data Mining FARC | Firedoglake
      firedoglake.com/documents/hbgarys-aaron-barr-discusses-data-mining-farc/

    • WilliamOckham says:

      If you want a simple introduction to Stuxnet, the NYT article isn’t bad, as long as you understand it is part of the government’s effort to shape the story.

      • Mauimom says:

        Thanks.

        I keep reading “Stuxnet” as “Sux[sucks]net.”

        Coincidence?

        And pdaly@33, thanks to you too.

    • pdaly says:

      and the brief summary about the significance of the stuxnet virus, from wikipedia:

      Different variants of Stuxnet targeted five Iranian organisations, with the probable target widely suspected to be uranium enrichment infrastructure in Iran; Symantec noted in August 2010 that 60% of the infected computers worldwide were in Iran. Siemens stated on November 29 that the worm has not caused any damage to its customers, but the Iran nuclear program, which uses embargoed Siemens equipment procured clandestinely, has been damaged by Stuxnet

    • Mary says:

      Jon Stewart had a guest on last night talking about it – interesting stuff and they touched sideways on some of what WO mentions (re: US soft targets).

  10. by foot says:

    Of botanical interest …

    the clues left inside the code (“myrtus”, “guava”, and using May 9, 1979 as a magic value)

    Guavas are in the botanical family Myrtaceae.

    I’m curious what other silly clues were found.

  11. Mary says:

    Super piece – it showed me a lot, as low tech as I am. There’s never enough from you in comments, WO – I’m really glad to get a chance to read a full piece that can say what you want to say but with enough fill in it for the lower tech readers like me to be able to follow.

  12. dustbunny44 says:

    WO, thank you for this info on the recent history of Stuxnet implementation.
    Is there any consensus on the current status of Stuxnet and/or the centrifuges – is it still effective and being “remotely controlled”, or has it been wiped from the systems, or the hardware/firmware replaced, or the worm replaced by some other trojan software, or -?

    • WilliamOckham says:

      Iran seems to have the situation under control, but they will always be looking over their shoulders.

  13. papau says:

    If the IT can’t be farmed out to India, companies like MetLife do not want to hear about the project (that could reflect the fact that the “new head” of IT at the Met is from India). Indeed, when he first came in, the internal web development was sent to India and was “upgraded” to “2.0”, and when rolled out it had had minimal QC – so that every machine froze with an unreadable screen. Our programmers were reduced in number – and the overall cost was “GREAT” and we only lost a couple of days of the internal web – and who needs all that HR nonsense anyways! – and he got a massive bonus for saving us money. :-)

    The code is interesting of course – but the insertion method is of even more interest. I wonder if we will ever know how it was done (the who does not interest me – but that is just me no doubt).

    • eCAHNomics says:

      Hmmmm. Outsourcing IT to India & firing domestic programmers creates a lot of disgruntled potential hackers.

      • PierceNichols says:

        The people capable of putting together something like Stuxnet all have jobs and field regular headhunter calls. It’s only lower-level guys who are getting pushed out in favor of Indian outsourcers.

  14. Larue says:

    Last comment of thanks to all, EW to WO to all those in comments who shared incredible ‘inside’ info and knowledge with folks like me who are lower tech minded.

    I love the intrigue of international politics mixed with corporate fascist servitude and the ravages of corruption thru it all.

    N you folks have given me lots of validation with regards to the depth of depravity I believe this planet has fallen to (yeah, I know, always been like this).

    *bowstoall*

  15. quanto says:

    William,
    I read an article a week ago on the Stuxnet virus but I can’t seem to find it again so I’ll have to go by memory.
    In it they stated that the virus looked for a certain combination of PLCs that would be used in the centrifuges that Iran was using. Supposedly these are outdated and no one but North Korea would still be using them.

    They surmised the virus was an American/Israeli joint effort because they said the U.S. was able to get the schematics for the Siemens PLCs and Israel had access to the same centrifuges that Iran was using to test the virus on.

    They also stated that they didn’t think the virus was supposed to get out into the wild, figuring the nuclear site network would be a closed system.

    This might have all been speculation, which could be why I cannot seem to find the site again and it got pulled, or there was too much truth to it.

    It would make sense though if they thought the virus would remain in Iran and it got out they would need some way to bring it to attention to everyone without directly pointing a finger at themselves.

  16. reddog says:

    Actually, SCADA networks are not as vulnerable as you might think. Yes, I work (or worked, up until retirement) with power production SCADA systems. In our processes we had only one Siemens PLC and it controlled an ancillary system. Furthermore, the PLC was not even accessed with Siemens software, but by software developed by a third party. All SCADA systems are (and should be) “air gapped” from the internet, and run on a local network with no internet access and no physical wiring in place in order to preclude any accidental connection. No computer, laptop, etc. with internet access is allowed to connect to the SCADA network. No external drive that is plugged into any SCADA network computer is allowed to ever be connected to any network other than the SCADA network, with the exception of two network administrators who may connect a drive for the sole purpose of system updates. This update is recognized as a possible vector for malware so extra precautions are taken, and the awareness of the act as being a possible problem is the first line of defense.

    The short of the story is that the Iranian SCADA administrators had their heads up their a$$es and if proper protocols had been developed and enforced this wouldn’t have happened to them.

    Furthermore, hyping the vulnerabilities of SCADA systems is just that–hype. Properly administered SCADA systems are, from a relative standpoint, quite secure and extremely difficult to penetrate. Not impossible, but with a very low probability of a successful attack.

    • quanto says:

      Reddog,
      One other thing that was mentioned was that the PLC’s password was hard-coded.
      Do you know if the virus would have to crack that in order to infect, and if so, wouldn’t the PLCs be useless now in their system and have to be swapped out to stop a reinfection?

      • reddog says:

        I suspect that is the case and that the guilty intelligence agency had a mole in place. If there is keyboard access it is far more difficult, if not impossible, to fully protect a system. However, proper procedures will at least allow forensic tracking of who plugged in what and when and with critical infrastructure such logging should be done because it is, in itself, a deterrent.

        • PierceNichols says:

          Considering how willing people are to insert strange USB drives into their machines, I don’t see why that’s necessary.

        • PierceNichols says:

          And one more thing — if you can trick someone into inserting a strange USB device into their machine (demonstrably easy), you have keyboard access.

        • PierceNichols says:

          For example, the USB drive on my keychain (a pretty ordinary specimen) could readily hold a 35x12x3mm circuit board in addition to the flash memory, which can reside within the USB connector itself (see, for example, http://www.sharksystems.com/browse/currency-USD/product/130981/).

          You can have an awful lot of fun inside those dimensions with modern semiconductors, especially if you put components on both sides of the PCB and use the smallest possible packages.

        • fritter says:

          Whoever developed Stuxnet had intimate knowledge of the system they were trying to infect. Not just the software that would be running the system, but the system itself. They had to have had the source code for the PLC program at the very least.

          I don’t think logging anything would be very helpful at that point. I don’t think SCADA systems are as robust as you think them to be (you don’t roll out the latest security updates to your SCADA, they are generally far behind), but in this case it wouldn’t have mattered.

    • WilliamOckham says:

      I agree with you that SCADA/PLC networks can be protected. I would just add application whitelisting to what you describe. There are products out now that will prevent any unknown software from executing on your systems. Unfortunately, very many systems don’t take these precautions. Just add to that one of the biggest trends in the industry, wirelessly networked sensors, and you have serious problems.
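      [The whitelisting idea is simple enough to sketch: hash every executable and allow only hashes on an approved list to run. A toy illustration follows; commercial products enforce this in a kernel driver at process-creation time rather than as an after-the-fact check, and the paths and digests are placeholders.]

```python
# Toy sketch of application whitelisting: only binaries whose SHA-256 digest
# appears on an approved list may execute. Real products hook process creation
# in the kernel; this only shows the core check.
import hashlib

APPROVED_HASHES = {
    "0123abcd...",  # placeholder digest gathered when the system was commissioned
}

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def may_execute(path: str) -> bool:
    return sha256_of(path) in APPROVED_HASHES

# Example use (hypothetical path):
# print(may_execute(r"C:\plant\hmi\operator_console.exe"))
```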

      The Stuxnet attackers had excellent intel about their target. They knew USB drives were in use and that was the vector of infection. I would not want to have to defend a network against the U.S. Government. It can be done, but it is a huge PITA. I have friends who are defending networks against the semi-official Chinese hackers. That is NOT easy either.

      • PierceNichols says:

        USB drives are a very useful vector against damn near everyone. In field tests, better than 90% of USB drives dropped at a company site have been inserted into a computer within a week.

        Many people haven’t figured out that you need to turn Autorun off and fucking lock it, but that’s only really protection against script kiddies. If your attacker has access to purloined certificates, there are far subtler attack methods.

    • PierceNichols says:

      Your faith in your procedures is touching… but it’s cargo cult security. For instance, there’s plenty of wireless SCADA systems out there… and the security on them is often atrocious to the point of hard-coded passwords. That’s probably the worst widespread example, but there are plenty of others. Real SCADA systems, especially the big ones, often have gaping holes that industrial controls engineers are completely blind to, because they’ve never learned to think that way. ‘Properly administered’ just doesn’t exist in the wild.

      SCADA systems are not a script-kiddie target; people interested in attacking them are able to bring substantial resources to bear, especially wrt site surveillance and intelligence.

  17. shekissesfrogs says:

    What if there were errors in the first one, so they released a second one? Industrial espionage gone awry?

    http://www.schneier.com/blog/archives/2010/10/stuxnet.html

    By allowing Stuxnet to spread globally, its authors committed collateral damage worldwide. From a foreign policy perspective, that seems dumb. But maybe Stuxnet’s authors didn’t care.

    [Could it be a] worm released by the U.S. military to scare the government into giving it more budget and power over cybersecurity? Nah, that sort of conspiracy is much more common in fiction than in real life.

    One thing is true: no one else seems to matter much to the Israelis but themselves.

  18. Professor Foland says:

    There’s a good reason to want to semi-publicly take credit for bringing down the centrifuges. It changes the calculus of project management risk.

    If you’re the Minister of Getting-Iranian-Centrifuges-to-Work, for several years, your scientists have been coming to you and telling you tales of woe and setbacks; and Ministers-of-Such-Things are generally not very technical. “Well,” you’ve been thinking to yourself, “nobody said this would be easy. We have no choice but to keep plugging away at it, eventually the scientists will figure it out.” But now that you know you’re under attack, you think, “Geez, no matter what we do, this is going to keep happening. Stuxnet today, Son-of-Stuxnet tomorrow.” So the odds are much better that you’ll bring the program to an end.

    If you’re the virus-maker, it’s nice to have them waste a lot of time and resources, but you really do want them to stop eventually. Otherwise, they might actually figure out a way around your viruses.

  19. Jane Hamsher says:

    Thanks so much WO, fascinating.

    In the HBG emails, they are always talking about “attribution” and “misattribution.” Is this what they are talking about?

    • WilliamOckham says:

      In that context, attribution and misattribution are terms of art for discovering and hiding (respectively) who is responsible for some act. Misattribution sounds so much nicer than “false flag” operations, but it is pretty much the same thing.