[Photo: National Security Agency, Ft. Meade, MD via Wikimedia]

On the Kaspersky Hack

When the news first broke that Kaspersky had found NSA’s hacking tools on the computer of a TAO employee working at home, I recalled that Kaspersky had revealed it had gotten hacked in June 2015, right around the time of this breach (and after Kaspersky released a series of reports on US, British, and Israeli spying). Last night, the NYT reported that Israel discovered NSA documents on Kaspersky’s systems while they were hacking the Russian antivirus company.

Israeli intelligence officers informed the N.S.A. that in the course of their Kaspersky hack, they uncovered evidence that Russian government hackers were using Kaspersky’s access to aggressively scan for American government classified programs, and pulling any findings back to Russian intelligence systems. They provided their N.S.A. counterparts with solid evidence of the Kremlin campaign in the form of screenshots and other documentation, according to the people briefed on the events.

The WaPo, matching NYT’s story, has yet another ridiculous explanation for why the TAO employee was working at home (though one that probably gets closer to the truth than the other three given thus far),

“There wasn’t any malice,” said one person familiar with the case, who, like others interviewed, spoke on the condition of anonymity to discuss an ongoing case. “It’s just that he was trying to complete the mission, and he needed the tools to do it.”

But the WaPo also reveals that the National Intelligence Council completed a report last month judging that FSB likely had access to Kaspersky’s source code.

Late last month, the National Intelligence Council completed a classified report that it shared with NATO allies concluding that the FSB had “probable access” to Kaspersky customer databases and source code. That access, it concluded, could help enable cyberattacks against U.S. government, commercial and industrial control networks.

Those scoops have drowned out this one from Cyberscoop, which explained that the US first came to suspect Kaspersky because the FSB told the US to stop snooping on the antivirus firm.

In the first half of 2015, Kaspersky was making aggressive sales pitches to numerous U.S. intelligence and law enforcement agencies, including the FBI and NSA, multiple U.S. officials told CyberScoop. The sales pitch caught officials’ attention inside the FBI’s Counterterrorism Division when Kaspersky representatives boasted they could leverage their product in order to facilitate the capture of targets tied to terrorism in the Middle East. While some were intrigued by the offer, other more technical members of the intelligence community took the pitch to mean that Kaspersky’s anti-virus software could effectively be used as a spying tool, according to current U.S. intelligence officials who received briefings on the matter.

The flirtation between the FBI and Kaspersky went far enough that the bureau began looking closely at the company and interviewing employees in what’s been described by a U.S. intelligence official as “due diligence” after Counterterrorism Division officials viewed Kaspersky’s offerings with interest.

The examination of Kaspersky was immediately noticed in Moscow. In the middle of July 2015, a group of CIA officials were called into a Moscow meeting with officials from the FSB, the successor to the KGB. The message, delivered as a diplomatic démarche, was clear: Do not interfere with Kaspersky.

These stories still are almost certainly revealing just a fraction of the story. All ignore Kaspersky’s reports laying out US and allies’ spying tools (explaining why Israel might hack Kaspersky and share the details, if not the work). And the most logical explanation for the FSB démarche is that Kaspersky — as they said at the time — reported the hack to their relevant law enforcement agency, which is the FSB, who in turn yelled at the CIA.

None of that is to minimize the intrusiveness of Kaspersky’s software. It’s just a reminder that the US does this stuff too, and, like Russia, requires compliance from US-based software companies (though recent court decisions have required compliance on data for the entire globe).

Which is something the NYT admits, but doesn’t detail.

The N.S.A. bans its analysts from using Kaspersky antivirus at the agency, in large part because the agency has exploited antivirus software for its own foreign hacking operations and knows the same technique is used by its adversaries.

Finally, one other thing that could be going on here: all these entities do piggyback hacks on each other, and in fact it’s the first thing most of their tools do when they breach targeted systems — look who else is already there so you can see what they’re stealing and usually take your own copy.

Which means it’s possible that Russia found the NSA files by piggybacking on Israel. Or vice versa. Or, it could be nothing more complex than FSB taking the files it found while it responded to the Kaspersky hack and using them themselves.

None of this yet explains where the Shadow Brokers’ tools came from (though I think the method may be similar). But I’ll return to that later this week.

PureVPN Doesn’t Need to Keep Logs Given How Many Google Keeps

There’s a cyber-stalking case in MA that has a lot of people questioning whether or not VPNs keep serial cyber-stalkers safe from the FBI. In it, Ryan Lin is accused of stalking a former roommate, referred to by the pseudonym Jennifer Smith in the affidavit, as well as conducting some bomb hoaxes and other incidents of stalking (if these accusations are true he’s a total shithole with severe control problems).

Because the affidavit in the case refers to tying Lin’s usage to several VPNs, it has been read to confirm that PureVPN, especially, has been keeping historical logs of users, contrary to its public claims. To be clear: you can never know whether a VPN is honest about keeping logs or not, and simply having a VPN on your computer might provide a means of compromise (sort of like an anti-virus) that makes you more vulnerable. But I don’t think the affidavit, by itself (particularly with a great deal of the evidence in the case still hidden), confirms PureVPN is keeping logs. Rather, I think the account matching described in the affidavit suggests the FBI could have identified which VPNs Lin used via orders to Google, Facebook, and other tech companies, and, using that, obtained a pen register on PureVPN collecting prospective traffic. I don’t think what is shown proves that FBI obtained historical logs (though it doesn’t disprove it either).

One thing to understand about this case is that Lin would have been the suspect right from the start, because his stalking started while he still lived with Smith, and intensified right after his roommates got him evicted. Plus, some of his stalking of Smith and others involved his real social media accounts. That means that, at a very early stage in this investigation, FBI would have been able to get all this information from Google and Facebook, which his victims knew he used.

A. The following information about the customers or subscribers of the Account:
1. Names (including subscriber names, user names, and screen names);
2. Addresses (including mailing addresses, residential addresses, business addresses, and e-mail addresses);
3. Local and long distance telephone connection records;
4. Records of session times and durations, and the temporarily assigned network addresses (such as Internet Protocol (“IP”) addresses) associated with those sessions;
5. Length of service (including start date) and types of service utilized;
6. Telephone or instrument numbers (including MAC addresses);
7. Other subscriber numbers or identities (including temporarily assigned network addresses and registration Internet Protocol (“IP”) addresses (including carrier grade natting addresses or ports)); and
8. Means and source of payment for such service (including any credit card or bank account number) and billing records.

B. All records and other information (not including the contents of communications) relating to the Account, including:
1. Records of user activity for each connection made to or from the Account, including log files; messaging logs; the date, time, length, and method of connections; data transfer volume; user names; and source and destination Internet Protocol addresses;
2. Information about each communication sent or received by the Account, including the date and time of the communication, the method of communication, and the source and destination of the communication (such as source and destination email addresses, IP addresses, and telephone numbers);
3. Records of any accounts registered with the same email address, phone number(s), method(s) of payment, or IP address as [] the accounts listed in Part 1; and Records of any accounts that are linked to either of the accounts listed in Part 1 by machine cookies (meaning all Google user IDs that logged into any Google account by the same machine as [] the accounts in Part 1). [my emphasis]

So very early in the investigation (almost certainly 2016), the FBI would have started obtaining every IP address that Lin was using to access Google and Facebook, and any accounts tied to the IP addresses used to log into his known accounts.

Instagram IDs WAN usage

Now consider the different references to VPNs in the affidavit. First, in February 2017, Lin registered a new Instagram account via WAN Security, one of the three VPNs listed.

February 2017: Lin registers Instagram account via WAN Security, also uses it to send email from [email protected] to local police department

That would mean that from the time FBI learned he used WAN to register with Instagram, the FBI would have known he used that service, and probably would have a very good idea which WAN server he logged into by default.

Gmail ties WAN usage to other pseudonymous accounts

Then, FBI tracked April 2017 activity to connect Lin to an anonymous account at a service called Rover that he used to stalk people.

  • April 14, 2017, 14:55:52: Lin’s Gmail address accessed from IP address tied to WANSecurity server
  • April 14, 2017, 15:06:27: “Ashley Plano,” using [email protected], accessed Rover via same WANSecurity server
  • April 17, 2017, 21:54:25: “Ashley Plano” accesses Rover via Secure Internet server
  • April 17, 2017, 23:19:12: Lin’s Gmail address accessed via same Secure Internet server
  • April 18, 2017, 23:48:28: Lin’s Gmail address accessed via same Secure Internet server
  • April 19, 2017, 00:30:11: Ashley Plano account accessed via same Secure Internet server
  • April 24, 2017 (unspecified times): Lin’s Gmail and [email protected] email account accessed via same Secure Internet server

The WAN Security usage would have been accessible from Lin’s Gmail account (and would have been known since at least February). A subpoena to Rover after reports it was used for stalking would have likewise shown the WAN Security usage and times (assuming their logs are that detailed).

The Secure Internet use would have likewise shown up in his Gmail usage. Matching that to the Rover logs would have been the same process as with the WAN Security usage. And matching Lin’s known Gmail to his (alleged) pseudonymous teleportfx email would have been done by Google itself, matching other accounts accessed by the IP Lin used (though they would have had to weed out other users of the same Secure Internet server).
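
To make that matching process concrete, here is a minimal sketch, in Python, of the kind of cross-service correlation described above: take login records for the known Gmail account and for the pseudonymous Rover account, and flag instances where both hit the same VPN exit IP within a short window. The account names, IP addresses, and timestamps are invented for illustration; they are not drawn from the affidavit.

```python
from datetime import datetime, timedelta

# Hypothetical login records: (account, timestamp, source IP). In the real case
# these would come from 2703(d) returns from Google and a subpoena to Rover;
# the values below are invented for illustration.
gmail_logins = [
    ("lin_gmail", datetime(2017, 4, 14, 14, 55, 52), "198.51.100.7"),
]
rover_logins = [
    ("ashley_plano", datetime(2017, 4, 14, 15, 6, 27), "198.51.100.7"),
]

# IP addresses investigators already associate with WANSecurity / Secure Internet servers.
known_vpn_ips = {"198.51.100.7"}

def correlate(a_logins, b_logins, vpn_ips, window=timedelta(hours=1)):
    """Flag pairs of logins from different accounts that hit the same VPN
    exit IP within a short time window of each other."""
    hits = []
    for a_acct, a_time, a_ip in a_logins:
        for b_acct, b_time, b_ip in b_logins:
            if a_ip == b_ip and a_ip in vpn_ips and abs(a_time - b_time) <= window:
                hits.append((a_acct, b_acct, a_ip, a_time, b_time))
    return hits

for hit in correlate(gmail_logins, rover_logins, known_vpn_ips):
    print(hit)
```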

In other words, this stuff could have come — and almost certainly did — from 2703(d) order returns available with a relevance standard, probably starting months before this activity.

Work computer confirms PureVPN usage, may provide account number

Then there’s this information, tying Lin’s work computer to PureVPN.

July 24, 2017: Lin fired by his unnamed software company employer — he asks, but is denied, to access his work computer to sign out of accounts

August 29, 2017: FBI agents find “Artifacts indicat[ing] that PureVPN, a VPN service that was used repeatedly in the cyberstalking scheme, was installed on the computer.”

What is not mentioned here is whether the “artifact” that showed Lin, like a fucking moron, loaded PureVPN onto his work computer also included him loading his PureVPN account number onto the computer. I think the vagueness here is intentional — both to keep the information from us and from Lin (at least until he signs a protective order). I also think this discussion, while useful for establishing probable cause to search his house, is also a feint. I suspect they already had Lin tied to PureVPN, and probably to a specific account there.

FBI’s not telling when and how they IDed Lin’s PureVPN usage, but Google would have had it

Which leads us to this language, which is the stuff that has everyone wigged out about PureVPN keeping logs.

Further, records from PureVPN show that the same email accounts–Lin’s gmail account and the teleportfx gmail account–were accessed from the same WANSecurity IP address. Significantly, PureVPN was able to determine that their service was accessed by the same customer from two originating IP addresses: the RCN IP address from the home Lin was living in at the time, and the software company where Lin was employed at the time.

[snip]

PureVPN also features prominently in the cyberstalking campaign, and the search of Lin’s workplace computer showed access of PureVPN.

Unlike almost every reference in this affidavit, there’s no date attached to this knowledge. It appears after the work computer language, leaving the impression that the knowledge came after the work computer access. But particularly since FBI alleges Lin used PureVPN for a lot of his stalking, they probably were looking at PureVPN much earlier.

One thing is certain: FBI could have easily IDed a known PureVPN server accessing Lin’s Gmail account and the teleportfx one FBI identified at least as early as April, months before finding PureVPN loaded onto his work computer.

The FBI doesn’t say which victims Lin accessed via PureVPN or when, only that it figured prominently. It does say, however, that PureVPN identified use from both Lin’s home and work addresses.

Most importantly, FBI doesn’t say when they asked PureVPN about all this. Nothing in this affidavit rules out the FBI serving PureVPN with a PRTT to track ongoing usage tied to Lin’s known accounts (rather than historical usage tied to them). Mind you, there’s nothing to rule out historical logs either (as the affidavit also notes, Lin at one point tweeted something indicating knowledge that VPNs will at least keep access information tied to users).

Here’s the thing, though: if you’re using the same Gmail account tied to the same home IP to access three different VPN providers, often on the same day, your VPN usage is going to be identified from Google’s extensive log keeping. It is an open question what the FBI can do with that knowledge once they have it — whether they can only collect prospective information or whether a provider is going to have some useful historical knowledge to share. But the FBI didn’t need historical logs from PureVPN to get to Lin.

[Photo: National Security Agency, Ft. Meade, MD via Wikimedia]

The Conflicting Homework Explanations in Three Kaspersky Stories

There are now three versions of the Kaspersky story from yesterday, reporting that a TAO employee brought files home from work and used them on his laptop running Kaspersky AV, which ultimately led to Russia getting the files. I’m interested in the three different explanations for why he brought the files home.

WSJ says he brought them home “possibly to continue working beyond his normal office hours.”

People familiar with the matter said he is thought to have purposely taken home numerous documents and other materials from NSA headquarters, possibly to continue working beyond his normal office hours.

WaPo (which has been reporting on this guy since last November) says he brought files he was working on to replace ones burned by Snowden.

The employee had taken classified material home to work on it on his computer,

[snip]

The material the employee took included hacking tools he was helping to develop to replace ­others that were considered compromised following the breach of NSA material by former contractor Edward Snowden, said one individual familiar with the matter.

NYT says he brought files home to refer to as he worked on his resume.

Officials believe he took the material home — an egregious violation of agency rules and the law — because he wanted to refer to it as he worked on his résumé

While the WSJ and WaPo stories don’t conflict, they are different, with the poignant detail that NSA lost hacking files even as it tried to replace Snowden ones.

Meanwhile, none of these stories say this guy got any punishment besides removal from his job (from all his jobs? does he still work for the US government?). And while the NYT says prosecutors in Maryland are “handling” his case, they don’t believe he has been charged.

While federal prosecutors in Maryland are handling the case, the agency employee who took the documents home does not appear to have been charged.

But all of these stories go way too easy on this guy, as compared to the way sources would treat any other person (aside from James Cartwright) caught improperly handling classified information. As the WSJ makes clear, Admiral Rogers — not this guy — was supposed to lose his job as a result of this breach.

Then-Defense Secretary Ash Carter and then-Director of National Intelligence James Clapper pushed President Barack Obama to remove Adm. Rogers as NSA head, due in part to the number of data breaches on his watch, according to several officials familiar with the matter.

So I suspect there is a more complex story about why he had these files at home, if that’s in fact what he did.

Remember, NSA’s hackers don’t launch attacks sitting in Fort Meade. They launch the attacks from some other location. Both Shadow Brokers

We find cyber weapons made by creators of stuxnet, duqu, flame. Kaspersky calls Equation Group. We follow Equation Group traffic. We find Equation Group source range. We hack Equation Group. We find many many Equation Group cyber weapons.

And WikiLeaks have said that’s how they got their US hacking files.

Recently, the CIA lost control of the majority of its hacking arsenal including malware, viruses, trojans, weaponized “zero day” exploits, malware remote control systems and associated documentation. This extraordinary collection, which amounts to more than several hundred million lines of code, gives its possessor the entire hacking capacity of the CIA. The archive appears to have been circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive.

In other words, I suspect at least part of this story is an attempt to package this compromise (which is not the Shadow Brokers source, but may be the same method) in a way that doesn’t make the NSA look totally incompetent.

Update: In this thread, Jonathan Nichols points out that the Vulnerabilities Equities Process has a big loophole.

Vulnerabilities identified during the course of federally-sponsored open and unclassified research, whether in the public domain or at a government agency, FFRDC, National Lab, or other company doing work on behalf of the USG need not be put through the process. Information related to such vulnerabilities, however, does require notification to the Executive Secretariat, which shall notify process participants for purposes of general USG awareness.

That is, one way to avoid the VEP process altogether (and therefore potential notice to companies) is to conduct the research and development of these tools on unclassified systems. Which would be an especially big problem if you were running KAV.

Which might also explain why none of the stories explaining how this guy’s files got compromised make sense.

Kaspersky and the Third Major Breach of NSA’s Hacking Tools

The WSJ has a huge scoop that many are taking to explain why the US has banned Kaspersky software.

Some NSA contractor took some files home in (the story says) 2015 and put them on his home computer, where he was running Kaspersky AV. That led Kaspersky to discover the files. That somehow (the story doesn’t say) led hackers working for the Russian state to identify and steal the documents.

Hackers working for the Russian government stole details of how the U.S. penetrates foreign computer networks and defends against cyberattacks after a National Security Agency contractor removed the highly classified material and put it on his home computer, according to multiple people with knowledge of the matter.

The hackers appear to have targeted the contractor after identifying the files through the contractor’s use of a popular antivirus software made by Russia-based Kaspersky Lab, these people said.

The theft, which hasn’t been disclosed, is considered by experts to be one of the most significant security breaches in recent years. It offers a rare glimpse into how the intelligence community thinks Russian intelligence exploits a widely available commercial software product to spy on the U.S.

The incident occurred in 2015 but wasn’t discovered until spring of last year, said the people familiar with the matter.

The stolen material included details about how the NSA penetrates foreign computer networks, the computer code it uses for such spying and how it defends networks inside the U.S., these people said.

Having such information could give the Russian government information on how to protect its own networks, making it more difficult for the NSA to conduct its work. It also could give the Russians methods to infiltrate the networks of the U.S. and other nations, these people said.

Way down in the story, however, is this disclosure: US investigators believe Kaspersky’s AV identified the files, but isn’t sure whether Kaspersky told the Russian government.

U.S. investigators believe the contractor’s use of the software alerted Russian hackers to the presence of files that may have been taken from the NSA, according to people with knowledge of the investigation. Experts said the software, in searching for malicious code, may have found samples of it in the data the contractor removed from the NSA.

But how the antivirus system made that determination is unclear, such as whether Kaspersky technicians programed the software to look for specific parameters that indicated NSA material. Also unclear is whether Kaspersky employees alerted the Russian government to the finding.

Given the timing, it’s worth considering several other details about the dispute between the US and Kaspersky. (This was all written for another post that I’ll return to.)

The roots of Kaspersky’s troubles in 2015

Amid the reporting on Eugene Kaspersky’s potential visit to testify to Congress, Reuters reported the visit would be Kaspersky’s first visit to the US since spring 2015.

Kaspersky told NBC News in July that he was not currently traveling to the United States because he was “worried about some unexpected problems” if he did, citing the “ruined relationship” between Moscow and Washington.

Kaspersky Lab did not immediately respond when asked when its chief executive was last in the United States. A source familiar with U.S. inquiries into the company said he had not been to the United States since spring of 2015.

A link in that Reuters piece suggests Kaspersky’s concern dates back to Reuters reporting from August 2015, based off leaked emails and interviews with former Kaspersky employees, alleging the anti-virus firm used fake files to trick its competitors into blocking legitimate files, all in an effort to expose their theft of Kaspersky’s work. A more recent reporting strand, again based on leaked emails, dates to the same 2009 time period and accuses Kaspersky of working with the FSB (which in Russia handles both spying and cybersecurity — though, ostensibly, that’s how the FBI works here too).

But two events precede that reporting. In June 2015, Kaspersky revealed that it (and a bunch of locales where negotiations over the Iran deal took place) had been infected by Duqu 2.0, a threat related to Stuxnet.

Kaspersky says the attackers became entrenched in its networks some time last year. For what purpose? To siphon intelligence about nation-state attacks the company is investigating—a case of the watchers watching the watchers who are watching them. They also wanted to learn how Kaspersky’s detection software works so they could devise ways to avoid getting caught. Too late, however: Kaspersky found them recently while testing a new product designed to uncover exactly the kind of attack the intruders had launched.

[snip]

Kaspersky is still trying to determine how much data the attackers stole. The thieves, as with the previous Duqu 2011 attack, embedded the purloined data inside blank image files to slip it out, which Raiu says “makes it difficult to estimate the volume of information that was actually transferred.” But at least, he says, it doesn’t appear that the attackers were out to infect Kaspersky customers through its networks or products. Kaspersky claims to have more than 400 million users worldwide.

Which brings us to what the presumed NSA hackers were looking for:

The attackers were primarily interested in Kaspersky’s work on APT nation-state attacks–especially with the Equation Group and Regin campaigns. Regin was a sophisticated spy tool Kaspersky found in the wild last year that was used to hack the Belgian telecom Belgacom and the European Commission. It’s believed to have been developed by the UK’s intelligence agency GCHQ.

The Equation Group is the name Kaspersky gave an attack team behind a suite of different surveillance tools it exposed earlier this year. These tools are believed to be the same ones disclosed in the so-called NSA ANT catalogue published in 2013 by journalists in Germany. The interest in attacks attributed to the NSA and GCHQ is not surprising if indeed the nation behind Duqu 2.0 is Israel.

Kaspersky released its Equation Group whitepaper in February 2015. It released its Regin whitepaper in November 2014.

One thing that I found particularly interesting in the Equation Group whitepaper — in re-reading it after ShadowBrokers released a bunch of Equation Group tools — is that the report offers very little explanation of how Kaspersky was able to find so many samples of the NSA malware that the report makes clear is almost impossible to find. The only explanation is this CD attack.

One such incident involved targeting participants at a scientific conference in Houston. Upon returning home, some of the participants received by mail a copy of the conference proceedings, together with a slideshow including various conference materials. The compromised CD-ROM used “autorun.inf” to execute an installer that began by attempting to escalate privileges using two known EQUATION group exploits. Next, it attempted to run the group’s DOUBLEFANTASY implant and install it onto the victim’s machine. The exact method by which these CDs were interdicted is unknown. We do not believe the conference organizers did this on purpose. At the same time, the super-rare DOUBLEFANTASY malware, together with its installer with two zero-day exploits, don’t end up on a CD by accident.

But none of the rest of the report explains how Kaspersky could have learned so much about NSA’s tools.

We now may have our answer: initial discovery of NSA tools led to further discovery using its AV tools to do precisely what they’re supposed to. If some NSA contractor delivered all that up to Kaspersky, it would explain the breadth of Kaspersky’s knowledge.

It would also explain why NSA would counter-hack Kaspersky using Duqu 2.0, which led to Kaspersky learning more about NSA’s tools.

So to sum up, Eugene Kaspersky’s reluctance to visit the US dates back to a period when 1) Kaspersky’s researchers released detailed analysis of some of NSA and GCHQ’s key tools, which seems to have led to 2) an NSA hack of Kaspersky, which in turn shortly preceded 3) some reporting based off unexplained emails floating accusations of unfair competition dating back to 2009 and earlier.

We now know all that came after Kaspersky found at least some of these tools sitting on some NSA contractor’s home laptop.

This still doesn’t explain how Russian hackers figured out precisely where Kaspersky was getting this information from — which is a real question, but not one the WSJ piece answers.

But reading those reports again, especially the Equation Group one, should make it clear how the Russian government could have discovered that Kaspersky had discovered these tools.

Putin Discovers He Needs to Indict Another Russian Hacker

Back when Russian hacker Yevgeniy Nikulin got arrested in Prague in association with US charges of hacking LinkedIn and Dropbox, Russia quickly delivered up its own, far more minor indictment of him to set off a battle over his extradition. Months later, Nikulin’s legal team publicized a claim that an FBI Agent had discussed a deal with him, related to the hack of the DNC — a claim that is not as nuts as it seems (because a number of the people hacked had passwords exposed in those breaches). Whatever the reason, Russia clearly would like to keep Nikulin out of US custody.

And not long after Russian hacker Alexander Vinnik got detained in Greece related to the BTC-e charges, Russia dug up an indictment for him too. Russia has emphasized crypto-currencies of late, so it’s understandable why they’d want to keep a guy alleged to be an expert at using crypto-currencies to launder money out of US hands.

A more interesting question is why Russia waited so long to manufacture a Russian indictment for Pyotr Levashov, the alleged culprit behind the Kelihos botnet, who is currently facing extradition to the US from Spain. Levashov was detained in April, but Russia only claimed they wanted him, too, a few weeks ago, around the same time Levashov started claiming he had spied on behalf of Putin’s party.

Perhaps it’s harder to manufacture a Russian indictment on someone the state had had no problem with before. Perhaps Russia has just decided this ploy is working and has few downsides. Or perhaps other events — maybe the arrest of Marcus Hutchins in August or the extradition back to the UK of Daniel Kaye in September — have made Levashov’s exposure here in the US even more problematic for Russia.

But I find it really curious that it took five months after Levashov got arrested for the Russians to decide it’d be worth claiming they want to arrest him too.

Update: Spain has approved Levashov’s extradition to the US.

Facebook Anonymously Admits It IDed Guccifer 2.0 in Real Time

The headline of this story focuses on how Obama, in the weeks after the election, nine days before the White House declared the election “free and fair from a cybersecurity perspective,” begged Mark Zuckerberg to take the threat of fake news seriously.

Now huddled in a private room on the sidelines of a meeting of world leaders in Lima, Peru, two months before Trump’s inauguration, Obama made a personal appeal to Zuckerberg to take the threat of fake news and political disinformation seriously. Unless Facebook and the government did more to address the threat, Obama warned, it would only get worse in the next presidential race.

But 26 paragraphs later, WaPo reveals a detail that should totally change the spin of the article: in June, Facebook not only detected APT 28’s involvement in the operation (which I heard at the time), but also informed the FBI about it (which, along with the further details, I didn’t).

It turned out that Facebook, without realizing it, had stumbled into the Russian operation as it was getting underway in June 2016.

At the time, cybersecurity experts at the company were tracking a Russian hacker group known as APT28, or Fancy Bear, which U.S. intelligence officials considered an arm of the Russian military intelligence service, the GRU, according to people familiar with Facebook’s activities.

Members of the Russian hacker group were best known for stealing military plans and data from political targets, so the security experts assumed that they were planning some sort of espionage operation — not a far-reaching disinformation campaign designed to shape the outcome of the U.S. presidential race.

Facebook executives shared with the FBI their suspicions that a Russian espionage operation was in the works, a person familiar with the matter said. An FBI spokesperson had no immediate comment.

Soon thereafter, Facebook’s cyber experts found evidence that members of APT28 were setting up a series of shadowy accounts — including a persona known as Guccifer 2.0 and a Facebook page called DCLeaks — to promote stolen emails and other documents during the presidential race. Facebook officials once again contacted the FBI to share what they had seen.

Like the U.S. government, Facebook didn’t foresee the wave of disinformation that was coming and the political pressure that followed. The company then grappled with a series of hard choices designed to shore up its own systems without impinging on free discourse for its users around the world. [my emphasis]

But the story doesn’t provide the details you would expect from such disclosures.

For example, where did Facebook see Guccifer 2.0? Did Guccifer 2.0 try to set up a Facebook account? Or, as sounds more likely given the description, did he/they use Facebook as a signup for the WordPress site?

More significantly, what did Facebook do with the DC Leaks account, described explicitly?

It seems Facebook identified, and — at least in the case of DC Leaks — shut down an APT 28 attempt to use its infrastructure. And it told FBI about it, at a time when the DNC was withholding its server from the FBI.

This puts this passage from Facebook’s April report, which I’ve pointed to repeatedly, in very different context.

Facebook is not in a position to make definitive attribution to the actors sponsoring this activity. It is important to emphasize that this example case comprises only a subset of overall activities tracked and addressed by our organization during this time period; however our data does not contradict the attribution provided by the U.S. Director of National Intelligence in the report dated January 6, 2017.

In other words, Facebook had reached this conclusion back in June 2016, and told FBI about it, twice.

And then what happened?

Again, I’m sympathetic to the urge to blame Facebook for this election. But this article describes Facebook’s heavy-handed efforts to serve as a wing of the government to police terrorist content, without revealing that Facebook has sometimes erred by censoring content that shouldn’t have been censored. Then, it reveals Facebook reported Guccifer 2.0 and DC Leaks to FBI, twice, with no further description of what FBI did with those leads.

Yet from all that, it headlines Facebook’s insufficient efforts to track down other abuses of the platform.

I’m not sure what the answer is. But it sounds like Facebook was more forthcoming with the FBI about APT 28’s efforts than the DNC was.

How to Read the DHS Targeted States Information

Yesterday, DHS informed the states that had their registration databases targeted by Russian hackers last year. There has been an outright panic about the news since states started revealing they got notice, so I thought it worthwhile to describe what we should take away from the notice and subsequent reporting:

  • “Most” of the 21 targeted states were not successfully hacked
  • Some targeted states were successfully hacked
  • Not all swing states were targeted, not all targeted states are swing states
  • These hacks generally do not involve vote tallying
  • These hacks do not involve hacking voting machines
  • These hacks do not involve other voter suppression methods — whether by GOP or Russians
  • Notice needs to improve

The AP has done good work tracking down which states got notice they were targeted, identifying the 21 targeted states. Those targeted states were:

  1. Alabama
  2. Alaska
  3. Arizona
  4. California
  5. Colorado
  6. Connecticut
  7. Delaware
  8. Florida
  9. Illinois
  10. Iowa
  11. Maryland
  12. Minnesota
  13. North Dakota
  14. Ohio
  15. Oklahoma
  16. Oregon
  17. Pennsylvania
  18. Texas
  19. Virginia
  20. Washington
  21. Wisconsin

 

“Most” of the 21 targeted states were not successfully hacked

This list of 21 states does not mean that Russians successfully hacked 21 states. All it means is Russians probed 21 states. And the AP says “most” were not successful. WI, WA, and MN have said the attacks on them were not successful.

Thus, for “most” of these states, the impact is the same as the reports that Russians were attempting, unsuccessfully, to phish engineers in the energy industry: it is cause for concern, but unless new intelligence becomes available, it means these probes could not have affected the election in those states.

Some targeted states were successfully hacked

Of course, by saying that “most” attacks were not successful, you’re admitting that “some” were. We only know IL and AZ to have successfully been breached.

This means this story may not be done yet: reporters, especially state-based ones, are going to have to get their voting officials to provide details about the attacks, and it may take some FOIA work.

Mind you, a successful hack still doesn’t mean that the election was affected (as I believe to be the understanding with respect to AZ, though there is more dispute about IL). It might be that the hackers just succeeded in getting into the database. It may be that they succeeded only in downloading the voter registration database — which, in many states, is readily available, and which is nowhere near the most interesting available data for targeting in any case.

In my opinion, the most effective way to affect the outcome of the election via voter registration databases is not to download and use it for targeting, but instead, to alter the database, selectively eliminating or voiding the registration of voters in targeted precincts (which of course means the hackers would need to come in with some notion of targets). Even changing addresses would have the effect of creating lines at the polls.

Altering the database would have the same effect as an existing GOP tactic does. In many states, GOP secretaries of state very aggressively purge infrequent voters. Particularly for transient voters (especially students, but poorer voters are also more likely to move from year to year), a voter may not get notice they’ve been purged. This has the effect of ensuring that the purged voter cannot vote, and also has the effect of slowing the voting process for voters who are registered.  In other words, that’s the big risk here — that hackers will do things to make it impossible for some voters to vote, and harder for others to do so.

Not all swing states were targeted, not all targeted states are swing states

The list of targeted states is very curious. Some targeted states are obvious swing states — WI, PA, FL, and VA were four of the five states where the election was decided. But MI is not on there, and NC, another close state, is not either.

In addition, a lot of these states are solidly red, like AL and OK. A lot of them are equally solidly blue, like CA and CT. So if the Russians had a grand scheme here, it was not (just) to flip swing states.

These hacks generally do not involve vote tallying

DHS has said that these hacks do not involve vote tallying. That means these disclosed probes, even assuming they were successful, are not going to explain what may seem to be abnormalities in particular states’ tallies.

These hacks do not involve hacking voting machines

Nor do these hacks involve hacking voting machines (which is covered, in any case, by the denial that it involves vote tallying).

Yes, voting machines are incredibly vulnerable. Yes, it would be child’s play for a hacker — Russian or American — to hack individual voting machines. With limited exceptions, there has been no real assessment of whether individual machines got hacked (though it’d generally be easier to affect a local race that way than the presidential).

These hacks do not involve other voter suppression methods — whether by GOP or Russians

This list of 21 targeted states does not represent the known universe of Russian voting-related hacking.

It does not, for example, include the targeting of voting infrastructure contractors, such as VR Systems (which Reality Winner faces prison for disclosing). There’s good reason to at least suspect that the VR Systems hack may have affected NC’s outcome by causing the most Democratic counties to shift to paper voting books, resulting in confusion and delays in those counties that didn’t exist in more Republican ones.

And they don’t include any Russian social media-related support or suppression, which we’re getting closer to having proof of right now.

Importantly, don’t forget that we know Republicans were engaging in all these techniques as well, with far better funding. Russians didn’t need to hack WI and NC given how much organized suppression of voters of color took place. Republican secretaries of state had the power to purge voters on trumped up excuses without engaging in any hacking.

Do not let the focus on Russian tampering distract from the far more effective Republican suppression.

Notice needs to be improved

Finally, the other big story about this is that some states only got notice they were targeted yesterday, some even after having partnered with DHS to assess their voting infrastructure.

DHS has used classification, in part, to justify this silence, which is an issue the Intelligence Committees are trying to address in next year’s authorization. But that’s particularly hard to justify given that many of these same states have run elections since.

Mind you, we’re likely to see this debate move to the next level — to demanding that state officials disclose full details about their state’s infrastructure to citizens.

In any case, if we’re to be able to use democratic pressure to ensure the infrastructure of democracy gets better protected, we’re going to need more notice.

Can Congress — or Robert Mueller — Order Facebook to Direct Its Machine Learning?

The other day I pointed out that two articles (WSJ, CNN) — both of which infer that Robert Mueller obtained a probable cause search warrant on Facebook based off an interpretation that under Facebook’s privacy policy a warrant would be required — actually ignored two other possibilities. Without something stronger than inference, then, these articles do not prove Mueller got a search warrant (particularly given that both miss the logical step of proving that the things Facebook shared with Mueller count as content and not business records).

In response to that and to this column arguing that Facebook should provide more information, some of the smartest surveillance lawyers in the country discussed what kind of legal process would be required, but were unable to come to any conclusions.

Last night, WaPo published a story that made it clear Congress wanted far more than WSJ and CNN had suggested (which largely fell under the category of business records and the ads posted to targets, the latter of which Congress had been able to see but not keep). What Congress is really after is details about the machine learning Facebook used to identify the malicious activity identified in April and the ads described in its most recent report, to test whether Facebook’s study was thorough enough.

A 13-page “white paper” that Facebook published in April drew from this fuller internal report but left out critical details about how the Russian operation worked and how Facebook discovered it, according to people briefed on its contents.

Investigators believe the company has not fully examined all potential ways that Russians could have manipulated Facebook’s sprawling social media platform.

[snip]

Congressional investigators are questioning whether the Facebook review that yielded those findings was sufficiently thorough.

They said some of the ad purchases that Facebook has unearthed so far had obvious Russian fingerprints, including Russian addresses and payments made in rubles, the Russian currency.

Investigators are pushing Facebook to use its powerful data-crunching ability to track relationships among accounts and ad purchases that may not be as obvious, with the goal of potentially detecting subtle patterns of behavior and content shared by several Facebook users or advertisers.

Such connections — if they exist and can be discovered — might make clear the nature and reach of the Russian propaganda campaign and whether there was collusion between foreign and domestic political actors. Investigators also are pushing for fuller answers from Google and Twitter, both of which may have been targets of Russian propaganda efforts during the 2016 campaign, according to several independent researchers and Hill investigators.

“The internal analysis Facebook has done [on Russian ads] has been very helpful, but we need to know if it’s complete,” Schiff said. “I don’t think Facebook fully knows the answer yet.”

[snip]

In the white paper, Facebook noted new techniques the company had adopted to trace propaganda and disinformation.

Facebook said it was using a data-mining technique known as machine learning to detect patterns of suspicious behavior. The company said its systems could detect “repeated posting of the same content” or huge spikes in the volume of content created as signals of attempts to manipulate the platform.
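
For a sense of what signals like these amount to in practice, here is a minimal sketch of the two heuristics the white paper names: repeated posting of identical content and spikes in posting volume. The accounts, posts, and thresholds are invented for illustration and are not Facebook’s actual implementation.

```python
import hashlib
from collections import Counter, defaultdict

# Hypothetical feed of (account_id, hour_bucket, post_text) tuples, invented for illustration.
posts = [
    ("acct_a", "2016-06-14T10", "Vote for candidate X! Share this now."),
    ("acct_b", "2016-06-14T10", "Vote for candidate X! Share this now."),
    ("acct_c", "2016-06-14T11", "Vote for candidate X! Share this now."),
    ("acct_a", "2016-06-14T11", "An unrelated post."),
]

def fingerprint(text):
    """Hash normalized post text so identical content collapses to one key."""
    return hashlib.sha256(text.strip().lower().encode()).hexdigest()

# Signal 1: the same content posted by many distinct accounts.
accounts_per_content = defaultdict(set)
for account, _, text in posts:
    accounts_per_content[fingerprint(text)].add(account)
repeated = {h: accts for h, accts in accounts_per_content.items() if len(accts) >= 3}

# Signal 2: hours whose posting volume far exceeds a simple average baseline.
volume_per_hour = Counter(hour for _, hour, _ in posts)
baseline = sum(volume_per_hour.values()) / len(volume_per_hour)
spikes = {hour: n for hour, n in volume_per_hour.items() if n > 2 * baseline}

print(repeated)
print(spikes)
```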

The push to do more — led largely by Adam Schiff and Mark Warner (both of whom have gotten ahead of the evidence at times in their respective studies) — is totally understandable. We need to know how malicious foreign actors manipulate the social media headquartered in Schiff’s home state to sway elections. That’s presumably why Facebook voluntarily conducted the study of ads in response to cajoling from Warner.

But the demands they’re making are also fairly breathtaking. They’re demanding that Facebook use its own intelligence resources to respond to the questions posed by Congress. They’re also demanding that Facebook reveal those resources to the public.

Now, I’d be surprised (pleasantly) if either Schiff or Warner made such detailed demands of the NSA. Hell, Congress can’t even get NSA to count how many Americans are swept up under Section 702, and that takes far less bulk analysis than Facebook appears to have conducted. And Schiff and Warner surely would never demand that NSA reveal the extent of machine learning techniques that it uses on bulk data, even though that, too, has implications for privacy and democracy (America’s and other countries’). And yet they’re asking Facebook to do just that.

And consider how two laws might offer guidelines, but (in my opinion) fall far short of authorizing such a request.

There’s Section 702, which permits the government to oblige providers to provide certain data on foreign intelligence targets. Section 702’s minimization procedures even permit Congress to obtain data collected by the NSA for their oversight purposes.

Certainly, the Russian (and now Macedonian and Belarus) troll farms Congress wants investigated fall squarely under the definition of permissible targets under the Foreign Government certificate. But there’s no public record of NSA making a request as breathtaking as this one, that Facebook (or any other provider) use its own intelligence resources to answer questions the government wants answered. While the NSA does draw from far more data than most people understand (including, probably, providers’ own algorithms about individually targeted accounts), the most sweeping request we know of involves Yahoo scanning all its email servers for a signature.

Then there’s CISA, which permits providers to voluntarily share cyber threat indicators with the federal government, using these definitions:

(A) IN GENERAL.—Except as provided in subparagraph (B), the term “cybersecurity threat” means an action, not protected by the First Amendment to the Constitution of the United States, on or through an information system that may result in an unauthorized effort to adversely impact the security, availability, confidentiality, or integrity of an information system or information that is stored on, processed by, or transiting an information system.

(B) EXCLUSION.—The term “cybersecurity threat” does not include any action that solely involves a violation of a consumer term of service or a consumer licensing agreement.

(6) CYBER THREAT INDICATOR.—The term “cyber threat indicator” means information that is necessary to describe or identify—

(A) malicious reconnaissance, including anomalous patterns of communications that appear to be transmitted for the purpose of gathering technical information related to a cybersecurity threat or security vulnerability;

(B) a method of defeating a security control or exploitation of a security vulnerability;

(C) a security vulnerability, including anomalous activity that appears to indicate the existence of a security vulnerability;

(D) a method of causing a user with legitimate access to an information system or information that is stored on, processed by, or transiting an information system to unwittingly enable the defeat of a security control or exploitation of a security vulnerability;

(E) malicious cyber command and control;

(F) the actual or potential harm caused by an incident, including a description of the information exfiltrated as a result of a particular cybersecurity threat;

(G) any other attribute of a cybersecurity threat, if disclosure of such attribute is not otherwise prohibited by law; or

(H) any combination thereof.

Since January, discussions of Russian tampering have certainly collapsed Russia’s efforts on social media with their various hacks. Certainly, Russian abuse of social media has been treated as exploiting a vulnerability. But none of this language defining a cyber threat indicator envisions the malicious use of legitimate ad systems.

Plus, CISA is entirely voluntary. While Facebook thus far has seemed willing to be cajoled into doing these studies, that willingness might change quickly if they had to expose their sources and methods, just as NSA clams up every time you ask about their sources and methods.

Moreover, unlike the sharing provisions in 702 minimization procedures, I’m aware of no language in CISA that permits sharing of this information with Congress.

Mind you, part of the problem may be that we’ve got global companies that have sources and methods that are as sophisticated as those of most nation-states. And, inadequate as they are, Facebook is hypothetically subject to more controls than nation-state intelligence agencies because of Europe’s data privacy laws.

All that said, let’s be aware of what Schiff and Warner are asking for, however justified it may be from an investigative standpoint. They’re asking for things from Facebook that they, NSA’s overseers, have been unable to ask of NSA.

If we’re going to demand transparency on sources and methods, perhaps we should demand it all around?

[Photo: National Security Agency, Ft. Meade, MD via Wikimedia]

The Domestic Communications NSA Won’t Reveal Are Almost Certainly Obscured Location Communications

The other day, I laid out the continuing fight between Director of National Intelligence Dan Coats and Senator Ron Wyden over the former’s unwillingness to explain why he can’t answer the question, “Can the government use FISA Act Section 702 to collect communications it knows are entirely domestic?” in unclassified form. As I noted, Coats is parsing the difference between “intentionally acquir[ing] any communication as to which the sender and all intended recipients are known at the time of acquisition to be located in the United States,” which Section 702 prohibits, and “collect[ing] communications [the government] knows are entirely domestic,” which this exchange and Wyden’s long history of calling out such things clearly indicate the government does.

As I noted, the earlier iteration of this debate took place in early June. Since then, we’ve gotten two sets of documents that all but prove that the entirely domestic communications the NSA refuses to tell us about are ones that obscure their location, probably via Tor or VPNs.

Most Entirely Domestic Communications Collected Via Upstream Surveillance in 2011 Obscured Their Location

The first set of documents covers the 2011 discussion about upstream collection, liberated just recently by Charlie Savage. It shows that in the September 7, 2011 hearing, John Bates told the government that he believed the collection of discrete communications the government had not examined in their sampling might also contain “about” communications that were entirely domestic. (PDF 113)

We also have this other category, in your random sampling, again, that is 9/10ths of the random sampling that was set aside as being discrete communications — 45,000 out of the 50,0000 — as to which our questioning has indicataed we have a concern that some of the about communications may actually have wholly domestic communications.

And I don’t think that you’ve really assessed that, either theoretically or by any actual examination of those particular transactions or communications. And I’m not indicating to you what I expect you to do, but I do have this concern that there are a fair number of wholly domestic communications in that category, and there’s nothing–you really haven’t had an opportunity to address that, but there’s nothing that has been said to date that would dissuade me from that conclusion. So I’m looking there for some convincing, if you will, assessment of why there are not wholly domestic communications with that body which is 9/10s of the random sample.

In a filing submitted two days later, the government tried to explain away the possibility this would include (many) domestic communications. (The discussion responding to this question starts at PDF 120.) First, the NSA used technical means to determine that 41,272 of the 45,359 communications in the sample were not entirely domestic. That left 4,087 communications, which the NSA was able to analyze in just 48 hours. Of those, the NSA found just 25 that were not to or from a tasked selector (meaning they were “abouts” or correlated identities, described as “potentially alternate accounts/addresses/identifiers for current NSA targets” in footnote 7, which may be the first public confirmation that NSA collects on correlated identifiers). NSA then did the same kind of analysis on these 25 communications that it does as part of its pre-tasking determination that a target is located outside the US. That analysis focused entirely on location data.

Notably, none of the reviewed transactions featured an account/address/identifier that resolved to the United States. Further, each of the 25 communications contained location information for at least one account/address/identifier such that NSA’s analysts were able assess [sic] that at least one communicant for each of these 25 communications was located outside of the United States. (PDF 121)

Note that the government here (finally) drops the charade that these are simply emails, discussing three kinds of collection: accounts (which could be both email and messenger accounts), addresses (which, having excluded accounts, would significantly include IP addresses), and identifiers. And they say that having identified an overseas location for the communication, NSA treats it as an overseas communication.

The next paragraph is even more remarkable. Rather than doing more analysis on just those 25 communications, it effectively argues that because latency is bad, it’s safe to assume that any service that is available entirely within the US will be delivered to an American entirely within the US, and so those 25 communications must not be American.

Given the United States’ status as the “world’s premier electronic communications hub,” and further based on NSA’s knowledge of Internet routing patterns, the Government has already asserted that “the vast majority of communications between persons located in the United States are not routed through servers outside the United Staes.” See the Government’s June 1, 2011 Submission at 11. As a practical matter, it is a common business practice for Internet and web service providers alike to attempt to deliver their customers the best user experience possible by reducing latency and increasing capacity. Latency is determined in part by the geographical distance between the user and the server, thus, providers frequently host their services on servers close to their users, and users are frequently directed to the servers closest to them. While such practices are not absolute in any respect and are wholly contingent on potentially dynamic practices of particular service providers and users,9 if all parties to a communication are located in the United States and the required services are available in the United States, in most instances those communications will be routed by service providers through infrastructure wholly within the United States.

Amid a bunch of redactions (including footnote 9, which is around 16 lines long and entirely redacted), the government then claims that its IP filters would ensure it wouldn’t pick up any of the entirely domestic exceptions to what I’ll call its “avoidance of latency” assumption, and so these 25 communications are no biggie from a Fourth Amendment perspective.

Of course, the entirety of this unredacted discussion presumes that all consumers will be working with providers whose goal is to avoid latency. None of the unredacted discussion admits that some consumers choose to accept added latency in order to obscure their location by routing their traffic through one (VPN) or multiple (Tor) servers distant from their location, including servers located overseas.
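To see why that matters, here is a minimal sketch, in Python, of how a location check keyed to observed IP addresses misfires once a user routes through a distant server. The filter logic, addresses, and geolocation table are all my own illustrative assumptions, not anything disclosed in the filings.

```python
# A toy illustration (my assumption of how an IP-based location filter might work,
# not anything described in the filings) of why routing through a distant server
# defeats a "no wholly domestic communications" check. All addresses are from
# documentation ranges and the geolocation table is invented.

ILLUSTRATIVE_GEO = {
    "203.0.113.5": "DE",   # pretend this is a Tor exit node in Germany
    "198.51.100.7": "US",  # pretend this is a US residential connection
}

def geolocate(ip):
    """Stand-in for a GeoIP lookup."""
    return ILLUSTRATIVE_GEO.get(ip, "UNKNOWN")

def looks_foreign(observed_endpoints):
    """Treat a communication as foreign if any observed endpoint resolves outside the US."""
    return any(geolocate(ip) != "US" for ip in observed_endpoints)

# Two people in the US talking to each other, but one side's traffic exits Tor abroad.
# The collection point sees only the exit node's address, so this wholly domestic
# communication is judged foreign.
print(looks_foreign(["203.0.113.5", "198.51.100.7"]))  # True
```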

For what it’s worth, I think the estimate Bates did on his own in 2011 to come up with a number for these SCTs was high. He guessed there would be 46,000 entirely domestic communications collected each year; by my admittedly rusty math, it appears it would be closer to 12,000 (25 out of the roughly 50,000 comms in the sample is .05%; .05% of the 11,925,000 upstream transactions in that six-month period is 5,962, times 2 is roughly 12,000 a year). Still, that was a bigger share of the entirely domestic upstream collection than the communications collected as MCTs, and all those entirely domestic communications have been improperly back door searched in the interim.
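For those who want to check it, here is that admittedly rusty math spelled out, using the figures cited above (the two-times multiplier simply annualizes the six-month reporting period).

```python
# Back-of-envelope annualization of the 2011 sample results cited above.
domestic_in_sample = 25           # communications NSA could not tie to a tasked selector
sample_size = 50_000              # approximate sample size
upstream_six_months = 11_925_000  # upstream transactions in the six-month period

rate = domestic_in_sample / sample_size       # 0.0005, i.e. .05%
per_six_months = rate * upstream_six_months   # about 5,962
per_year = 2 * per_six_months                 # about 11,925, i.e. roughly 12,000

print(f"{per_six_months:.0f} per six months, {per_year:.0f} per year")
# 5962 per six months, 11925 per year -- versus Bates' own guess of 46,000
```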

Collyer claims to have ended “about” collection but admits upstream will still collect entirely domestic communications

Now, if that analysis done in 2011 were applicable to today’s collection, there shouldn’t be a way for the NSA to collect entirely domestic communications today. That’s because all of those 25 potentially domestic comms were described as “about” collection. Rosemary Collyer has, according to her (in my opinion) imperfect understanding of upstream collection, shut down “about” collection. So that should have eliminated the possibility of entirely domestic collection via upstream, right?

Nope.

As she admits in her opinion, it will still be possible for the NSA to “acquire an MCT” (that is, bundled collection) “that contains a domestic communication.”

So there must be something that has changed since 2011 that would lead NSA to collect entirely domestic communications even if that communication didn’t include an “about” selector.

In 2014 Collyer enforced a practice that would expose Americans to 702 collection

Which brings me back to the practice approved in 2014 in which, according to providers newly targeted under the practice, “the communications of U.S. person will be collected as part of such surveillance.”

As I laid out in this post, in 2014 Thomas Hogan approved a change in the targeting procedures. Previously, all users of a targeted facility had to be foreign for it to qualify as a foreign target. But under some “limited” exception, Hogan for the first time permitted the NSA to keep collecting on a facility even if Americans used that facility along with the foreign targets.

The first revision to the NSA Targeting Procedures concerns who will be regarded as a “target” of acquisition or a “user” of a tasked facility for purposes of those procedures. As a general rule, and without exception under the NSA targeting procedures now in effect, any user of a tasked facility is regarded as a person targeted for acquisition. This approach has sometimes resulted in NSA’s becoming obligated to detask a selector when it learns that [redacted]

The relevant revision would permit continued acquisition for such a facility.

It appears that Hogan agreed it would be adequate to weed out American communications after collection in post-task analysis.

Some months after this change, some providers got some directives (apparently spanning all three known certificates), and challenged them, though of course Collyer didn’t permit them to read the Hogan opinion approving the change.

Here’s some of what Collyer’s opinion enforcing the directives revealed about the practice.

Collyer’s opinion includes more of the provider’s arguments than the Reply did. It describes the Directives as involving “surveillance conducted on the servers of a U.S.-based provider” in which “the communications of U.S. person will be collected as part of such surveillance.” (29) It says [in Collyer’s words] that the provider “believes that the government will unreasonably intrude on the privacy interests of United States persons and persons in the United States [redacted] because the government will regularly acquire, store, and use their private communications and related information without a foreign intelligence or law enforcement justification.” (32-3) It notes that the provider argued there would be “a heightened risk of error” in tasking its customers. (12) The provider argued something about the targeting and minimization procedures “render[ed] the directives invalid as applied to its service.” (16) The provider also raised concerns that because the NSA “minimization procedures [] do not require the government to immediately delete such information[, they] do not adequately protect United States person.” (26)

[snip]

Collyer, too, says a few interesting things about the proposed surveillance. For example, she refers to a selector as an “electronic communications account” as distinct from an email — a rare public admission from the FISC that 702 targets things beyond just emails. And she treats these Directives as an “expansion of 702 acquisitions” to some new provider or technology.

Now, there’s no reason to believe this provider was involved in upstream collection. Clearly, they’re being asked to provide data from their own servers, not from the telecom backbone (in fact, I wonder whether this new practice is why NSA has renamed “PRISM” “downstream” collection).

But we know two things. First, the discrete domestic communications that got sucked up in upstream collection in 2011 appear to have obscured their location. Second, there is now a means of collecting bundles of communications via upstream collection (assuming Collyer’s use of MCT here is correct, which it might not be) such that even communications involving no “about” selector would be swept up.

Again, the evidence is still circumstantial, but there is increasing evidence that in 2014 the NSA got approval to collect on servers that obscure location, and that this is the remaining kind of collection (which might exist under both upstream and downstream collection) that will knowingly sweep up US person communications under Section 702. That’s the collection, it seems likely, that Coats doesn’t want to admit.

The problems with permitting collection on location-obscured Americans

If I’m right about this, then there are three really big problems with this practice.

First, in 2011, location-obscuring servers would not themselves be targeted. Communications using such servers would only be collected (if the NSA’s response to Bates is to be believed) if they included an “about” selector.

But it appears there is now some collection that specifically targets those location-obscuring servers, and knowingly collects US person communications along with whatever else the government is after. If that’s right, then it will affect far more than just 12,000 people a year.

That’s especially true given that a lot more people are using location-obscuring servers now than on October 3, 2011, when Bates issued his opinion. Tor usage in the US has gone from around 150,000 mean daily users to around 430,000.

And that’s just Tor. While fewer VPN users will consistently route through overseas servers, sometimes they will do so for efficacy reasons and sometimes to access content that is unavailable in the US (like decent Olympics coverage).

In neither of Collyer’s opinions did she ask for the kind of numerical counts of people affected that Bates asked for in 2011. If 430,000 Americans a day are being exposed to this collection under the 2014 change, it represents a far bigger problem than the one Bates called a Fourth Amendment violation in 2011.

Finally, and perhaps most importantly, Collyer newly permitted back door searches on upstream collection, even though she knew that (for some reason) it would still collect US person communications. So not only could the NSA collect and hold location-obscured US person communications, but those communications might be accessed (if they’re not encrypted) via back door searches that (with Attorney General approval) don’t require a FISA order (though Americans back door searched by NSA are often covered by FISA orders).

In other words, if I’m right about this, the NSA can use 702 to collect on Americans. And the NSA will be permitted to keep what it finds (on a communication-by-communication basis) if the communications fall under any of the four exceptions to the destruction requirement.

The government is, once again, fighting Congressional efforts to provide a count of how many Americans are getting sucked up in 702 (even though the documents liberated by Savage reveal that such a count wouldn’t take as long as the government keeps claiming). If any of this speculation is correct, it would explain the reluctance. Because once the NSA admits how much US person data it is collecting, that collection becomes illegal under John Bates’ 2010 PRTT order.

The (Thus Far) Flimsy Case for Republican Cooperation on Russian Targeting

A number of credulous people are reading this article this morning and sharing it, claiming it is a smoking gun supporting the case that Republicans helped the Russians target their social media, in spite of this line, six paragraphs in.

No evidence has emerged to link Kushner, Cambridge Analytica, or Manafort to the Russian election-meddling enterprise;

Not only is there not yet evidence supporting the claim that Republican party apparatchiks helped Russians target their social media activity, not only does the evidence thus far raise real questions about the efficacy of what Russia did (though that will likely change, especially once we learn more about other platforms), but folks arguing for assistance are ignoring already-public evidence and far more obvious means by which assistance might be obtained.

Don’t get me wrong. I’m acutely interested in the role of Cambridge Analytica, the micro-targeting company that melds Robert Mercer’s money with Facebook’s privatized spying (and was doing so before it was fashionable). I first focused back in May on Jared Kushner’s role in that process, which people are gleefully discovering now. I have repeatedly said that Facebook — which has been forthcoming about analyzing and sharing (small parts of) its data — and Twitter — which has been less forthcoming — and Google — which is still channeling Sergeant Schultz — should be more transparent and have independent experts review their methodology. I’ve also been pointing out, longer than most, the import of concentration among social media giants as a key vulnerability Russia exploited. I’m particularly interested in whether Russian operatives manipulated influencers — on Twitter, but especially on 4Chan — to magnify anti-Hillary hostility. We may find a lot of evidence that Russia had a big impact on the US election via social media.

But we don’t have that yet and people shooting off their baby cannons over the evidence before us and over mistaken interpretations about how Robert Mueller might get Facebook data are simply degrading the entire concept of evidence.

The first problem with these arguments is an issue of scale. I know a slew of articles have been written about how far $100K spent on Facebook ads goes. Only one I saw dealt with scale, and even that didn’t do so by examining the full scale of what got spent in the election.

Hillary Clinton spent a billion dollars on losing last year. Of that billion, she spent tens of millions paying a 100-person digital media team and another $1 million to pay David Brock to harass people attacking Hillary on social media (see this and this for more on her digital team). And while you can — and I do, vociferously — argue she spent that money very poorly, paying pricey ineffective consultants and spending on ads in CA instead of MI, even the money she spent wisely drowns out the (thus far identified) Russian investment in fake Facebook ads. Sure, it’s possible we’ll learn Russians exploited the void in advertising left in WI and MI to sow Hillary loathing (though this is something Trump’s people have explicitly taken credit for), but we don’t have that yet.

The same is true on the other side, even accounting for all the free advertising the sensationalist press gave Trump. Sheldon Adelson spent $82 million last year, and it’s not like that money came free of demands about policy outcomes involving a foreign country. The Mercers spent millions too ($25 million total for the election, though a lot of that got spent on Ted Cruz), even before you consider their long-term investments in Breitbart and Cambridge Analytica, the former of which is probably the most important media story from last year. Could $100K have an effect amid all this money sloshing about? Sure. But by comparison it’d be tiny, particularly given the efficacy of the already established right wing noise machine, backed by funding orders of magnitude larger than Russia’s spending.
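For a rough sense of that scale argument in numbers, here is the comparison spelled out, using the figures cited above (they are order-of-magnitude estimates, not precise accounting).

```python
# Rough scale comparison of the identified Russian Facebook ad buy against
# the other spending figures cited above.
russian_fb_ads = 100_000        # identified Russian Facebook ad spend
clinton_total  = 1_000_000_000  # Clinton campaign spending
adelson        = 82_000_000     # Sheldon Adelson's 2016 spending
mercers        = 25_000_000     # Mercer spending on the election

print(f"Russian ad buy as a share of Clinton's spending: {russian_fb_ads / clinton_total:.2%}")
print(f"Adelson alone outspent the identified ad buy by {adelson / russian_fb_ads:,.0f}x")
print(f"The Mercers alone outspent it by {mercers / russian_fb_ads:,.0f}x")
```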

Then there’s what we know thus far about how Russia spent that money. Facebook tells us (having done the kind of analysis that even the intelligence community can’t do) that these obviously fake ads weren’t actually focused primarily on the Presidential election.

  • The vast majority of ads run by these accounts didn’t specifically reference the US presidential election, voting or a particular candidate.
  • Rather, the ads and accounts appeared to focus on amplifying divisive social and political messages across the ideological spectrum — touching on topics from LGBT matters to race issues to immigration to gun rights.
  • About one-quarter of these ads were geographically targeted, and of those, more ran in 2015 than 2016.

That’s not to say sowing discord in the US has no effect, or even no effect on the election. But thus far, we don’t have evidence showing that Russia’s Facebook trolls were (primarily) affirmatively pushing for Trump (though their Twitter trolls assuredly were) or that the discord they fostered happened in states that decided the election.

Now consider what a lot of breathless reporting on actual Facebook ads has shown. There was the article showing Russia bought ads supporting an anti-immigrant rally in Twin Falls, ID. The ad in question showed that just four people claimed to attend this rally in the third most Republican state. Another article focused on ads touting events in Texas. While the numbers of attendees were larger, and Texas will go Democratic long before Idaho does, we’re still talking relatively modest events in a state that was not going to decide the election.

To show Russia’s Facebook spending had a measurable impact on last year’s election, you’d want to focus on MI, WI, PA, and other close states. There were surely closely targeted ads that had an important impact, particularly in rural areas where the local press is defunct and in MI where there was little advertising (WI had little presidential advertising, but tons tied to the Senate race); thus far it’s not clear who paid for them, though (again, Trump’s campaign has boasted about doing just that).

Additionally, empiricalerror showed that a number of the identifiably Russian ads simply repurposed existing, American ads.

That’s not surprising, as the ads appear to follow (not lead) activities that happened on far right outlets, including both Breitbart and Infowars. As with the Gizmo that tracks what it claims are Russian linked accounts and thereby gets credulous journalists to claim campaigns obviously pushed by Americans are actually Russian plots, it seems Russian propaganda is following, not leading, the right wing noise machine.

So thus far what we’re seeing is the equivalent of throwing a few matches on top of the raging bonfire that is the well established, vicious, American-funded inferno of far right media. That’s likely to change, but that’s what we have thus far.

But as I said, all this ignores one other key point: We already have evidence of assistance on the election.

Except it went in the opposite direction from where everyone is looking, hunting for instances where Republicans helped Russians decide to buy ads in Idaho that riled up four people.

As I reminded a few weeks back, at a time when Roger Stone and (we now know) a whole bunch of other long-standing GOP rat-fuckers were reaching out to presumed Russian hackers in hopes of finding Hillary’s long lost hacked Clinton Foundation emails, Guccifer 2.0 was reaching out to journalists and others with close ties to Republicans to push the circulation of stolen DCCC documents.

That is, the persona believed to be a front for Russia was distributing documents on House races in swing states such that they might be used by Republican opponents. Some of that data could be used for targeting.

Now, I have no idea whether Russia would risk doing more without some figure like Guccifer 2.0 to provide deniability. That is, I have no idea whether Russia would go so far as to take more timely and granular data about Democrats’ targeting decisions and share that with Republicans covertly (in any case, we are led to believe that data would be old, no fresher than mid-June). But we do know they were living in the Democrats’ respective underwear drawers for almost a year.

And Russia surely wouldn’t need a persona like Guccifer 2.0 if they were sharing stolen data within Russia. If the FSB stole targeting data during the 11 months they were in the DNC servers, they could easily share that data with the Internet Research Agency (the troll farm the IC believes has ties to Russian intelligence) so the IRA could target more effectively than by supporting anti-immigrant rallies in Twin Falls.

Which is a mistake made by many of the sources in the Vanity Fair article everyone keeps sharing: the assumption that the only possible source of targeting help had to be Republicans.

We already know the Russians had help: they got it by helping themselves to campaign data in Democratic servers. It’s not clear they would need any more. Nor, absent proof of more effective targeting, is there any reason to believe that the dated information they stole from the Democrats wouldn’t suffice for what we’ve seen them do. Plus, we’ve never had clear answers about whether Russians were burrowed into far more useful data in Democratic servers. (Again, I think Russia’s actions with influencers on social media, particularly via 4Chan, were far more extensive, but that has more to do with HUMINT than with targeting.)

So, again, I certainly think it’s possible we’ll learn, down the road, that Republicans helped Russians figure out where to place their ads. But we’re well short of having proof of that right now, and we do have proof that some targeting data was flowing in the opposite direction.

Update: This post deals with DB’s exposure of a FB campaign organizing events in FL, which gets us far closer to something of interest. Those events came in the wake of Guccifer 2.0 releasing FL-based campaign information.
