Can Congress — or Robert Mueller — Order Facebook to Direct Its Machine Learning?

The other day I pointed out that two articles (WSJ, CNN) — both of which infer that Robert Mueller obtained a probable cause search warrant on Facebook based on an interpretation that, under Facebook’s privacy policy, a warrant would be required — actually ignored two other possibilities. Without something stronger than inference, then, these articles do not prove Mueller got a search warrant (particularly given that both skip the logical step of proving that the things Facebook shared with Mueller count as content rather than business records).

In response to that and to this column arguing that Facebook should provide more information, some of the smartest surveillance lawyers in the country discussed what kind of legal process would be required, but were unable to come to any conclusions.

Last night, WaPo published a story that made it clear Congress wanted far more than WSJ and CNN had suggested (which largely fell under the category of business records and the ads posted to targets, the latter of which Congress had been able to see but not keep). What Congress is really after is details about the machine learning Facebook used to identify the malicious activity described in its April white paper and the ads described in its most recent report, so it can test whether Facebook’s study was thorough enough.

A 13-page “white paper” that Facebook published in April drew from this fuller internal report but left out critical details about how the Russian operation worked and how Facebook discovered it, according to people briefed on its contents.

Investigators believe the company has not fully examined all potential ways that Russians could have manipulated Facebook’s sprawling social media platform.

[snip]

Congressional investigators are questioning whether the Facebook review that yielded those findings was sufficiently thorough.

They said some of the ad purchases that Facebook has unearthed so far had obvious Russian fingerprints, including Russian addresses and payments made in rubles, the Russian currency.

Investigators are pushing Facebook to use its powerful data-crunching ability to track relationships among accounts and ad purchases that may not be as obvious, with the goal of potentially detecting subtle patterns of behavior and content shared by several Facebook users or advertisers.

Such connections — if they exist and can be discovered — might make clear the nature and reach of the Russian propaganda campaign and whether there was collusion between foreign and domestic political actors. Investigators also are pushing for fuller answers from Google and Twitter, both of which may have been targets of Russian propaganda efforts during the 2016 campaign, according to several independent researchers and Hill investigators.

“The internal analysis Facebook has done [on Russian ads] has been very helpful, but we need to know if it’s complete,” Schiff said. “I don’t think Facebook fully knows the answer yet.”

[snip]

In the white paper, Facebook noted new techniques the company had adopted to trace propaganda and disinformation.

Facebook said it was using a data-mining technique known as machine learning to detect patterns of suspicious behavior. The company said its systems could detect “repeated posting of the same content” or huge spikes in the volume of content created as signals of attempts to manipulate the platform.
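Facebook has not published its actual models, but the two signals it names in the quoted passage (repeated posting of identical content, and spikes in posting volume) can be illustrated with a toy sketch. Everything below — the thresholds, the data layout, the normalization — is an assumption made for illustration, not Facebook’s real system.

```python
from collections import Counter, defaultdict
import hashlib

def content_fingerprint(text):
    """Hash normalized text so identical posts collide on the same key."""
    return hashlib.sha256(text.strip().lower().encode()).hexdigest()

def flag_suspicious(posts, dup_threshold=3, spike_ratio=5.0):
    """posts: list of (account, day, text) tuples.

    Flags an account if it posts the same content dup_threshold or more
    times ("repeated posting of the same content"), or if any single day's
    volume reaches spike_ratio times its median daily volume ("huge spikes
    in the volume of content created"). Thresholds are invented.
    """
    dup_counts = defaultdict(Counter)    # account -> fingerprint -> count
    daily_volume = defaultdict(Counter)  # account -> day -> post count
    for account, day, text in posts:
        dup_counts[account][content_fingerprint(text)] += 1
        daily_volume[account][day] += 1

    flagged = set()
    for account, fingerprints in dup_counts.items():
        if any(n >= dup_threshold for n in fingerprints.values()):
            flagged.add(account)
    for account, days in daily_volume.items():
        counts = sorted(days.values())
        median = counts[len(counts) // 2]
        if median and max(counts) >= spike_ratio * median:
            flagged.add(account)
    return flagged
```

A real system would operate on streams at enormous scale and use learned models rather than fixed thresholds, but the underlying signals are this simple to state.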

The push to do more — led largely by Adam Schiff and Mark Warner (both of whom have gotten ahead of the evidence at times in their respective studies) — is totally understandable. We need to know how malicious foreign actors manipulate the social media companies headquartered in Schiff’s home state to sway elections. That’s presumably why Facebook voluntarily conducted the study of ads in response to cajoling from Warner.

But the demands they’re making are also fairly breathtaking. They’re demanding that Facebook use its own intelligence resources to respond to the questions posed by Congress. They’re also demanding that Facebook reveal those resources to the public.

Now, I’d be surprised (pleasantly) if either Schiff or Warner made such detailed demands of the NSA. Hell, Congress can’t even get NSA to count how many Americans are swept up under Section 702, and that takes far less bulk analysis than Facebook appears to have conducted. And Schiff and Warner surely would never demand that NSA reveal the extent of machine learning techniques that it uses on bulk data, even though that, too, has implications for privacy and democracy (America’s and other countries’). And yet they’re asking Facebook to do just that.

And consider how two laws might offer guidelines, but (in my opinion) fall far short of authorizing such a request.

There’s Section 702, which permits the government to oblige providers to turn over certain data on foreign intelligence targets. Section 702’s minimization procedures even permit Congress to obtain data collected by the NSA for its oversight purposes.

Certainly, the Russian (and now Macedonian and Belarusian) troll farms Congress wants investigated fall squarely under the definition of permissible targets under the Foreign Government certificate. But there’s no public record of NSA making a request as breathtaking as this one, that Facebook (or any other provider) use its own intelligence resources to answer questions the government wants answered. While the NSA does draw from far more data than most people understand (including, probably, providers’ own algorithms about individually targeted accounts), the most sweeping request we know of involves Yahoo scanning all its email servers for a signature.

Then there’s CISA, which permits providers to voluntarily share cyber threat indicators with the federal government, using these definitions:

(A) IN GENERAL.—Except as provided in subparagraph (B), the term “cybersecurity threat” means an action, not protected by the First Amendment to the Constitution of the United States, on or through an information system that may result in an unauthorized effort to adversely impact the security, availability, confidentiality, or integrity of an information system or information that is stored on, processed by, or transiting an information system.

(B) EXCLUSION.—The term “cybersecurity threat” does not include any action that solely involves a violation of a consumer term of service or a consumer licensing agreement.

(6) CYBER THREAT INDICATOR.—The term “cyber threat indicator” means information that is necessary to describe or identify—

(A) malicious reconnaissance, including anomalous patterns of communications that appear to be transmitted for the purpose of gathering technical information related to a cybersecurity threat or security vulnerability;

(B) a method of defeating a security control or exploitation of a security vulnerability;

(C) a security vulnerability, including anomalous activity that appears to indicate the existence of a security vulnerability;

(D) a method of causing a user with legitimate access to an information system or information that is stored on, processed by, or transiting an information system to unwittingly enable the defeat of a security control or exploitation of a security vulnerability;

(E) malicious cyber command and control;

(F) the actual or potential harm caused by an incident, including a description of the information exfiltrated as a result of a particular cybersecurity threat;

(G) any other attribute of a cybersecurity threat, if disclosure of such attribute is not otherwise prohibited by law; or

(H) any combination thereof.

Since January, discussions of Russian tampering have certainly conflated Russia’s efforts on social media with its various hacks. Certainly, Russian abuse of social media has been treated as exploiting a vulnerability. But none of this language defining a cyber threat indicator envisions the malicious use of legitimate ad systems.

Plus, CISA is entirely voluntary. While Facebook thus far has seemed willing to be cajoled into doing these studies, that willingness might change quickly if they had to expose their sources and methods, just as NSA clams up every time you ask about their sources and methods.

Moreover, unlike the sharing provisions in 702 minimization procedures, I’m aware of no language in CISA that permits sharing of this information with Congress.

Mind you, part of the problem may be that we’ve got global companies whose sources and methods are as sophisticated as those of most nation-states. And Facebook, thanks to Europe’s data privacy laws, is hypothetically subject to more controls than nation-state intelligence agencies, inadequate as those controls are.

All that said, let’s be aware of what Schiff and Warner are asking for, however justified it may be from an investigative standpoint. They’re asking for things from Facebook that they, NSA’s overseers, have been unable to get from NSA.

If we’re going to demand transparency on sources and methods, perhaps we should demand it all around?

The (Thus Far) Flimsy Case for Republican Cooperation on Russian Targeting

A number of credulous people are reading this article this morning and sharing it, claiming it is a smoking gun supporting the case that Republicans helped the Russians target their social media, in spite of this line, six paragraphs in.

No evidence has emerged to link Kushner, Cambridge Analytica, or Manafort to the Russian election-meddling enterprise;

Not only is there not yet evidence supporting the claim that Republican party apparatchiks helped Russians target their social media activity, not only does the evidence thus far raise real questions about the efficacy of what Russia did (though that will likely change, especially once we learn more about other platforms), but folks arguing for assistance are ignoring already-public evidence and far more obvious means by which assistance might be obtained.

Don’t get me wrong. I’m acutely interested in the role of Cambridge Analytica, the micro-targeting company that melds Robert Mercer’s money with Facebook’s privatized spying (and was before it was fashionable). I first focused on Jared Kushner’s role in that process, which people are gleefully discovering now, back in May. I have repeatedly said that Facebook — which has been forthcoming about analyzing and sharing (small parts of) its data — and Twitter — which has been less forthcoming — and Google — which is still channeling Sergeant Schultz — should be more transparent and have independent experts review their methodology. I’ve also been pointing out, longer than most, the import of concentration among social media giants as a key vulnerability Russia exploited. I’m particularly interested in whether Russian operatives manipulated influencers — on Twitter, but especially on 4Chan — to magnify anti-Hillary hostility. We may find a lot of evidence that Russia had a big impact on the US election via social media.

But we don’t have that yet and people shooting off their baby cannons over the evidence before us and over mistaken interpretations about how Robert Mueller might get Facebook data are simply degrading the entire concept of evidence.

The first problem with these arguments is an issue of scale. I know a slew of articles have been written about how far $100K spent on Facebook ads goes. Only one I saw dealt with scale, and even that didn’t do so by examining the full scale of what got spent in the election.

Hillary Clinton spent a billion dollars on losing last year. Of that billion, she spent tens of millions paying a 100-person digital media team and another $1 million to pay David Brock to harass people attacking Hillary on social media (see this and this for more on her digital team). And while you can — and I do, vociferously — argue she spent that money very poorly, paying pricey ineffective consultants and spending on ads in CA instead of MI, even the money she spent wisely drowns out the (thus far identified) Russian investment in fake Facebook ads. Sure, it’s possible we’ll learn Russians exploited the void in advertising left in WI and MI to sow Hillary loathing (though this is something Trump’s people have explicitly taken credit for), but we don’t have that yet.

The same is true on the other side, even accounting for all the free advertising the sensationalist press gave Trump. Sheldon Adelson spent $82 million last year, and it’s not like that money came free of demands about policy outcomes involving a foreign country. The Mercers spent millions too (and $25 million total for the election, though a lot of that got spent on Ted Cruz), even before you consider their long-term investments in Breitbart and Cambridge Analytica, the former of which is probably the most important media story from last year. Could $100K have an effect among all this money sloshing about? Sure. But by comparison it’d be tiny, particularly given the efficacy of the already established right wing noise machine backed by funding orders of magnitude larger than Russia’s spending.

Then there’s what we know thus far about how Russia spent that money. Facebook tells us (having done the kind of analysis that even the intelligence community can’t do) that these obviously fake ads weren’t actually focused primarily on the Presidential election.

  • The vast majority of ads run by these accounts didn’t specifically reference the US presidential election, voting or a particular candidate.
  • Rather, the ads and accounts appeared to focus on amplifying divisive social and political messages across the ideological spectrum — touching on topics from LGBT matters to race issues to immigration to gun rights.
  • About one-quarter of these ads were geographically targeted, and of those, more ran in 2015 than 2016.

That’s not to say sowing discord in the US has no effect, or even no effect on the election. But thus far, we don’t have evidence showing that Russia’s Facebook trolls were (primarily) affirmatively pushing for Trump (though their Twitter trolls assuredly were) or that the discord they fostered happened in states that decided the election.

Now consider what a lot of breathless reporting on actual Facebook ads have shown. There was the article showing Russia bought ads supporting an anti-immigrant rally in Twin Falls, ID. The ad in question showed that just four people claimed to attend this rally in the third most Republican state. Another article focused on ads touting events in Texas. While the numbers of attendees are larger, and Texas will go Democratic long before Idaho does, we’re still talking relatively modest events in a state that was not going to decide the election.

To show Russia’s Facebook spending had a measurable impact on last year’s election, you’d want to focus on MI, WI, PA, and other close states. There were surely closely targeted ads, particularly in rural areas where the local press is defunct and in MI where there was little advertising (WI had little presidential advertising, but tons tied to the Senate race), through which such social media had an important impact; thus far, though, it’s not clear who paid for them (again, Trump’s campaign has boasted about doing just that).

Additionally, empiricalerror showed that a number of the identifiably Russian ads simply repurposed existing, American ads.

That’s not surprising, as the ads appear to follow (not lead) activities that happened on far right outlets, including both Breitbart and Infowars. As with the Gizmo that tracks what it claims are Russian linked accounts and thereby gets credulous journalists to claim campaigns obviously pushed by Americans are actually Russian plots, it seems Russian propaganda is following, not leading, the right wing noise machine.
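Checking whether one ad repurposes another is essentially a near-duplicate text comparison, and the idea can be sketched in a few lines. The word-shingle size and similarity threshold here are arbitrary choices for illustration, not anyone’s actual methodology.

```python
def shingles(text, k=3):
    """Break text into overlapping k-word sequences ("shingles")."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a, b):
    """Jaccard similarity of two texts' shingle sets: 1.0 = identical,
    0.0 = no shared k-word sequence. Lightly edited copies score high."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0
```

Run over a corpus of known American ads and the released Russian ones, pairs scoring above some threshold would surface exactly the kind of repurposing described above.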

So thus far what we’re seeing is the equivalent of throwing a few matches on top of the raging bonfire that is the well established, vicious, American-funded inferno of far right media. That’s likely to change, but that’s what we have thus far.

But as I said, all this ignores one other key point: We already have evidence of assistance on the election.

Except, it went the opposite direction from where everyone is looking, hunting for instances where Republicans helped Russians decide to buy ads in Idaho that riled up 4 people.

As I reminded a few weeks back, at a time when Roger Stone and (we now know) a whole bunch of other long-standing GOP rat-fuckers were reaching out to presumed Russian hackers in hopes of finding Hillary’s long lost hacked Clinton Foundation emails, Guccifer 2.0 was reaching out to journalists and others with close ties to Republicans to push the circulation of stolen DCCC documents.

That is, the persona believed to be a front for Russia was distributing documents on House races in swing states such that they might be used by Republican opponents. Some of that data could be used for targeting.

Now, I have no idea whether Russia would risk doing more without some figure like Guccifer 2.0 to provide deniability. That is, I have no idea whether Russia would go so far as take more timely and granular data about Democrats’ targeting decisions and share that with Republicans covertly (in any case, we are led to believe that data would be old, no fresher than mid-June). But we do know they were living in the Democrats’ respective underwear drawers for almost a year.

And Russia surely wouldn’t need a persona like Guccifer 2.0 if they were sharing stolen data within Russia. If the FSB stole targeting data during the 11 months they were in the DNC servers, they could easily share that data with the Internet Research Agency (the troll farm the IC believes has ties to Russian intelligence) so the IRA could target more effectively than supporting immigration rallies in Twin Falls, Idaho.

Which points to a mistake made by many of the sources in the Vanity Fair article everyone keeps sharing: the assumption that the only possible source of targeting help had to be Republicans.

We already know the Russians had help: they got it by helping themselves to campaign data in Democratic servers. It’s not clear they would need any more. Nor, absent proof of more effective targeting, is there any reason to believe that the dated information they stole from the Democrats wouldn’t suffice for what we’ve seen them do. Plus, we’ve never had clear answers about whether Russians were burrowed into far more useful data in Democratic servers. (Again, I think Russia’s actions with influencers on social media, particularly via 4Chan, were far more extensive, but that has more to do with HUMINT than with targeting.)

So, again, I certainly think it’s possible we’ll learn, down the road, that Republicans helped Russians figure out where to place their ads. But we’re well short of having proof of that right now, and we do have proof that some targeting data was flowing in the opposite direction.

Update: This post deals with DB’s exposure of a FB campaign organizing events in FL, which gets us far closer to something of interest. Those events came in the wake of Guccifer 2.0 releasing FL-based campaign information.

Twitter Asked to Tell Reality Winner the FBI Had Obtained Her Social Media Activity

Last week, the Augusta Chronicle reported that the government had unsealed notice that it had obtained access to Reality Winner’s phone and social media metadata. Altogether, the government obtained metadata from her AT&T cell phone, two Google accounts, her Facebook and Instagram accounts, and her Twitter account. Of those providers, it appears that only Twitter asked to tell Winner the government had obtained that information. The government obtained the 2703(d) order on June 13. On June 26, Twitter asked the FBI to rescind the non-disclosure order. In response, FBI got a 180-day deadline on lifting the gag; then on August 31, the FBI asked the court to unseal the order for Twitter, as well as the other providers.

The applications all include this language on Winner’s use of Tor, and more details about using a thumb drive with a computer last November.

During the search of her home, agents found spiral-bound notebooks in which the defendant had written information about setting up a single-use “burner” email account, downloading the TOR darkweb browser at its highest security setting, and unlocking a cell phone to enable the removal and replacement of its SIM card. Agents also learned, and the defendant admitted, that the defendant had inserted a thumb drive into a classified computer in November 2016, while on active duty with the U.S. Air Force and holding a Top Secret/SCI clearance. The defendant claimed to have thrown the thumb drive away in November 2016, and agents have not located the thumb drive.

Because the FBI applied for and eventually unsealed the orders for all these providers, they offer a good way to compare what the FBI asks for from each one — which gives you a sense of how the FBI actually uses these metadata requests to get a comprehensive picture of all the aliases, including IP addresses, someone might use. The MAC and IP addresses, in particular, would be very valuable to identify any of her otherwise unidentified device and Internet usage. Note, too, that AT&T gets asked to share all details of wire communications sent using the phone — so any information, including cell tower location, an app shares with AT&T would be included in that. AT&T, of course, tends to interpret surveillance requests broadly.
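To see why overlapping identifier lists across providers matter, consider a toy correlation sketch: accounts from different providers that share any identifier (IP address, email, payment method) can be clustered into one picture of a single user. The records and identifier formats below are invented for illustration; they are not drawn from the actual orders.

```python
from collections import defaultdict

def cluster_accounts(records):
    """records: list of (account, set_of_identifiers) pairs.

    Returns clusters of accounts transitively connected by any shared
    identifier, using a simple union-find structure.
    """
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    owner = {}  # identifier -> first account seen carrying it
    for account, idents in records:
        parent.setdefault(account, account)
        for ident in idents:
            if ident in owner:
                union(account, owner[ident])
            else:
                owner[ident] = account

    clusters = defaultdict(set)
    for account, _ in records:
        clusters[find(account)].add(account)
    return list(clusters.values())
```

With responses from four providers in hand, even purely non-content metadata lets an investigator tie a Twitter handle, two Google accounts, and a Facebook account back to one person — which is exactly the value of requesting the same identifier categories from everyone.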

Though note: the prosecutor here pretty obviously cut and pasted from the Google request when drafting the requests for the social media companies, given that she copied over the Google language on cookies into her Twitter request.

AT&T

AT&T Corporation is required to disclose the following records and other information, if available, to the United States for each Account listed in Part I of this Attachment, for the time period beginning June 1, 2016, through and including June 7, 2017:

A. The following information about the customers or subscribers of the Account:
1. Names (including subscriber names, user names, and screen names);
2. Addresses (including mailing addresses, residential addresses, business addresses, and e-mail addresses);
3. Local and long distance telephone connection records;
4. Records of session times and durations, and the temporarily assigned network addresses (such as Internet Protocol (“IP”) addresses) associated with those sessions;
5. Length of service (including start date) and types of service utilized;
6. Telephone or instrument numbers (including MAC addresses, Electronic Serial Numbers (“ESN”), Mobile Electronic Identity Numbers (“MEIN”), Mobile Equipment Identifier (“MEID”), Mobile Identification Numbers (“MIN”), Subscriber Identity Modules (“SIM”), Mobile Subscriber Integrated Services Digital Network Number (“MSISDN”), International Mobile Subscriber Identifiers (“IMSI”), or International Mobile Equipment Identities (“IMEI”));
7. Other subscriber numbers or identities (including the registration Internet Protocol (“IP”) address); and
8. Means and source of payment for such service (including any credit card or bank account number) and billing records.

B. All records and other information (not including the contents of communications) relating to wire and electronic communications sent from or received by the Account, including the date and time of the communication, the method of communication, and the source and destination of the communication (such as source and destination email addresses, IP addresses, and telephone numbers), and including information regarding the cell towers and sectors through which the communications were sent or received.

Records of any accounts registered with the same email address, phone number(s), or method(s) of payment as the account listed in Part I.

Google

Google is required to disclose the following records and other information, if available, to the United States for each account or identifier listed in Part 1 of this Attachment (“Account”), for the time period beginning June 1, 2016, through and including June 7, 2017:

A. The following information about the customers or subscribers of the Account:
1. Names (including subscriber names, user names, and screen names);
2. Addresses (including mailing addresses, residential addresses, business addresses, and e-mail addresses);
3. Local and long distance telephone connection records;
4. Records of session times and durations, and the temporarily assigned network addresses (such as Internet Protocol (“IP”) addresses) associated with those sessions;
5. Length of service (including start date) and types of service utilized;
6. Telephone or instrument numbers (including MAC addresses);
7. Other subscriber numbers or identities (including temporarily assigned network addresses and registration Internet Protocol (“IP”) addresses (including carrier grade natting addresses or ports)); and
8. Means and source of payment for such service (including any credit card or bank account number) and billing records.

B. All records and other information (not including the contents of communications) relating to the Account, including:
1. Records of user activity for each connection made to or from the Account, including log files; messaging logs; the date, time, length, and method of connections; data transfer volume; user names; and source and destination Internet Protocol addresses;
2. Information about each communication sent or received by the Account, including the date and time of the communication, the method of communication, and the source and destination of the communication (such as source and destination email addresses, IP addresses, and telephone numbers);
3. Records of any accounts registered with the same email address, phone number(s), method(s) of payment, or IP address as either of the accounts listed in Part 1; and
4. Records of any accounts that are linked to either of the accounts listed in Part 1 by machine cookies (meaning all Google user IDs that logged into any Google account by the same machine as either of the accounts in Part 1).

Facebook/Instagram

Facebook, Inc. is required to disclose the following records and other information, if available, to the United States for each account or identifier listed in Part 1 of this Attachment (“Account”), for the time period beginning June 1, 2016, through and including June 7, 2017:

A. The following information about the customers or subscribers of the Account:
1. Names (including subscriber names, user names, and screen names);
2. Addresses (including mailing addresses, residential addresses, business addresses, and e-mail addresses);
3. Local and long distance telephone connection records;
4. Records of session times and durations, and the temporarily assigned network addresses (such as Internet Protocol (“IP”) addresses) associated with those sessions;
5. Length of service (including start date) and types of service utilized;
6. Telephone or instrument numbers (including MAC addresses);
7. Other subscriber numbers or identities (including temporarily assigned network addresses and registration Internet Protocol (“IP”) addresses (including carrier grade natting addresses or ports)); and
8. Means and source of payment for such service (including any credit card or bank account number) and billing records.

B. All records and other information (not including the contents of communications) relating to the Account, including:
1. Records of user activity for each connection made to or from the Account, including log files; messaging logs; the date, time, length, and method of connections; data transfer volume; user names; and source and destination Internet Protocol addresses;
2. Information about each communication sent or received by the Account, including the date and time of the communication, the method of communication, and the source and destination of the communication (such as source and destination email addresses, IP addresses, and telephone numbers);
3. Records of any accounts registered with the same email address, phone number(s), method(s) of payment, or IP address as either of the accounts listed in Part I; and
4. Records of any accounts that are linked to either of the accounts listed in Part I by machine cookies (meaning all Facebook/Instagram user IDs that logged into any Facebook/Instagram account by the same machine as either of the accounts in Part I).

Twitter

Twitter, Inc. is required to disclose the following records and other information, if available, to the United States for each account or identifier listed in Part 1 of this Attachment (“Account”), for the time period beginning June 1, 2016, through and including June 7, 2017:

A. The following information about the customers or subscribers of the Account:
1. Names (including subscriber names, user names, and screen names);
2. Addresses (including mailing addresses, residential addresses, business addresses, and e-mail addresses);
3. Local and long distance telephone connection records;
4. Records of session times and durations, and the temporarily assigned network addresses (such as Internet Protocol (“IP”) addresses) associated with those sessions;
5. Length of service (including start date) and types of service utilized;
6. Telephone or instrument numbers (including MAC addresses);
7. Other subscriber numbers or identities (including temporarily assigned network addresses and registration Internet Protocol (“IP”) addresses (including carrier grade natting addresses or ports)); and
8. Means and source of payment for such service (including any credit card or bank account number) and billing records.

B. All records and other information (not including the contents of communications) relating to the Account, including:
1. Records of user activity for each connection made to or from the Account, including log files; messaging logs; the date, time, length, and method of connections; data transfer volume; user names; and source and destination Internet Protocol addresses;
2. Information about each communication sent or received by the Account, including the date and time of the communication, the method of communication, and the source and destination of the communication (such as source and destination email addresses, IP addresses, and telephone numbers).
3. Records of any accounts registered with the same email address, phone number(s), method(s) of payment, or IP address as the account listed in Part I; and
4. Records of any accounts that are linked to the account listed in Part I by machine cookies (meaning all Google [sic] user IDs that logged into any Google [sic] account by the same machine as the account in Part I).

Facebook’s Global Data: A Parallel Intelligence Source Rivaling NSA

In April, Facebook released a laudable (if incredible) report on Russian influence operations on Facebook during the election; the report found that just .1% of election-related content shared on the platform was shared by malicious state-backed actors.

Facebook conducted research into overall civic engagement during this time on the platform, and determined that the reach of the content shared by false amplifiers was marginal compared to the overall volume of civic content shared during the US election.

[snip]

The reach of the content spread by these accounts was less than one-tenth of a percent of the total reach of civic content on Facebook.

Facebook also rather coyly confirmed they had reached the same conclusion the Intelligence Community had about Russia’s role in tampering with the election.

Facebook is not in a position to make definitive attribution to the actors sponsoring this activity. It is important to emphasize that this example case comprises only a subset of overall activities tracked and addressed by our organization during this time period; however our data does not contradict the attribution provided by the U.S. Director of National Intelligence in the report dated January 6, 2017.

Skeptics haven’t paid much attention to this coy passage (and Facebook certainly never called attention to it), but it means a second entity with access to global data — like the NSA but private — believes Russia was behind the election tampering.

Yesterday, Facebook came out with another report, quantifying how many ads came from entities that might be Russian information operations. They searched for two different things. First, ads from obviously fake accounts: they found that 470 inauthentic accounts paid for 3,000 ads costing $100,000. But most of those ads didn’t explicitly discuss a presidential candidate, and more of the geo-targeted ones appeared in 2015 than in 2016.

  • The vast majority of ads run by these accounts didn’t specifically reference the US presidential election, voting or a particular candidate.
  • Rather, the ads and accounts appeared to focus on amplifying divisive social and political messages across the ideological spectrum — touching on topics from LGBT matters to race issues to immigration to gun rights.
  • About one-quarter of these ads were geographically targeted, and of those, more ran in 2015 than 2016.
  • The behavior displayed by these accounts to amplify divisive messages was consistent with the techniques mentioned in the white paper we released in April about information operations.

Elsewhere Facebook has said some or all of these accounts are associated with a troll farm, the Internet Research Agency, in St. Petersburg.

The Intelligence Community Report on the Russian hacks specifically mentioned the Internet Research Agency — suggesting it probably had close ties to Putin. But it also suggested there was significant advertising that was explicitly pro-Trump, which may be inconsistent with Facebook’s observation that the majority of these ads were policy, rather than candidate, ads.

Russia used trolls as well as RT as part of its influence efforts to denigrate Secretary Clinton. This effort amplified stories on scandals about Secretary Clinton and the role of WikiLeaks in the election campaign.

  • The likely financier of the so-called Internet Research Agency of professional trolls located in Saint Petersburg is a close Putin ally with ties to Russian intelligence.
  • A journalist who is a leading expert on the Internet Research Agency claimed that some social media accounts that appear to be tied to Russia’s professional trolls—because they previously were devoted to supporting Russian actions in Ukraine—started to advocate for President-elect Trump as early as December 2015.

The other thing Facebook did was measure ads that might have originated in Russia without mobilizing an obviously fake account. That added another $50,000 in advertising to the pot of potential Russian disinformation.

In this latest review, we also looked for ads that might have originated in Russia — even those with very weak signals of a connection and not associated with any known organized effort. This was a broad search, including, for instance, ads bought from accounts with US IP addresses but with the language set to Russian — even though they didn’t necessarily violate any policy or law. In this part of our review, we found approximately $50,000 in potentially politically related ad spending on roughly 2,200 ads.

Still, that’s not all that much — it may explain why Facebook found only .1% of activity was organized disinformation.
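The kind of weak-signal search Facebook describes — for instance, ads bought from US IP addresses but with the account language set to Russian — can be thought of as a permissive filter over ad records. A minimal sketch, with made-up records and field names (nothing here reflects Facebook’s actual schema):

```python
# Hypothetical ad records; fields and values are illustrative only.
ads = [
    {"id": 1, "ip_country": "US", "language": "ru", "spend": 25.0},
    {"id": 2, "ip_country": "US", "language": "en", "spend": 40.0},
    {"id": 3, "ip_country": "RU", "language": "ru", "spend": 10.0},
]

def weak_russian_signal(ad):
    """Flag any weak Russia link: Russian-language account settings
    or a Russian IP, regardless of whether any policy was violated."""
    return ad["language"] == "ru" or ad["ip_country"] == "RU"

flagged = [ad for ad in ads if weak_russian_signal(ad)]
total_spend = sum(ad["spend"] for ad in flagged)
print(len(flagged), total_spend)   # 2 ads, $35.00 flagged
```

The point of such a filter is recall, not precision: it deliberately sweeps in accounts with no organized connection to anything, which is why Facebook hedged that these ads “didn’t necessarily violate any policy or law.”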

In its report, Facebook revealed that it had shared this information with those investigating the election.

We have shared our findings with US authorities investigating these issues, and we will continue to work with them as necessary.

Subsequent reporting has made clear that includes Congressional Committees and Robert Mueller’s team. I’m curious whether Mueller made the request (with legal process or without), and whether Facebook then took it upon itself to share the topline data publicly. If so, we should be asking where the results of similar requests to Twitter and Google are.

I’m interested in this data — though I agree both with those who argue this advertising needs to be covered by campaign finance regulations, and with those who hope independent scholars can review and vet Facebook’s methodology. But I’m just as interested that we’re getting it at all.

Facebook isn’t running around bragging about this; if too many people grokked it, more and more might stop using Facebook. But what these two reports from Facebook both reflect is the global collection of intelligence. That intelligence is usually used to sell highly targeted advertisements. But in the wake of Russia’s tampering with last year’s election, Facebook has had the ability to take a global view of what occurred. Arguably, it has shared more of that intelligence than the IC has, and on the specific question of whether the Internet Research Agency focused more on Trump or on exacerbating racial divisions in the country, it has presented somewhat different results than the IC has.

So in addition to observing (and treating just as skeptically as we would data from the NSA) the data Facebook reports, we would do well to recognize that we’re getting reports from a parallel global intelligence collector.

How the “Fake News” Panic Fed Breitbart

In just about every piece I wrote on “fake news” in the last year, I argued that the most influential piece of fake news of the campaign came not from the Russians, but instead from Fox News, in the form of Bret Baier’s early November “scoop” that Hillary Clinton would soon be indicted.

I was partly wrong about that claim. But substantially correct.

That’s the conclusion drawn by a report released by Harvard’s Berkman Klein Center last week. The report showed that the key dynamic behind Trump’s win came from the asymmetric polarization of our media sphere, embodied most dramatically in the way that Breitbart not only created a bubble for conservatives, but affected the overall agenda of the press, particularly with immigration (a conclusion that is all the more important given Steve Bannon’s return to Breitbart just as white supremacist protests gather in intensity).

So I was correct that the most important fake news was coming from right wing sites. I just pointed to Fox News, instead of the increasingly dominant Breitbart (notably, while Baier retracted his indictment claim, Breitbart didn’t stop magnifying it).

Here’s what the report had to say about the “fake news” that many liberals instead focused on.

Our data suggest that the “fake news” framing of what happened in the 2016 campaign, which received much post-election attention, is a distraction. Moreover, it appears to reinforce and buy into a major theme of the Trump campaign: that news cannot be trusted. The wave of attention to fake news is grounded in a real phenomenon, but at least in the 2016 election it seems to have played a relatively small role in the overall scheme of things. We do indeed find stories in our data set that come from sites, like Ending the Fed, intended as political clickbait to make a profit from Facebook, often with no real interest in the political outcome. But while individual stories may have succeeded in getting attention, these stories are usually of tertiary significance. In a scan of the 100 most shared stories in our Twitter and Facebook sets, the most widely shared fake news stories (in this sense of profit-driven Facebook clickbait) were ranked 66th and 55th by Twitter and Facebook shares, respectively, and on both Twitter and Facebook only two of the top 100 stories were from such sites. Out of two million stories, that may seem significant, but in the scheme of an election, it seems more likely to have yielded returns to its propagators than to have actually swayed opinions in significant measure. When we look at our data week by week, prominent fake news stories of this “Macedonian” type are rare and were almost never among the most significant 10 or 20 stories of the week, much less the election as a whole. Disinformation and propaganda from dedicated partisan sites on both sides of the political divide played a much greater role in the election. It was more rampant, though, on the right than on the left, as it took root in the dominant partisan media on the right, including Breitbart, Daily Caller, and Fox News. 
Moreover, the most successful examples of these political clickbait stories are enmeshed in a network of sites that have already created, circulated, and validated a set of narrative lines and tropes familiar within their network. The clickbait sites merely repackage and retransmit these already widely shared stories. We document this dynamic for one of the most successful such political clickbait stories, published by Ending the Fed, in the last chapter of this report, and we put it in the context of the much more important role played by Breitbart, Fox News, and the Daily Caller in reorienting the public conversation after the Democratic convention around the asserted improprieties associated with the Clinton Foundation.

Our observations suggest that fixing the American public sphere may be much harder than we would like. One feature of the more widely circulated explanations of our “post-truth” moment—fake news sites seeking Facebook advertising, Russia engaging in a propaganda war, or information overload leading confused voters to fail to distinguish facts from false or misleading reporting—is that these are clearly inconsistent with democratic values, and the need for interventions to respond to them is more or less indisputable. If profit-driven fake news is the problem, solutions like urging Facebook or Google to use technical mechanisms to identify fake news sites and silence them by denying them advertising revenue or downgrading the visibility of their sites seem, on their face, not to conflict with any democratic values. Similarly, if a foreign power is seeking to influence our democratic process by propagandistic means, then having the intelligence community determine how this is being done and stop it is normatively unproblematic. If readers are simply confused, then developing tools that will feed them fact-checking metrics while they select and read stories might help. These approaches may contribute to solving the disorientation in the public sphere, but our observations suggest that they will be working on the margins of the core challenge.

As the report notes, it would be easy if our news got poisoned chiefly by Russia or Macedonian teenagers, because that would be far easier to deal with than the fact that First Amendment protected free speech instead skewed our political debate so badly as to elect Trump. But addressing Russian propaganda or Facebook algorithms will still leave the underlying structure of a dangerously powerful and unhinged right wing noise machine intact.

Which makes “fake news,” like potential poll tampering even as state after state suppresses the vote of likely Democratic voters, another area where screaming about Russian influence distracts from the more proximate threat.

Or perhaps the focus on “fake news” is even worse. As the Berkman report notes, when rational observers spend inordinate time suggesting that fake news dominated the election when in fact sensational far-right news did, it only normalizes the far right’s (and Trump’s) claims that the news is fake. Not to mention the way that labeling outlets further left — but totally legitimate — as fake news normalized coverage even further to the right than the already asymmetric environment.

Fake news is a problem — as is the increasing collapse in confidence in US ideology generally. But it’s not a bigger problem than Breitbart. And as Bannon returns to his natural lair, the left needs to turn its attention to the far harder, but far more important, challenge of Breitbart.

Yah, These ARE The Droids We Have Been Looking For And Fearing

I did not always write about it so much here, but I got fairly deep into “Deflategate” analysis and law when it was going on. Because it was fascinating. I met so many lawyers, professors and others, it was bonkers. I have remained friends with many, if not most, of them. One is Alexandra J. Roberts, which is kind of funny because she was not necessarily one of the major players. Yet she is one of the enduring benefits I have come to love from the bigger picture.

Today, Ms. Roberts advises of some R2D2-like cop robots. I “might” have engaged in some frivolity in response. But, really, it is a pretty notable moment.

Police droids on the ground? Police drones in the air? You think Kyllo will protect you from a Supreme Court with Neil Gorsuch on it? Hell, you think Merrick Garland would not have done what he has done all of his life and sign off on ever greater law enforcement collection and oppression? Not a chance in hell. Neither Gorsuch, nor Garland, would ever have penned what Scalia did in Kyllo:

It would be foolish to contend that the degree of privacy secured to citizens by the Fourth Amendment has been entirely unaffected by the advance of technology. For example, as the cases discussed above make clear, the technology enabling human flight has exposed to public view (and hence, we have said, to official observation) uncovered portions of the house and its curtilage that once were private. See Ciraolo, supra, at 215. The question we confront today is what limits there are upon this power of technology to shrink the realm of guaranteed privacy.

So, with no further ado, here, via the Boston Globe, is the deal:

There’s a new security officer in town. But this one runs on batteries, not Dunkin’ Donuts.

Next time you’re visiting the Prudential Center, don’t be alarmed if you bump into a large, rolling robot as it travels the corridors where shoppers pop in and out of stores.

No, it’s not an oversized Roomba on the loose. It’s the “Knightscope K5,” an egg-shaped autonomous machine equipped with real-time monitoring and detection technology that allows it to keep tabs on what’s happening nearby.

Marvelous! R2D2 is making us all safer!

Nope. Sorry. Safe streets, broken windows, and “cop on the beat” policing cannot be accomplished by a tin can.

Just Say No to this idiotic and lazy policing bullshit. The next thing you know, the tin can will be probable cause. And Neil Gorsuch will help further that craven “good faith” reliance opinion in a heartbeat.

Parting Shot: Holy hell, we have our first reference to hate crimes for anti-cop robot violence! See here.

Frankly, having been in the field for three decades, I think the thought that cops are proper “hate crime” victims is absurd. Honestly, all “hate crimes” laws are completely absurd as they create different and more, and less, valuable classes of human crime victims. This may sound lovely to you in the safety of your perch, where you want to lash out at the evil others.

But if the “all men are created equal” language in the Declaration of Independence is to be given the meaning that so many demagogues over American history assign to it, then the “hate crimes” segregation and preference of one set of human victims over others, is total unfathomable bullshit.

That is just as to humans. Let’s not even go to the “victim’s rights” of squeaky ass little R2D2 tin cans.

What Fake French News Looks Like (to a British Consulting Company)

Along with reports that APT 28 targeted Emmanuel Macron that don’t prominently reveal that Macron believes he withstood the efforts to phish his campaign, the post-mortem on the first round of the French election has also focused on the fake news that supported Marine Le Pen.

As a result, this study — the headline from which claimed 25% of links shared during the French election pointed to fake news — has gotten a lot of attention.

The study, completed by a British consulting firm (though the lead on the study is a former French journalist) and released in full only in English, is as interesting for its assumptions as anything else.

Engagement studies aren’t clear what they’re showing, but this one is aware of that

Before I explain why, let me stipulate that I accept the report’s conclusion that a ton of Le Pen supporters (though it doesn’t approach it from that direction) relied on fake news and/or Russian sources. The methodology appears to suffer from the same problem some of BuzzFeed’s reporting on fake news does, in that it doesn’t measure the value of shared news, but at least it admits that methodological problem (and promises to discuss it at more length in a follow-up).

Sharing is the overt act of taking an article or video or image that one sees in social media and, literally, sharing it digitally with one’s own followers or even into the public domain. Sharing therefore implies an elevated level of interest: people share articles that they feel others should see. While there are tools that help us track and quantify how many articles are shared, they cannot explain the sharer’s intention. It seems plausible, particularly in a political context, that sharing implies endorsement, yet even this is problematic as sharing can often imply shock and disagreement. In the third instalment [sic] of this study, Bakamo will explore in depth the extent to which people agree or disagree with what they share, but for this report (and the second, updated version), the simple act of sharing—whatever the intention—is nonetheless highly relevant. It provides a way of gauging activity and engagement.

[snip]

These are the “likes” or “shares” in Facebook, or “favourites” or “retweets” in Twitter. While these can be counted, we do not know whether the person has actually clicked through to read the content being shared before they like or retweet. This information is only available to the account owner. One of the questions that is often raised about social media is whether users do indeed read the article or respond simply to the headlines that appear in their newsfeed. We are unable to comment on this.

In real word terms, engagement can be two things. It can be agreement—whether reflexive or reflective—with the content shared. It can also, however, be disagreement: Facebook’s nuanced “like” system (in which anger is a valid form of engagement) or Twitter’s citations that enable a user to comment on the link while sharing it both permit these negative expressions.

The study is perhaps most interesting for what it shows about the differing sharing habits from different parts of its media economy, with no overlap between those who share what it deems “traditional” media and those who share what I’d deem conspiracist media. That finding, more than almost any other one, suggests what might be needed to engage in a dialogue across these clusters. Ultimately, what the study shows is increased media polarization not on partisan grounds, but on response to globalization.

Russian media looks very important when you only track Russian media

As I noted, one of the headlines that has been taken away from this study is that Le Pen voters shared a lot of Russian news sources — and I don’t contest that.

But there are two interesting details about how that finding came to be that important to this study.

First, the study defines everything in contradistinction from what it calls “traditional” media.

There are broad five sections of the Media Map. They are defined by their editorial distance from traditional media narratives. The less accepting a source is of traditional media narratives, the farther away it is (spatially) on the Map.

In the section defining traditional media, the study focuses on establishment and commercialism (including advertising), even while asserting — but not proving — that all traditional media “adher[e] to journalistic standards” (which is perhaps a fairer assumption still in France than in the US or UK, but nevertheless it is an assumption).

This section of the Media Map is populated by media sources that belong to the established commercial and conventional media landscape, such as websites of national and regional newspapers, TV and radio stations, online portals adhering to journalistic standards, and news aggregators.

It does this, but insists that this structure that privileges “traditional” media without proving that it merits that privilege is not meant to “pass moral judgement or to define what is ‘good’ or ‘evil’.”

Most interesting of all, the study includes — without detail or interrogation — international media sources “exhibiting these same characteristics” in its traditional media category.

These are principally France-based sources; however, French-speaking international media sources exhibiting these same characteristics were also placed into the Traditional Media section.

But, having defined some international news sources as “traditional,” the study then uses Russian influence as a measure of whether a media cluster was non-traditional.

The analysis only identified foreign influence connected with Russia. No other foreign source of influence was detected.

It did this — measuring Russian influence as a measure of non-traditional status — even though the study showed this was true primarily on the hard right and among conspiracists.

Syria as a measure of journalistic standards

Among the other kinds of content that this study measures, it repeatedly describes how those outlets it has clustered as non-traditional (primarily those it calls reframing outlets) deal with Syria.

It asserts that those who treat Bashar al-Assad as a “protagonist” in the Syrian civil war as being influenced by Russian sources.

A dominant theme reflected by sources where Russian influence is detected is the war in Syria, the various actors involved, and the refugee crisis. In these articles, Bachar Assad becomes the protagonist, a perspective opposite to that which is reported by traditional media. Articles touching on refugees and migrants tend to reinforce anti-Islam and anti-migrant positions.

The anti-imperialists focus on Trump’s ineffectual missile strike on Syria, which — the study concludes — must derive from Russian influence.

Trump’s “téléréalité” attack on Syria is a more recent example of content in this cluster. This is not surprising, however, as Russian influence is detectable on a number of sites in this cluster.

It defines conspiracists as such because they say the US supports terrorist groups (and also because they portray Assad as trustworthy).

Syria is an important theme in this cluster. Per these sources, and contrary to reports in traditional media, the Western powers are supporting the terrorist, while Bashar Assad is trustworthy and tolerant leader, as witness reports prove.

The pro-Islam non-traditional (!!) cluster is defined not because of its distance from “traditional” news (which the study finds it generally is not) but in part because its outlets suggest the US has been supporting Assad.

American imperialism is another dominant theme in this cluster, driven by the belief that the US has been secretly supporting the Assad regime.

You can see, now, the problem here. It is a demonstrable fact that America’s covert funding did, for some time, support rebel groups that worked alongside Al Qaeda affiliates (and predictably and with the involvement of America’s Sunni allies saw supplies funneled to al Qaeda or ISIS as a result). It is also the case that both historically (when the US was rendering Maher Arar to Syria to be tortured) and as an interim measure to forestall the complete collapse of Syria under Obama, the US’ opposition to Assad has been half-hearted, which may not be support but certainly stopped short of condemnation for his atrocities.

And while we’re not supposed to talk about these things — and don’t, in part, because they are an openly acknowledged aspect of our covert operations — they are a better representation of the complex clusterfuck of American intervention in Syria than one might get — say — from the French edition of the BBC. They are, of course, similar to the American “traditional” news insistence that Obama has done “nothing” in Syria, long after Chuck Hagel confirmed our “covert” operations there. Both because the reality is too complex to discuss easily, and because there is a “tradition” of not reporting on even the most obvious covert actions if done by the US, Syria is a subject on which almost no one is providing an adequately complex picture of what is going on.

On both sides of the Atlantic, the measure of truth on Syria has become the simplified narrative you’re supposed to believe, not what the complexity of the facts show. And that’s before you get to where we are now, pretending to be allied with both Turkey and the Kurds they’re shooting at.

The shock at the breakdown of the left-right distinction

What’s most fascinating about the study, however, is the seeming distress with which it observes that “reframing” media — outlets it claims are reinterpreting the real news — doesn’t break down into a neat left-right axis.

Media sources in the Reframe section share the motivation to counter the Traditional Media narrative. The media sources see themselves as part of a struggle to “reinform” readers of the real contexts and meanings hidden from them when they are informed by Traditional Media sources. This section breaks with the traditions of journalism, expresses radical opinions, and refers to both traditional and alternative sources to craft a disruptive narrative. While there is still a left-right distinction in this section, a new narrative frame emerges where content is positioned as being for or against globalisation and not in left-right terms. Indeed, the further away media sources are from the Traditional section, the less a conventional left-right attribution is possible.

[snip]

The other narrative frame detectable through content analysis is the more recent development referred to in this study as the global versus local narrative frame. Content published in this narrative frame is positioned as being for or against globalisation and not in left-right terms. Indeed, the further away media sources are from the Traditional section, the less a conventional left-right attribution is possible. While there are media sources in the Reframe section on both on the hard right and hard left sides, they converge in the global versus local narrative frame. They take concepts from both left and right, but reframe them in a global-local context. One can find left or right leanings of media sources located in the middle of Reframe section, but this mainly relates to attitudes about Islam and migrants. Otherwise, left and right leaning media sources in the Reframe section share one common enemy: globalisation and the liberal economics that is associated with it.

Now, I think some of the study’s clustering is artificial to create this split (for example, in the way it treats environmentalism as an Extend rather than a Reframe cluster).

But even more, I find the confusion fascinating. Particularly in the absence of — as it did for Syria coverage — any indication of what is considered the “true” or “false” news about globalization. Opposition to globalization, as such, is the marker, not a measure of whether an outlet is reporting in factual manner on the status and impact and success at delivering the goals of globalization.

And if the patterns of sharing in the study are in fact accurate, what the study actually shows is that the ideologies of globalization and nationalism have become completely incoherent to each other. And purveyors of globalization as the “traditional” view do not, here, consider the status of globalization (on either side) as a matter of truth or falseness, as a measure whether the media outlet taking a side in favor of or against globalization adheres to the truth.

I’ve written a fair amount about the failure of American ideology — and about the confusion among the priests of that ideology as it no longer commands unquestioning sway.

This study on fake news in France completed by a British consulting company in English is very much a symptom of that process.

But the Cold War is outdated!

Which brings me to the funniest part of the paper. As noted above, the paper claims that anti-imperialists are influenced by Russian sources, pointing to their criticism of Trump’s Tomahawk missile strike on Syria. But it’s actually talking about what it calls a rump Communist Cold War ideology.

This cluster contains the remains of the traditional Communist groupings. They publish articles on the imperialist system. They concentrate on foreign politics and ex-Third World countries. They frame their worldview through a Cold War logic: they see the West (mainly the US) versus the East, embodied by Russia. Russia is idolised, hence these sites have a visible anti-American and anti-Zionist stance. The antiquated nature of a Cold War frame given the geo-political transformations of the last 25 years means these sources are often forced to borrow ideas from the extreme right.

Whatever the merit in its analysis here, consider what it means for a study the assumptions of which treat Russian influence as a special kind of international influence, even while conducting no reflection on whether the globalization/nationalization polarization it finds so striking can be measured in terms of fact claims.

The new Cold War seems unaware that the old Cold War isn’t so out of fashion after all.

Facebook Claims Just .1% of Election Related Sharing Was Information Operations

In a fascinating report on the use of the social media platform for Information Operations released yesterday, Facebook makes a startling claim. Less than .1% of what got shared during the election was shared by accounts set up to engage in malicious propaganda.

Concurrently, a separate set of malicious actors engaged in false amplification using inauthentic Facebook accounts to push narratives and themes that reinforced or expanded on some of the topics exposed from stolen data. Facebook conducted research into overall civic engagement during this time on the platform, and determined that the reach of the content shared by false amplifiers was marginal compared to the overall volume of civic content shared during the US election.12

In short, while we acknowledge the ongoing challenge of monitoring and guarding against information operations, the reach of known operations during the US election of 2016 was statistically very small compared to overall engagement on political issues.

12 To estimate magnitude, we compiled a cross functional team of engineers, analysts, and data scientists to examine posts that were classified as related to civic engagement between September and December 2016. We compared that data with data derived from the behavior of accounts we believe to be related to Information Operations. The reach of the content spread by these accounts was less than one-tenth of a percent of the total reach of civic content on Facebook.

That may seem like a totally bogus number — and it may well be! But to assess it, understand what they’re measuring.

That’s one of the laudable aspects of the report: it tries to break down the various parts of the process, distinguishing things like “disinformation” — inaccurate information spread intentionally — from “misinformation” — inaccurate information spread without malicious intent.

Information (or Influence) Operations – Actions taken by governments or organized non-state actors to distort domestic or foreign political sentiment, most frequently to achieve a strategic and/or geopolitical outcome. These operations can use a combination of methods, such as false news, disinformation, or networks of fake accounts (false amplifiers) aimed at manipulating public opinion.

False News– News articles that purport to be factual, but which contain intentional misstatements of fact with the intention to arouse passions, attract viewership, or deceive.

False Amplifiers – Coordinated activity by inauthentic accounts with the intent of manipulating political discussion (e.g., by discouraging specific parties from participating in discussion, or amplifying sensationalistic voices over others).

Disinformation – Inaccurate or manipulated information/content that is spread intentionally. This can include false news, or it can involve more subtle methods, such as false flag operations, feeding inaccurate quotes or stories to innocent intermediaries, or knowingly amplifying biased or misleading information. Disinformation is distinct from misinformation, which is the inadvertent or unintentional spread of inaccurate information without malicious intent.

Having thus defined those terms, Facebook distinguishes further between false news sent with malicious intent from that sent for other purposes — such as to make money. In this passage, Facebook also acknowledges the important detail for it: false news doesn’t work without amplification.

Intent: The purveyors of false news can be motivated by financial incentives, individual political motivations, attracting clicks, or all the above. False news can be shared with or without malicious intent. Information operations, however, are primarily motivated by political objectives and not financial benefit.

Medium: False news is primarily a phenomenon related to online news stories that purport to come from legitimate outlets. Information operations, however, often involve the broader information ecosystem, including old and new media.

Amplification: On its own, false news exists in a vacuum. With deliberately coordinated amplification through social networks, however, it can transform into information operations.

So the stat above — the amazingly low .1% — is just a measure of the amplification of stories by Facebook accounts created for the purpose of maliciously amplifying certain fake stories; it doesn’t count the amplification of fake stories by people who believe them or who aren’t formally engaged in an information operation. Indeed, the report notes that after an entity amplifies something falsely, “organic proliferation of the messaging and data through authentic peer groups and networks [is] inevitable.” The .1% doesn’t count Trump’s amplification of stories (or that of his followers).

Furthermore, the passage states it is measuring accounts that “reinforced or expanded on some of the topics exposed from stolen data,” which would seem to limit which fake stories it tracked to things like PizzaGate (which derived in part from a Podesta email), excluding the fake claim that the Pope endorsed Trump (though later on the report says it identifies false amplifiers by behavior, not by content).

The entire claim raises questions about how Facebook distinguishes the false amplifiers from the accounts “authentically” sharing false news. In a passage boasting that it has already suspended 30,000 fake accounts in the context of the French election, the report includes an image suggesting that part of what it does to identify fake accounts is spotting clusters of like activity.
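The “clusters of like activity” idea can be sketched roughly as follows. This is purely hypothetical (Facebook’s actual detection pipeline is not public): flag groups of accounts that post identical content within the same narrow time window, which is one crude behavioral signal of coordination.

```python
from collections import defaultdict

def find_clusters(posts, min_size=3, bucket_seconds=3600):
    """Group posts by (content, time bucket); return account clusters large
    enough to look coordinated. `posts` is a list of
    (account_id, content, unix_timestamp) tuples. Hypothetical illustration,
    not Facebook's actual method."""
    buckets = defaultdict(set)
    for account, content, ts in posts:
        buckets[(content, ts // bucket_seconds)].add(account)
    return [accounts for accounts in buckets.values() if len(accounts) >= min_size]

posts = [
    ("a1", "same fake story", 100), ("a2", "same fake story", 200),
    ("a3", "same fake story", 300), ("b1", "organic post", 400),
]
print(find_clusters(posts))  # one cluster containing a1, a2, a3
```

A behavior-based signal like this is consistent with the report’s claim that it identifies false amplifiers by behavior rather than content, though real systems presumably use far richer features.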

But in the US election section, the report includes a coy passage stating that it cannot definitively attribute who sponsored the false amplification, even while it states that its data does not contradict the Intelligence Community’s attribution of the effort to Russian intelligence.

Facebook is not in a position to make definitive attribution to the actors sponsoring this activity. It is important to emphasize that this example case comprises only a subset of overall activities tracked and addressed by our organization during this time period; however our data does not contradict the attribution provided by the U.S. Director of National Intelligence in the report dated January 6, 2017.

That raises the possibility (one that is quite likely) that Facebook has far more specific forensic data on the .1% of accounts it deems malicious amplifiers, accounts it coyly suggests it knows to be Russian intelligence. Note, too, that the report is quite clear that this is human-driven activity, not bot-driven.

So the .1% may be a self-serving number, based on a definition drawn so narrowly as to be able to claim that Russian spies spreading propaganda make up only a tiny percentage of activity within what it portrays as the greater vibrant civic world of Facebook.

Alternately, it’s a statement of just how powerful Facebook’s network effect is, such that a very small group of Russian spies working on Facebook can have an outsized influence.


BuzzFeed Now Looking to Institutional Dems to Police a Phantom Surge of Lefty Fake News

One of my many concerns about the fake fake news scare is that it provides a way to discredit alternative voices, as the PropOrNot effort tried to discredit a number of superb outlets that don’t happen to share PropOrNot’s Neocon approach to Syria. BuzzFeed, in its seemingly unquenchable desire to generate buzz by inflating the threat of fake news, takes that a step further by turning to institutional Democratic outlets — outlets whose credibility got damaged by Hillary’s catastrophic loss — to police an alleged surge of fake news on the left.

First, consider its evidence for a surge in Democrats embracing fake news.

There are new cases daily. Suspicions about his 2020 reelection filing. Theories about the “regime’s” plan for a “coup d’état against the United States” (complete with Day After Tomorrow imagery of New York City buried in snow). Stories based on an unverified Twitter account offering supposed “secrets” from “rogue” White House staffers (followed by more than 650,000 people). Even theories about the Twitter account (“Russian disinformation”).

Since the election, the debunking website Snopes has monitored a growing list of fake news articles aimed at liberals, shooting down stories about a new law to charge protesters with terrorism, a plan to turn the USS Enterprise into a floating casino, and a claim that Vice President Mike Pence put himself through gay conversion therapy.

[snip]

Panicky liberal memes have cascaded across the internet in recent weeks, like an Instagram post regarding Steve Bannon’s powers on the National Security Council shared by a celebrity stylist and actress. Some trolls have even found success making fake news specifically aimed at tricking conservatives.

Let’s take the purported “fake news” stories BuzzFeed bases its argument on, one by one:

  • A debunking of a Twitter thread (not a finished news piece) and the conclusions it drew from the discovery that Trump, very unusually for a President, filed for reelection immediately after inauguration. There’s no debunking that Trump filed his candidacy, nor that it is unusual, nor, even, that Trump is fundraising off it. That’s not fake news. It’s an attempt to figure out why Trump is doing something unusual, with a fact-checking process happening in the Twitter discussion.
  • An admittedly overblown Medium post about some of the shady things Trump has done, as well as the much rumored claim that the reported sale of 19% of Rosneft confirms the Trump dossier claim that Carter Page would get part of Rosneft if he could arrange the lifting of US sanctions on Russia. The story’s treatment — and especially its use of the word “coup” — is silly, but the underlying question of whether Trump will instruct agencies to ignore the law, as already happened in limited form at Dulles over the first weekend of the Muslim ban, as well as the question of how Trump intends to target people of color, is a real one.
  • A story basically talking about the formation of the RoguePotusStaff Twitter account that notes prominently that “there’s no way to verify the authenticity of the newly minted Twitter channel.” BuzzFeed provided no evidence this was being preferentially shared by people on the left.
  • A Twitter thread speculating, based off linguistic analysis, that the RoguePotusStaff account might be Russian disinformation. Again, BuzzFeed made no claims about who was responding to this thread.
  • A debunking of a claim posted in November on a conservative fake news site claiming that protestors would get charged with terrorism.
  • A “debunking” of a satirical story from November posted in the Duffel Blog claiming Trump was going to repurpose an aircraft carrier.
  • A debunking of a fake news story from November claiming that Mike Pence had put himself through gay conversion therapy, which notes Pence did, indeed, push gay conversion therapy.
  • A liberal trolling effort aimed at conservatives, started in December, claiming that Trump had removed symbols of Islam from the White House.
  • An Instagram post that (BuzzFeed snottily notes) got shared by an actress and a stylist reporting the true fact that Bannon had been added to the National Security Council and noting the arguably true fact that the NSC reviews the kill list including the possibility of targeting Americans (technically, the targeted killing review team installed by Obama is not coincident with the NSC, but it does overlap significantly, and Anwar al-Awlaki was targeted by that process).

Most of these things are not news! Most are not pretending to be news! The only thing included among BuzzFeed’s “proof” that lefties are resorting to fake news that would support that claim is the Mike Pence story. And to get there, BuzzFeed has to pretend that the Duffel Blog is not explicitly satire, that multiple cases of conservative fake news are lefty fake news, that well-considered discussions on Twitter are fake news, and that we all have to stop following RoguePotusStaff because we don’t know whether its writers are really Rogue POTUS staffers or not.

It’s a shoddy series of claims that BuzzFeed should be embarrassed about making. Effectively, it is calling discussion and satire — including correction — fake news.

To BuzzFeed’s credit, after months of misstating what a poll it conducted revealed — BuzzFeed had been claiming that 75% of people believe fake news, when in reality the poll showed that 75% of those who recall fake news believe it — BuzzFeed finally got that, at least, correct. Bravo BuzzFeed!

But other than that, they’ve got almost nothing here.

Believe it or not, that’s not the most offensive part of this story. Having invented a lefty fake news problem out of satire and Twitter discussions, BuzzFeed then decided it was important what official Democratic sources think about it. While one Bernie source said it was best to ignore these things (another said it was a real problem), BuzzFeed framed other responses in terms of left protests of elected officials.

Democratic operatives and staffers at left-leaning media outlets predict that viral anti-Trump conspiracy theories will ultimately distract from real reporting about the administration, undermining legitimate causes for outrage on the left over what the administration is actually doing.

Still, for now, it’s a conversation that exists almost entirely outside the political class itself. Elected officials are not hawking phony stories as true, like Trump’s calls to investigate widespread voter fraud during the election. But that remove poses its own problems for leaders with no obvious way to dismantle widely shared false stories.

“It exists on the left and that’s a problem because it misinforms people,” said Judd Legum, editor in chief of progressive news site ThinkProgress. “That’s harmful in other ways because the time you’re spending talking about that, you could spend talking about other stuff.”

“It contributes to a broader environment of distrust, and it sort of accelerates the post-factual nature of our times,” said Teddy Goff, co-founder of Precision Strategies and a former senior aide to Barack Obama and Hillary Clinton. “Fake news is pretty damaging no matter who it benefits politically. No one on the left should think we ought to be replicating the fake news tactics on the right.”

[snip]

The online energy also raises questions about the party’s relationship with its base. In recent weeks, progressives have pressured lawmakers to adopt a tougher stance toward Trump and join ranks with the millions of protesters who marched over inauguration weekend.

The two top-ranking Democrats in Washington, Chuck Schumer in the Senate and Nancy Pelosi in the House, have both signaled an openness to working on legislation with Trump. Last week, protests formed outside Schumer’s home in Brooklyn. And among progressive activists online, Pelosi was met with vehement push-back after saying the party has a “responsibility to the American people to find our common ground.”

“Elected Democrats are stuck struggling to keep ahead of the anger that the base is feeling right now,” said [Jim] Manley, the former Reid adviser. “It’s very palpable.”

First, BuzzFeed is wrong in saying elected officials are not hawking phony stories as true. One reason the claim that Wikileaks doctored Democratic emails got so much traction is that Dems repeatedly made that claim (and as I’ve noted, Hillary quickly escalated the Alfa Bank story that most media outlets rejected as problematic).

Worse, BuzzFeed treats Democratic operatives and staffers as somehow anointed to decide what count as “legitimate causes for outrage on the left over what the administration is actually doing.” It further suggests there’s a connection between people protesting elected leaders and fake news.

Finally, BuzzFeed shows absolutely no self-awareness about the people it turns to and the stories they’ve pitched. Consider: Manley is in the very immediate vicinity of the people who got the WaPo to push the claim that the CIA had decided Russia hacked the DNC in order to get Trump elected, a conclusion that — we’ve subsequently learned — is the single one an agency in the IC (in this case, the NSA) expressed less confidence in. Moreover, we know that Harry Reid spent months trying to get the FBI to reveal details included in the Trump dossier that no one has been able to confirm. And when the dossier was released, Judd Legum magnified it himself, in much the same way the Medium post did the Rosneft claim.

Oh, and as a reminder: BuzzFeed was the entity that decided it was a good idea to publish an unverified intelligence dossier in the first place!

I mean, if the institutional Dems that BuzzFeed has deemed the arbiters of what is “legitimate” to talk about think the unproven Russian dossier counts, then BuzzFeed has even less of a basis for its claim about fake news.

Nevertheless, it thought it was a good idea to assign two journalists to make thinly substantiated claims about a lefty fake news problem, claims it then used to police whether lefty protestors are doing the right thing.

The Three Most Believed Fake News Stories of the Election (Tested by Stanford) Favored Hillary

In a piece repeating erroneous BuzzFeed reporting, the Atlantic expresses concern that the left is now sharing fake news stories just like the right shared them during the election.

If progressives are looking to be shocked, terrified, or incensed, they have plenty of options. Yet in the past two weeks, many have turned to a different avenue: They have shared “fake news,” online stories that look like real journalism but are full of fables and falsehoods.

It’s a funny reversal of the situation from November. In the weeks after the election, the press chastised conservative Facebook users for sharing stories that had nothing to do with reality. Hundreds of thousands of people shared stories asserting incorrectly that President Obama had banned the pledge of allegiance in public schools, that Pope Francis had endorsed Donald Trump, and that Trump had dispatched his personal plane to save 200 starving marines.

The phenomenon seemed to confirm theorists’ worst fears about the internet. Given the choice, democratic citizens will not seek out news that challenges their beliefs; instead, they will opt for content that confirms their suspicions. A BuzzFeed News investigation found that more people shared these fake stories than shared real news in the three months before the election. A follow-up survey suggested that most Americans believed fake news after seeing it on Facebook. When held to the laissez faire editorial standards of Facebook, the market of ideas fails.

As I laid out, BuzzFeed’s claim that most Americans believe fake news was not what BuzzFeed’s poll actually showed; rather, it showed that those who remember fake stories believe them, but that works out to be a small fraction of the people who see the story. And this piece is one of many that points out some methodological problems with BuzzFeed’s count of fake news sharing.

The Atlantic then goes on to cite stuff (like the @AltNatParSer and @RoguePOTUSStaff accounts) that is not verified but might well be true, and in any case critiques it as the left’s new habit of fake news.

All that said, the Atlantic is right that the left can be sucked in by not-true news — but that was true during the election, too. Consider this Stanford study that, generally, found that fake news wasn’t as impactful as often claimed.

We estimate that in order for fake news to have changed the election result, the average fake story would need to have f ≈ 0.0073, making it about as persuasive as 36 television campaign ads.
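To unpack the arithmetic in that comparison, using only the two numbers the quote itself provides: if one fake-story exposure would need a persuasion rate of f ≈ 0.0073 to swing the election, and that is equivalent to 36 television ads, then each ad persuades roughly 0.02% of those exposed.

```python
# Arithmetic implied by the Stanford study's comparison; only the two quoted
# numbers are used, everything else follows from division.
f_needed = 0.0073      # persuasion rate one fake story would need (per the quote)
tv_ads_equiv = 36      # the quoted television-ad equivalence
per_ad = f_needed / tv_ads_equiv
print(f"{per_ad:.6f}")  # about 0.0002, i.e. roughly 0.02% persuaded per ad
```

The point of the study’s framing is that actual fake-news exposure was far below this threshold, so the average story would have had to be implausibly persuasive to change the outcome.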

Buried deep inside the story is a detail one or two people have noted, but not mentioned prominently. Among the fake news stories studied by the authors (which were limited to stories debunked at places like Snopes, which is a significant limit to the study), two stories favorable to Hillary were the most believed.

Blue here is the percentage of the US adult population that believed a story and red is those “not sure.” Whether you aggregate those two categories or take only those who affirmatively say they believed something, this story — claiming Congressman Jeff Denham helped broker Trump’s deal for the Trump Hotel in DC — and this story — repeating Kurt Eichenwald’s claim that he had proof WikiLeaks doctored emails — led all the fake stories Stanford tested, with close to 30% definitely believing both (see my post on that story). This story claiming Clinton paid Beyonce for a campaign appearance was the most-believed anti-Hillary story, coming after a third Hillary-friendly story claiming Trump was going to deport Lin-Manuel Miranda (note, as also shown in other studies, the fake news stories weren’t recalled or believed at the same rates as the true ones, though in the aggregate, the Denham story rivaled “small true” stories).

Note, the Stanford study did not test this story, which also claimed Wikileaks had doctored emails. It appeared on the same pro-Clinton site three days earlier and was itself based off a fake news story created by a Hillary supporter (with some spooky ties) and magnified by Malcolm Nance and Joy Reid. Those two stories likely reinforced each other.

I’m interested in both of these stories — in part because the reality about Trump’s corruption and his ties to Russia is bad enough without Democratic operatives inventing stories about it. But obviously, I’m particularly interested in the latter, in part because, even in spite of the real evidence implicating Russia in the hack of the DNC, Democrats tend to believe anything involving Russia without evidence.

That’s ironic, given that the risk of fake news is supposed to stem from Putin poisoning our airwaves.

Update: I’ve added “three” to the title because a number of people said it would make it more clear. Thanks to those who suggested it.
