
Facebook’s Global Data: A Parallel Intelligence Source Rivaling NSA

In April, Facebook released a laudable (if incredible) report on Russian influence operations on Facebook during the election; the report found that just .1% of what got shared in election-related activity got shared by malicious state-backed actors.

Facebook conducted research into overall civic engagement during this time on the platform, and determined that the reach of the content shared by false amplifiers was marginal compared to the overall volume of civic content shared during the US election.

[snip]

The reach of the content spread by these accounts was less than one-tenth of a percent of the total reach of civic content on Facebook.

Facebook also rather coyly confirmed they had reached the same conclusion the Intelligence Community had about Russia’s role in tampering with the election.

Facebook is not in a position to make definitive attribution to the actors sponsoring this activity. It is important to emphasize that this example case comprises only a subset of overall activities tracked and addressed by our organization during this time period; however our data does not contradict the attribution provided by the U.S. Director of National Intelligence in the report dated January 6, 2017.

Skeptics haven’t paid much attention to this coy passage (and Facebook certainly never called attention to it), but it means a second entity with access to global data — like the NSA, but private — believes Russia was behind the election tampering.

Yesterday, Facebook came out with another report, quantifying how many ads came from entities that might be Russian information operations. They searched for two different things. First, ads from obviously fake accounts. They found that 470 inauthentic accounts had paid for roughly 3,000 ads costing $100,000. But most of those didn’t explicitly discuss a presidential candidate, and more of the geo-targeted ones appeared in 2015 than in 2016.

  • The vast majority of ads run by these accounts didn’t specifically reference the US presidential election, voting or a particular candidate.
  • Rather, the ads and accounts appeared to focus on amplifying divisive social and political messages across the ideological spectrum — touching on topics from LGBT matters to race issues to immigration to gun rights.
  • About one-quarter of these ads were geographically targeted, and of those, more ran in 2015 than 2016.
  • The behavior displayed by these accounts to amplify divisive messages was consistent with the techniques mentioned in the white paper we released in April about information operations.

Elsewhere Facebook has said some or all of these are associated with a troll farm, the Internet Research Agency, in Saint Petersburg.

The Intelligence Community Report on the Russia hacks specifically mentioned the Internet Research Agency — suggesting it probably had close ties to Putin. But it also suggested there was significant advertising that was explicitly pro-Trump, which may be inconsistent with Facebook’s observation that the majority of these ads pushed policy themes rather than candidates.

Russia used trolls as well as RT as part of its influence efforts to denigrate Secretary Clinton. This effort amplified stories on scandals about Secretary Clinton and the role of WikiLeaks in the election campaign.

  • The likely financier of the so-called Internet Research Agency of professional trolls located in Saint Petersburg is a close Putin ally with ties to Russian intelligence.
  • A journalist who is a leading expert on the Internet Research Agency claimed that some social media accounts that appear to be tied to Russia’s professional trolls—because they previously were devoted to supporting Russian actions in Ukraine—started to advocate for President-elect Trump as early as December 2015.

The other thing Facebook did was measure how many ads might have originated in Russia without mobilizing an obviously fake account. That added another $50,000 in advertising to the pot of potential Russian disinformation.

In this latest review, we also looked for ads that might have originated in Russia — even those with very weak signals of a connection and not associated with any known organized effort. This was a broad search, including, for instance, ads bought from accounts with US IP addresses but with the language set to Russian — even though they didn’t necessarily violate any policy or law. In this part of our review, we found approximately $50,000 in potentially politically related ad spending on roughly 2,200 ads.

Still, that’s not all that much — it may explain why Facebook found only .1% of activity was organized disinformation.
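Taking Facebook’s figures at face value, the two searches combined amount to fairly modest spending. A rough sketch of the arithmetic (the totals are Facebook’s; the per-ad average is my own back-of-the-envelope calculation):

```python
# Rough arithmetic on the figures Facebook reported; it did not publish per-ad pricing.
fake_account_spend, fake_account_ads = 100_000, 3_000   # ads from obviously inauthentic accounts
weak_signal_spend, weak_signal_ads = 50_000, 2_200      # ads with only weak signals of a Russian connection
total_spend = fake_account_spend + weak_signal_spend
total_ads = fake_account_ads + weak_signal_ads
print(f"${total_spend:,} across ~{total_ads:,} ads (~${total_spend / total_ads:.0f} per ad)")
# -> $150,000 across ~5,200 ads (~$29 per ad)
```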

In its report, Facebook revealed that it had shared this information with those investigating the election.

We have shared our findings with US authorities investigating these issues, and we will continue to work with them as necessary.

Subsequent reporting has made clear that that includes Congressional Committees and Robert Mueller’s team. I’m curious whether Mueller made the request (whether using legal process or not) and Facebook then took it upon itself to share the topline data publicly. If so, we should be asking where the results of similar requests to Twitter and Google are.

I’m interested in this data — though I agree both with those who argue this kind of advertising needs to be brought under campaign finance regulations and with those who hope independent scholars can review and vet Facebook’s methodology. But I’m just as interested that we’re getting it at all.

Facebook isn’t running around bragging about this; if too many people grokked it, more and more might stop using Facebook. But what these two reports from Facebook both reflect is the global collection of intelligence. That intelligence is usually used to sell highly targeted advertisements. But in the wake of Russia’s tampering with last year’s election, Facebook has had the ability to take a global view of what occurred. Arguably, it has shared more of that intelligence than the IC has, and on the specific question of whether the Internet Research Agency focused more on Trump or on exacerbating racial divisions in the country, it has presented somewhat different results than the IC has.

So in addition to observing (and treating just as skeptically as we would data from the NSA) the data Facebook reports, we would do well to recognize that we’re getting reports from a parallel global intelligence collector.


How the “Fake News” Panic Fed Breitbart

In just about every piece I wrote on “fake news” in the last year, I argued that the most influential piece of fake news of the campaign came not from the Russians, but instead from Fox News, in the form of Bret Baier’s early November “scoop” that Hillary Clinton would soon be indicted.

I was partly wrong about that claim. But substantially correct.

That’s the conclusion drawn by a report released by Harvard’s Berkman Klein Center last week. The report showed that the key dynamic behind Trump’s win came from the asymmetric polarization of our media sphere, embodied most dramatically in the way that Breitbart not only created a bubble for conservatives, but affected the overall agenda of the press, particularly with immigration (a conclusion that is all the more important given Steve Bannon’s return to Breitbart just as white supremacist protests gather in intensity).

So I was correct that the most important fake news was coming from right wing sites. I just pointed to Fox News, instead of the increasingly dominant Breitbart (notably, while Baier retracted his indictment claim, Breitbart didn’t stop magnifying it).

Here’s what the report had to say about the “fake news” that many liberals instead focused on.

Our data suggest that the “fake news” framing of what happened in the 2016 campaign, which received much post-election attention, is a distraction. Moreover, it appears to reinforce and buy into a major theme of the Trump campaign: that news cannot be trusted. The wave of attention to fake news is grounded in a real phenomenon, but at least in the 2016 election it seems to have played a relatively small role in the overall scheme of things. We do indeed find stories in our data set that come from sites, like Ending the Fed, intended as political clickbait to make a profit from Facebook, often with no real interest in the political outcome. But while individual stories may have succeeded in getting attention, these stories are usually of tertiary significance. In a scan of the 100 most shared stories in our Twitter and Facebook sets, the most widely shared fake news stories (in this sense of profit-driven Facebook clickbait) were ranked 66th and 55th by Twitter and Facebook shares, respectively, and on both Twitter and Facebook only two of the top 100 stories were from such sites. Out of two million stories, that may seem significant, but in the scheme of an election, it seems more likely to have yielded returns to its propagators than to have actually swayed opinions in significant measure. When we look at our data week by week, prominent fake news stories of this “Macedonian” type are rare and were almost never among the most significant 10 or 20 stories of the week, much less the election as a whole. Disinformation and propaganda from dedicated partisan sites on both sides of the political divide played a much greater role in the election. It was more rampant, though, on the right than on the left, as it took root in the dominant partisan media on the right, including Breitbart, Daily Caller, and Fox News. Moreover, the most successful examples of these political clickbait stories are enmeshed in a network of sites that have already created, circulated, and validated a set of narrative lines and tropes familiar within their network. The clickbait sites merely repackage and retransmit these already widely shared stories. We document this dynamic for one of the most successful such political clickbait stories, published by Ending the Fed, in the last chapter of this report, and we put it in the context of the much more important role played by Breitbart, Fox News, and the Daily Caller in reorienting the public conversation after the Democratic convention around the asserted improprieties associated with the Clinton Foundation.

Our observations suggest that fixing the American public sphere may be much harder than we would like. One feature of the more widely circulated explanations of our “post-truth” moment—fake news sites seeking Facebook advertising, Russia engaging in a propaganda war, or information overload leading confused voters to fail to distinguish facts from false or misleading reporting—is that these are clearly inconsistent with democratic values, and the need for interventions to respond to them is more or less indisputable. If profit-driven fake news is the problem, solutions like urging Facebook or Google to use technical mechanisms to identify fake news sites and silence them by denying them advertising revenue or downgrading the visibility of their sites seem, on their face, not to conflict with any democratic values. Similarly, if a foreign power is seeking to influence our democratic process by propagandistic means, then having the intelligence community determine how this is being done and stop it is normatively unproblematic. If readers are simply confused, then developing tools that will feed them fact-checking metrics while they select and read stories might help. These approaches may contribute to solving the disorientation in the public sphere, but our observations suggest that they will be working on the margins of the core challenge.

As the report notes, it would be convenient if our news had been poisoned chiefly by Russia or Macedonian teenagers, because that would be far easier to deal with than the fact that First Amendment-protected free speech skewed our political debate badly enough to elect Trump. But addressing Russian propaganda or Facebook algorithms will still leave the underlying structure of a dangerously powerful and unhinged right wing noise machine intact.

Which makes “fake news,” like potential poll tampering even as state after state suppresses the vote of likely Democratic voters, another area where screaming about Russian influence distracts from the more proximate threat.

Or perhaps the focus on “fake news” is even worse. As the Berkman report notes, when rational observers spend inordinate time suggesting that fake news dominated the election when in fact sensational far right news did, it only normalizes the far right (and Trump) claims that the news is fake. Not to mention the way labeling further left, but totally legitimate, outlets as fake news normalized coverage even further to the right than the asymmetric environment already was.

Fake news is a problem — as is the increasing collapse in confidence in US ideology generally. But it’s not a bigger problem than Breitbart. And as Bannon returns to his natural lair, the left needs to turn its attention to the far harder, but far more important, challenge of Breitbart.


Yah, These ARE The Droids We Have Been Looking For And Fearing

I did not always write about it so much here, but I got fairly deep into “Deflategate” analysis and law when it was going on. Because it was fascinating. I met so many lawyers, professors and others, it was bonkers. Have remained friends with many, if not most, of them. One is Alexandra J. Roberts, which is kind of funny because she was not necessarily one of the major players. Yet, she is one of the enduring benefits I have come to love from the bigger picture.

Today, Ms Roberts advises of some R2D2-like cop robots. I “might” have engaged in some frivolity in response. But, really, it is a pretty notable moment.

Police droids on the ground? Police drones in the air? You think Kyllo will protect you from a Supreme Court with Neil Gorsuch on it? Hell, you think Merrick Garland would not have done what he has done all of his life and sign off on ever greater law enforcement collection and oppression? Not a chance in hell. Neither Gorsuch, nor Garland, would ever have penned what Scalia did in Kyllo:

It would be foolish to contend that the degree of privacy secured to citizens by the Fourth Amendment has been entirely unaffected by the advance of technology. For example, as the cases discussed above make clear, the technology enabling human flight has exposed to public view (and hence, we have said, to official observation) uncovered portions of the house and its curtilage that once were private. See Ciraolo, supra, at 215. The question we confront today is what limits there are upon this power of technology to shrink the realm of guaranteed privacy.

So, with no further ado, here, via the Boston Globe, is the deal:

There’s a new security officer in town. But this one runs on batteries, not Dunkin’ Donuts.

Next time you’re visiting the Prudential Center, don’t be alarmed if you bump into a large, rolling robot as it travels the corridors where shoppers pop in and out of stores.

No, it’s not an oversized Roomba on the loose. It’s the “Knightscope K5,” an egg-shaped autonomous machine equipped with real-time monitoring and detection technology that allows it to keep tabs on what’s happening nearby.

Marvelous! R2D2 is making us all safer!

Nope. Sorry. Safe streets, broken windows, and “cop on the beat” policing cannot be accomplished by a tin can.

Just Say No to this idiotic and lazy policing bullshit. The next thing you know, the tin can will be probable cause. And Neil Gorsuch will help further that craven “good faith” reliance opinion in a heartbeat.

Parting Shot: Holy hell, we have our first reference to hate crimes for anti-cop robot violence! See here.

Frankly, having been in the field for three decades, I think the thought that cops are proper “hate crime” victims is absurd. Honestly, all “hate crimes” laws are completely absurd, as they create different classes of human crime victims, some deemed more valuable and some less. This may sound lovely to you in the safety of your perch, where you want to lash out at the evil others.

But if the “all men are created equal” language in the Declaration of Independence is to be given the meaning that so many demagogues over American history assign to it, then the “hate crimes” segregation and preference of one set of human victims over others is totally unfathomable bullshit.

That is just as to humans. Let’s not even go to the “victim’s rights” of squeaky ass little R2D2 tin cans.


What Fake French News Looks Like (to a British Consulting Company)

Along with reports that APT 28 targeted Emmanuel Macron (reports that don’t prominently note that Macron believes his campaign withstood the phishing attempts), the post-mortem on the first round of the French election has also focused on the fake news that supported Marine Le Pen.

As a result, this study — the headline from which claimed 25% of links shared during the French election pointed to fake news — has gotten a lot of attention.

The study, completed by a British consulting firm (though the lead on the study is a former French journalist) and released in full only in English, is as interesting for its assumptions as anything else.

Engagement studies aren’t clear about what they’re showing, but this one is aware of that

Before I explain why, let me stipulate that I accept the report’s conclusion that a ton of Le Pen supporters (though it doesn’t approach it from that direction) relied on fake news and/or Russian sources. The methodology appears to suffer from the same problem some of BuzzFeed’s reporting on fake news does, in that it doesn’t measure the value of shared news, but at least it admits that methodological problem (and promises to discuss it at more length in a follow-up).

Sharing is the overt act of taking an article or video or image that one sees in social media and, literally, sharing it digitally with one’s own followers or even into the public domain. Sharing therefore implies an elevated level of interest: people share articles that they feel others should see. While there are tools that help us track and quantify how many articles are shared, they cannot explain the sharer’s intention. It seems plausible, particularly in a political context, that sharing implies endorsement, yet even this is problematic as sharing can often imply shock and disagreement. In the third instalment [sic] of this study, Bakamo will explore in depth the extent to which people agree or disagree with what they share, but for this report (and the second, updated version), the simple act of sharing—whatever the intention—is nonetheless highly relevant. It provides a way of gauging activity and engagement.

[snip]

These are the “likes” or “shares” in Facebook, or “favourites” or “retweets” in Twitter. While these can be counted, we do not know whether the person has actually clicked through to read the content being shared before they like or retweet. This information is only available to the account owner. One of the questions that is often raised about social media is whether users do indeed read the article or respond simply to the headlines that appear in their newsfeed. We are unable to comment on this.

In real word terms, engagement can be two things. It can be agreement—whether reflexive or reflective—with the content shared. It can also, however, be disagreement: Facebook’s nuanced “like” system (in which anger is a valid form of engagement) or Twitter’s citations that enable a user to comment on the link while sharing it both permit these negative expressions.

The study is perhaps most interesting for what it shows about the differing sharing habits from different parts of its media economy, with no overlap between those who share what it deems “traditional” media and those who share what I’d deem conspiracist media. That finding, more than almost any other one, suggests what might be needed to engage in a dialogue across these clusters. Ultimately, what the study shows is increased media polarization not on partisan grounds, but on response to globalization.

Russian media looks very important when you only track Russian media

As I noted, one of the headlines that has been taken away from this study is that Le Pen voters shared a lot of Russian news sources — and I don’t contest that.

But there are two interesting details about how that finding came to be so important to this study.

First, the study defines everything in contradistinction from what it calls “traditional” media.

There are broad five sections of the Media Map. They are defined by their editorial distance from traditional media narratives. The less accepting a source is of traditional media narratives, the farther away it is (spatially) on the Map.

In the section defining traditional media, the study focuses on establishment and commercialism (including advertising), even while asserting — but not proving — that all traditional media “adher[e] to journalistic standards” (which is perhaps a fairer assumption still in France than in the US or UK, but nevertheless it is an assumption).

This section of the Media Map is populated by media sources that belong to the established commercial and conventional media landscape, such as websites of national and regional newspapers, TV and radio stations, online portals adhering to journalistic standards, and news aggregators.

Having done this, the study insists that a structure that privileges “traditional” media, without proving that it merits that privilege, is not meant to “pass moral judgement or to define what is ‘good’ or ‘evil’.”

Most interesting of all, the study includes — without detail or interrogation — international media sources “exhibiting these same characteristics” in its traditional media category.

These are principally France-based sources; however, French-speaking international media sources exhibiting these same characteristics were also placed into the Traditional Media section.

But, having defined some international news sources as “traditional,” the study then uses Russian influence as a measure of whether a media cluster was non-traditional.

The analysis only identified foreign influence connected with Russia. No other foreign source of influence was detected.

It did this — using Russian influence as a marker of non-traditional status — even though the study showed such influence primarily on the hard right and among conspiracists.

Syria as a measure of journalistic standards

Among the other kinds of content that this study measures, it repeatedly describes how those outlets it has clustered as non-traditional (primarily those it calls reframing outlets) deal with Syria.

It asserts that those who treat Bashar al-Assad as a “protagonist” in the Syrian civil war are influenced by Russian sources.

A dominant theme reflected by sources where Russian influence is detected is the war in Syria, the various actors involved, and the refugee crisis. In these articles, Bachar Assad becomes the protagonist, a perspective opposite to that which is reported by traditional media. Articles touching on refugees and migrants tend to reinforce anti-Islam and anti-migrant positions.

The anti-imperialists focus on Trump’s ineffectual missile strike on Syria which — the study concludes — must derive from Russian influence.

Trump’s “téléréalité” attack on Syria is a more recent example of content in this cluster. This is not surprising, however, as Russian influence is detectable on a number of sites in this cluster.

It defines conspiracists as such because they say the US supports terrorist groups (and also because they portray Assad as trustworthy).

Syria is an important theme in this cluster. Per these sources, and contrary to reports in traditional media, the Western powers are supporting the terrorist, while Bashar Assad is trustworthy and tolerant leader, as witness reports prove.

The pro-Islam non-traditional (!!) cluster is defined as such not because of its distance from “traditional” news (the study finds it generally isn’t distant) but in part because its outlets suggest the US has been secretly supporting Assad.

American imperialism is another dominant theme in this cluster, driven by the belief that the US has been secretly supporting the Assad regime.

You can see, now, the problem here. It is a demonstrable fact that America’s covert funding did, for some time, support rebel groups that worked alongside Al Qaeda affiliates (and predictably and with the involvement of America’s Sunni allies saw supplies funneled to al Qaeda or ISIS as a result). It is also the case that both historically (when the US was rendering Maher Arar to Syria to be tortured) and as an interim measure to forestall the complete collapse of Syria under Obama, the US’ opposition to Assad has been half-hearted, which may not be support but certainly stopped short of condemnation for his atrocities.

And while we’re not supposed to talk about these things — and don’t, in part, because they are not an openly acknowledged aspect of our covert operations — they are a better representation of the complex clusterfuck of American intervention in Syria than one might get — say — from the French edition of the BBC. They are, of course, similar to the American “traditional” news insistence that Obama has done “nothing” in Syria, long after Chuck Hagel confirmed our “covert” operations there. Both because the reality is too complex to discuss easily, and because there is a “tradition” of not reporting on even the most obvious covert actions if done by the US, Syria is a subject on which almost no one is providing an adequately complex picture of what is going on.

On both sides of the Atlantic, the measure of truth on Syria has become the simplified narrative you’re supposed to believe, not what the complexity of the facts show. And that’s before you get to where we are now, pretending to be allied with both Turkey and the Kurds they’re shooting at.

The shock at the breakdown of the left-right distinction

What’s most fascinating about the study, however, is the seeming distress with which it observes that “reframing” media — outlets it claims are reinterpreting the real news — don’t break down into a neat left-right axis.

Media sources in the Reframe section share the motivation to counter the Traditional Media narrative. The media sources see themselves as part of a struggle to “reinform” readers of the real contexts and meanings hidden from them when they are informed by Traditional Media sources. This section breaks with the traditions of journalism, expresses radical opinions, and refers to both traditional and alternative sources to craft a disruptive narrative. While there is still a left-right distinction in this section, a new narrative frame emerges where content is positioned as being for or against globalisation and not in left-right terms. Indeed, the further away media sources are from the Traditional section, the less a conventional left-right attribution is possible.

[snip]

The other narrative frame detectable through content analysis is the more recent development referred to in this study as the global versus local narrative frame. Content published in this narrative frame is positioned as being for or against globalisation and not in left-right terms. Indeed, the further away media sources are from the Traditional section, the less a conventional left-right attribution is possible. While there are media sources in the Reframe section on both on the hard right and hard left sides, they converge in the global versus local narrative frame. They take concepts from both left and right, but reframe them in a global-local context. One can find left or right leanings of media sources located in the middle of Reframe section, but this mainly relates to attitudes about Islam and migrants. Otherwise, left and right leaning media sources in the Reframe section share one common enemy: globalisation and the liberal economics that is associated with it.

Now, I think some of the study’s clustering is artificial to create this split (for example, in the way it treats environmentalism as an Extend rather than a Reframe cluster).

But even more, I find the confusion fascinating. Particularly in the absence of any indication — as the study provided for Syria coverage — of what is considered the “true” or “false” news about globalization. Opposition to globalization, as such, is the marker, not any measure of whether an outlet is reporting in a factual manner on the status and impact of globalization and its success at delivering its promised goals.

And if the patterns of sharing in the study are in fact accurate, what the study actually shows is that the ideologies of globalization and nationalism have become completely incoherent to each other. And the purveyors of globalization as the “traditional” view do not, here, consider the status of globalization (on either side) as a matter of truth or falseness, as a measure of whether a media outlet taking a side for or against globalization adheres to the truth.

I’ve written a fair amount about the failure of American ideology — and about the confusion among the priests of that ideology as it no longer commands unquestioning sway.

This study on fake news in France, completed by a British consulting company and released in English, is very much a symptom of that process.

But the Cold War is outdated!

Which brings me to the funniest part of the paper. As noted above, the paper claims that anti-imperialists are influenced by Russian sources, an influence it invokes to explain their criticism of Trump’s missile strike on Syria. But it’s actually talking about what it calls a rump Communist Cold War ideology.

This cluster contains the remains of the traditional Communist groupings. They publish articles on the imperialist system. They concentrate on foreign politics and ex-Third World countries. They frame their worldview through a Cold War logic: they see the West (mainly the US) versus the East, embodied by Russia. Russia is idolised, hence these sites have a visible anti-American and anti-Zionist stance. The antiquated nature of a Cold War frame given the geo-political transformations of the last 25 years means these sources are often forced to borrow ideas from the extreme right.

Whatever the merit of its analysis here, consider what it means for a study whose assumptions treat Russian influence as a special kind of international influence, even while reflecting not at all on whether the globalization/nationalism polarization it finds so striking can be measured in terms of fact claims.

The new Cold War seems unaware that the old Cold War isn’t so out of fashion after all.


Facebook Claims Just .1% of Election Related Sharing Was Information Operations

In a fascinating report on the use of the social media platform for Information Operations released yesterday, Facebook makes a startling claim. Less than .1% of what got shared during the election was shared by accounts set up to engage in malicious propaganda.

Concurrently, a separate set of malicious actors engaged in false amplification using inauthentic Facebook accounts to push narratives and themes that reinforced or expanded on some of the topics exposed from stolen data. Facebook conducted research into overall civic engagement during this time on the platform, and determined that the reach of the content shared by false amplifiers was marginal compared to the overall volume of civic content shared during the US election.12

In short, while we acknowledge the ongoing challenge of monitoring and guarding against information operations, the reach of known operations during the US election of 2016 was statistically very small compared to overall engagement on political issues.

12 To estimate magnitude, we compiled a cross functional team of engineers, analysts, and data scientists to examine posts that were classified as related to civic engagement between September and December 2016. We compared that data with data derived from the behavior of accounts we believe to be related to Information Operations. The reach of the content spread by these accounts was less than one-tenth of a percent of the total reach of civic content on Facebook.

That may seem like a totally bogus number — and it may well be! But to assess it, understand what they’re measuring.

That’s one of the laudable aspects of the report: it tries to break down the various parts of the process, distinguishing things like “disinformation” — inaccurate information spread intentionally — from “misinformation” — inaccurate information spread without malicious intent.

Information (or Influence) Operations – Actions taken by governments or organized non-state actors to distort domestic or foreign political sentiment, most frequently to achieve a strategic and/or geopolitical outcome. These operations can use a combination of methods, such as false news, disinformation, or networks of fake accounts (false amplifiers) aimed at manipulating public opinion.

False News – News articles that purport to be factual, but which contain intentional misstatements of fact with the intention to arouse passions, attract viewership, or deceive.

False Amplifiers – Coordinated activity by inauthentic accounts with the intent of manipulating political discussion (e.g., by discouraging specific parties from participating in discussion, or amplifying sensationalistic voices over others).

Disinformation – Inaccurate or manipulated information/content that is spread intentionally. This can include false news, or it can involve more subtle methods, such as false flag operations, feeding inaccurate quotes or stories to innocent intermediaries, or knowingly amplifying biased or misleading information. Disinformation is distinct from misinformation, which is the inadvertent or unintentional spread of inaccurate information without malicious intent.

Having thus defined those terms, Facebook distinguishes further between false news sent with malicious intent from that sent for other purposes — such as to make money. In this passage, Facebook also acknowledges the important detail for it: false news doesn’t work without amplification.

Intent: The purveyors of false news can be motivated by financial incentives, individual political motivations, attracting clicks, or all the above. False news can be shared with or without malicious intent. Information operations, however, are primarily motivated by political objectives and not financial benefit.

Medium: False news is primarily a phenomenon related to online news stories that purport to come from legitimate outlets. Information operations, however, often involve the broader information ecosystem, including old and new media.

Amplification: On its own, false news exists in a vacuum. With deliberately coordinated amplification through social networks, however, it can transform into information operations

So the stat above — the amazingly low .1% — is just a measure of the amplification of stories by Facebook accounts created for the purpose of maliciously amplifying certain fake stories; it doesn’t count the amplification of fake stories by people who believe them or who aren’t formally engaged in an information operation. Indeed, the report notes that after an entity amplifies something falsely, “organic proliferation of the messaging and data through authentic peer groups and networks [is] inevitable.” The .1% doesn’t count Trump’s amplification of stories (or of his followers).
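To make concrete what is and isn’t in that ratio, here is a minimal sketch of the metric as footnote 12 describes it (the reach figures are made up; Facebook published only the resulting ratio):

```python
# Hypothetical sketch of the footnote-12 metric: reach of content spread by
# accounts Facebook classified as Information Operations, divided by the total
# reach of civic content (September-December 2016).
def io_reach_share(io_account_reach: float, total_civic_reach: float) -> float:
    return io_account_reach / total_civic_reach

# Note what the numerator excludes: organic resharing of the same stories by
# real users who believe them, or amplification by prominent accounts.
print(io_reach_share(1_000_000, 1_500_000_000))  # made-up numbers -> ~0.00067, i.e. under 0.1%
```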

Furthermore, the passage states it is measuring accounts that “reinforced or expanded on some of the topics exposed from stolen data,” which would seem to limit which fake stories it tracked, including things like PizzaGate (which derived in part from a Podesta email) but not the fake claim that the Pope endorsed Trump (though later on the report says it identifies false amplifiers by behavior, not by content).

The entire claim raises questions about how Facebook identifies which are the false amplifiers and which are the accounts “authentically” sharing false news. In a passage boasting of how it has already suspended 30,000 fake accounts in the context of the French election, the report includes an image that suggests part of what it does to identify the fake accounts is identifying clusters of like activity.
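Facebook doesn’t describe that method in any detail. Purely as an illustration of what finding “clusters of like activity” might involve (none of this comes from the report), here is a minimal sketch that groups accounts by behavioral similarity:

```python
# Hypothetical illustration only: Facebook has not disclosed how it detects
# coordinated accounts. This sketch groups accounts whose posting behavior
# looks nearly identical, one plausible reading of "clusters of like activity."
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

# Invented feature matrix: one row per account, columns are behavioral signals
# (posts per day, share of posts that are links, fraction of activity in one
# hour of the day, account age in days).
rng = np.random.default_rng(0)
organic = rng.normal(loc=[3, 0.4, 0.1, 900], scale=[2, 0.2, 0.05, 400], size=(500, 4))
coordinated = rng.normal(loc=[40, 0.95, 0.6, 30], scale=[2, 0.02, 0.05, 5], size=(30, 4))
features = np.vstack([organic, coordinated])

# Standardize, then cluster: accounts with near-identical behavior land in the
# same dense cluster; label -1 marks accounts that don't cluster at all.
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(StandardScaler().fit_transform(features))
for label in sorted(set(labels) - {-1}):
    print(f"cluster {label}: {np.sum(labels == label)} accounts")

# Which clusters are actually suspicious would depend on further signals
# (account age, shared content, login patterns), not on clustering alone.
```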

But in the US election section, the report includes a coy passage stating that it cannot definitively attribute who sponsored the false amplification, even while it states that its data does not contradict the Intelligence Community’s attribution of the effort to Russian intelligence.

Facebook is not in a position to make definitive attribution to the actors sponsoring this activity. It is important to emphasize that this example case comprises only a subset of overall activities tracked and addressed by our organization during this time period; however our data does not contradict the attribution provided by the U.S. Director of National Intelligence in the report dated January 6, 2017.

That presents the possibility (one that is quite likely) that Facebook has far more specific forensic data on the accounts behind that .1%, the malicious amplifiers it coyly suggests it knows to be Russian intelligence. Note, too, that the report is quite clear that this is human-driven activity, not bot-driven.

So the .1% may be a self-serving number, based on a definition drawn so narrowly as to be able to claim that Russian spies spreading propaganda make up only a tiny percentage of activity within what it portrays as the greater vibrant civic world of Facebook.

Alternately, it’s a statement of just how powerful Facebook’s network effect is, such that a very small group of Russian spies working on Facebook can have an outsized influence.

 


BuzzFeed Now Looking to Institutional Dems to Police a Phantom Surge of Lefty Fake News

One of my many concerns about the fake fake news scare is that it provides a way to discredit alternative voices, as the PropOrNot effort tried to discredit a number of superb outlets that don’t happen to share PropOrNot’s Neocon approach to Syria. BuzzFeed, in its seemingly unquenchable desire to generate buzz by inflating the threat of fake news, takes that a step further by turning to institutional Democratic outlets — outlets whose credibility got damaged by Hillary’s catastrophic loss — to police an alleged surge of fake news on the left.

First, consider its evidence for a surge in Democrats embracing fake news.

There are new cases daily. Suspicions about his 2020 reelection filing. Theories about the “regime’s” plan for a “coup d’état against the United States” (complete with Day After Tomorrow imagery of New York City buried in snow). Stories based on an unverified Twitter account offering supposed “secrets” from “rogue” White House staffers (followed by more than 650,000 people). Even theories about the Twitter account (“Russian disinformation”).

Since the election, the debunking website Snopes has monitored a growing list of fake news articles aimed at liberals, shooting down stories about a new law to charge protesters with terrorism, a plan to turn the USS Enterprise into a floating casino, and a claim that Vice President Mike Pence put himself through gay conversion therapy.

[snip]

Panicky liberal memes have cascaded across the internet in recent weeks, like an Instagram post regarding Steve Bannon’s powers on the National Security Council shared by a celebrity stylist and actress. Some trolls have even found success making fake news specifically aimed at tricking conservatives.

Let’s take the purported “fake news” stories BuzzFeed bases its argument on, one by one:

  • A debunking of a Twitter thread (not a finished news piece) drawing conclusions from the discovery that Trump, very unusually for a President, filed for reelection immediately after inauguration. There’s no debunking that Trump filed his candidacy, nor that it is unusual, nor, even, that Trump is fundraising off it. That’s not fake news. It’s an attempt to figure out why Trump is doing something unusual, with a fact-checking process happening in the Twitter discussion.
  • An admittedly overblown Medium post about some of the shady things Trump has done, as well as the much rumored claim that the reported sale of 19% of Rosneft confirms the Trump dossier claim that Carter Page would get part of Rosneft if he could arrange the lifting of US sanctions on Russia. The story’s treatment — and especially its use of the word “coup” — is silly, but the underlying question of whether Trump will instruct agencies to ignore the law, as already happened in limited form at Dulles over the first weekend of the Muslim ban, as well as the question of how Trump intends to target people of color, is a real one.
  • A story basically talking about the formation of the RoguePotusStaff Twitter account that notes prominently that “there’s no way to verify the authenticity of the newly minted Twitter channel.” BuzzFeed provided no evidence this was being preferentially shared by people on the left.
  • A Twitter thread speculating, based off linguistic analysis, that the RoguePotusStaff account might be Russian disinformation. Again, BuzzFeed made no claims about who was responding to this thread.
  • A debunking of a story posted in November on a conservative fake news site claiming that protestors would get charged with terrorism.
  • A “debunking” of a satirical story from November posted in the Duffel Blog claiming Trump was going to repurpose an aircraft carrier.
  • A debunking of a fake news story from November claiming that Mike Pence had put himself through gay conversion therapy, which notes Pence did, indeed, push gay conversion therapy.
  • A liberal trolling effort aimed at conservatives, started in December, claiming that Trump had removed symbols of Islam from the White House.
  • An Instagram post that (BuzzFeed snottily notes) got shared by an actress and a stylist, reporting the true fact that Bannon had been added to the National Security Council and noting the arguably true fact that the NSC reviews the kill list, including the possibility of targeting Americans (technically, the targeted killing review team installed by Obama is not coincident with the NSC, but it does overlap significantly, and Anwar al-Awlaki was targeted by that process).

Most of these things are not news! Most are not pretending to be news! The only item among BuzzFeed’s “proof” that lefties are resorting to fake news that would actually support that claim is the Mike Pence story. And to get there, BuzzFeed has to pretend that the Duffel Blog is not explicitly satire, that multiple cases of conservative fake news are lefty fake news, that well-considered discussions on Twitter are fake news, and that we all have to stop following RoguePotusStaff because we don’t know whether its writers are really rogue POTUS staffers or not.

It’s a shoddy series of claims that BuzzFeed should be embarrassed about making. Effectively, it is calling discussion and satire — including correction — fake news.

To BuzzFeed’s credit, after months of misstating what a poll it commissioned revealed — BuzzFeed had been claiming that 75% of people believe fake news, when in reality the poll showed that 75% of those who recall fake news believe it — BuzzFeed finally got that, at least, correct. Bravo BuzzFeed!

But other than that, they’ve got almost nothing here.

Believe it or not, that’s not the most offensive part of this story. Having invented a lefty fake news problem out of satire and Twitter discussions, BuzzFeed then decided it was important to ask what official Democratic sources think about it. While one Bernie source said it was best to ignore these things (another said it was a real problem), BuzzFeed framed other responses in terms of left protests of elected officials.

Democratic operatives and staffers at left-leaning media outlets predict that viral anti-Trump conspiracy theories will ultimately distract from real reporting about the administration, undermining legitimate causes for outrage on the left over what the administration is actually doing.

Still, for now, it’s a conversation that exists almost entirely outside the political class itself. Elected officials are not hawking phony stories as true, like Trump’s calls to investigate widespread voter fraud during the election. But that remove poses its own problems for leaders with no obvious way to dismantle widely shared false stories.

“It exists on the left and that’s a problem because it misinforms people,” said Judd Legum, editor in chief of progressive news site ThinkProgress. “That’s harmful in other ways because the time you’re spending talking about that, you could spend talking about other stuff.”

“It contributes to a broader environment of distrust, and it sort of accelerates the post-factual nature of our times,” said Teddy Goff, co-founder of Precision Strategies and a former senior aide to Barack Obama and Hillary Clinton. “Fake news is pretty damaging no matter who it benefits politically. No one on the left should think we ought to be replicating the fake news tactics on the right.”

[snip]

The online energy also raises questions about the party’s relationship with its base. In recent weeks, progressives have pressured lawmakers to adopt a tougher stance toward Trump and join ranks with the millions of protesters who marched over inauguration weekend.

The two top-ranking Democrats in Washington, Chuck Schumer in the Senate and Nancy Pelosi in the House, have both signaled an openness to working on legislation with Trump. Last week, protests formed outside Schumer’s home in Brooklyn. And among progressive activists online, Pelosi was met with vehement push-back after saying the party has a “responsibility to the American people to find our common ground.”

“Elected Democrats are stuck struggling to keep ahead of the anger that the base is feeling right now,” said [Jim] Manley, the former Reid adviser. “It’s very palpable.”

First, BuzzFeed is wrong in saying elected officials are not hawking phony stories as true. One reason the claim that Wikileaks doctored Democratic emails got so much traction is that Dems repeatedly made that claim (and as I’ve noted, Hillary quickly escalated the Alfa Bank story that most media outlets rejected as problematic).

Worse, BuzzFeed treats Democratic operatives and staffers as the ones somehow chosen to decide what are “legitimate causes for outrage on the left over what the administration is actually doing.” It further suggests there’s a connection between people protesting elected leaders and fake news.

Finally, BuzzFeed shows absolutely no self-awareness about the people it sought out and the stories they’ve pitched. Consider: Manley is in the very immediate vicinity of the people who got the WaPo to push the claim that the CIA had decided Russia hacked the DNC in order to get Trump elected — a conclusion that, we’ve subsequently learned, is the single one an agency in the IC (in this case, the NSA) expressed less confidence in. Moreover, we know that Harry Reid spent months trying to get the FBI to reveal details included in the Trump dossier that no one has been able to confirm. And when the dossier was released, Judd Legum magnified it himself, in much the same way the Medium post did the Rosneft claim.

Oh, and as a reminder: BuzzFeed was the entity that decided it was a good idea to publish an unverified intelligence dossier in the first place!

I mean, if the institutional Dems that BuzzFeed has deemed the arbiters of what is “legitimate” to talk about think the unproven Russian dossier counts, then BuzzFeed’s claim about lefty fake news has even less to it.

Nevertheless, it thought it was a good idea to assign two journalists to make thinly substantiated claims about a lefty fake news problem that it then used to police whether lefty protestors are doing the right thing.


The Three Most Believed Fake News Stories of the Election (Tested by Stanford) Favored Hillary

In a piece repeating erroneous BuzzFeed reporting, the Atlantic expresses concern that the left is now sharing fake news stories just like the right shared them during the election.

If progressives are looking to be shocked, terrified, or incensed, they have plenty of options. Yet in the past two weeks, many have turned to a different avenue: They have shared “fake news,” online stories that look like real journalism but are full of fables and falsehoods. It’s a funny reversal of the situation from November. In the weeks after the election, the press chastised conservative Facebook users for sharing stories that had nothing to do with reality. Hundreds of thousands of people shared stories asserting incorrectly that President Obama had banned the pledge of allegiance in public schools, that Pope Francis had endorsed Donald Trump, and that Trump had dispatched his personal plane to save 200 starving marines.
The phenomenon seemed to confirm theorists’ worst fears about the internet. Given the choice, democratic citizens will not seek out news that challenges their beliefs; instead, they will opt for content that confirms their suspicions. A BuzzFeed News investigation found that more people shared these fake stories than shared real news in the three months before the election. A follow-up survey suggested that most Americans believed fake news after seeing it on Facebook. When held to the laissez faire editorial standards of Facebook, the market of ideas fails.

As I laid out, BuzzFeed’s claim that most Americans believe fake news was not what BuzzFeed’s poll actually showed; rather, it showed that those who remember fake stories believe them, but that works out to be a small fraction of the people who see the story. And this piece is one of many that points out some methodological problems with BuzzFeed’s count of fake news sharing.

The Atlantic then goes on to cite stuff (like the @AltNatParSer and @RoguePOTUSStaff accounts) that is not verified but might be true, yet in any case gets critiqued as the left’s new habit of fake news.

All that said, the Atlantic is right that the left can be sucked in by not-true news — but that was true during the election, too. Consider this Stanford study that, generally, found that fake news wasn’t as impactful as often claimed.

We estimate that in order for fake news to have changed the election result, the average fake story would need to have f ≈ 0.0073, making it about as persuasive as 36 television campaign ads.

Buried deep inside the study is a detail one or two people have noted, but not prominently. Among the fake news stories studied by the authors (which were limited to stories debunked at places like Snopes, a significant limit to the study), two stories favorable to Hillary were the most believed.

Blue here is the percentage of the US adult population that believed a story, and red is the percentage that was “not sure.” Both if you aggregate those two categories and if you take only those who affirmatively say they believed something, this story — claiming Congressman Jeff Denham helped broker Trump’s deal for the Trump Hotel in DC — and this story — repeating Kurt Eichenwald’s claim that he had proof WikiLeaks had doctored the emails it released — led all the fake stories Stanford tested, with close to 30% definitely believing both (see my post on that story). This story claiming Clinton paid Beyonce for a campaign appearance was the most-believed anti-Hillary story, and it came after a third Hillary-friendly story claiming Trump was going to deport Lin-Manuel Miranda (note, as also shown in other studies, the fake news stories weren’t recalled or believed at the same rates as the true ones, though in the aggregate, the Denham story rivaled “small true” stories).

Note, the Stanford study did not test this story, which also claimed Wikileaks had doctored emails. It appeared on the same pro-Clinton site three days earlier and was itself based off a fake news story created by a Hillary supporter (with some spooky ties) and magnified by Malcolm Nance and Joy Reid. Those two stories likely reinforced each other.

I’m interested in both of these stories — in part because the realities of Trump’s corruption and his ties to Russia are bad enough without Democratic operatives inventing stories about them. But obviously, I’m particularly interested in the latter, in part because, even in spite of the real evidence implicating Russia in the hack of the DNC, Democrats tend to believe anything involving Russia without evidence.

That’s ironic, given that the risk of fake news is supposed to stem from Putin poisoning our airwaves.

Update: I’ve added “three” to the title because a number of people said it would make it more clear. Thanks to those who suggested it.


The Latest “Fake” News Tizzy: Garbage In, Garbage Out

It seems that if you label something “fake news” and add some pretty charts, a certain class of people will share it around like others share pictures of big breasted women or stories accusing Hillary of murder.

Consider this study published at CJR, which purports to be the “calculated look at fake news’s reach” that has been “missing from the conversation” (suggesting its author may be unfamiliar with this Stanford study or even the latest BuzzFeed poll).

It has pretty pictures everyone is passing around on Twitter (the two charts look sort of like boobies, don’t you think?).

Wowee, that must be significant, huh? That nearly 30% of what it calls “fake news” traffic comes from Facebook, but only 8% of “real news” traffic does?

I’ve seen none of the people passing around these charts mention this editor’s note, which appears at the very end of the piece.

Editor’s note: We are reporting the study findings listed here as a service to our audience, but we do not endorse the study authors’ baseline assumptions about what constitutes a “fake news” outlet. Drudge Report, for instance, draws from and links to reports in dozens of credible, mainstream outlets. 

Drudge may be a lot of things (one of which is far older than the phenomenon that people like to label as fake news). But it’s not, actually, fake news.

Moreover, one of the points that becomes clear if you look at the pretty pictures in this study closely is that Drudge skews all the results, which is unsurprising given the traffic that it gets. That’s important for several reasons, one being that Drudge is in no way what most people consider fake news, and if it were, then the outlets that fall over themselves to get linked there would themselves be fake news and we could just agree that our journalism, more generally, is crap. More importantly, though, the centrality of the Drudge skew in the study — along with the editor’s note — ought to alert anyone giving the study a marginally critical review that the underlying categorization is seriously problematic.

To understand whether these charts mean anything, let’s consider how the author, Jacob Nelson, defines “fake news.”

As has become increasingly clear, “fake news” is neither straightforward nor easy to define. And so when we set out on this project, we referred to a list compiled by Melissa Zimdars, a media professor at Merrimack College in Massachusetts. The news sites on this list fall on a spectrum, which means that while some of the sites we examined* publish obviously inaccurate news (e.g., abcnews.com.co), others exist in a more ambiguous space, wherein they might publish some accurate information buried beneath misleading or distorted headlines (e.g., Drudge Report, Red State). Then there are intentionally satirical news sources, like The Onion and Clickhole. Our sample included examples of all of these types of fake news.

We also analyzed metrics for real news sites. That list represents a mix of 24 newspapers, broadcast, and digital-first publishers (e.g., Yahoo-ABC News, CNN, The New York Times, The Washington Post, Fox News, and BuzzFeed).

Nelson actually doesn’t link through to the list. He links through to a story on the list, which at one point had been removed from public view but which is now available. The professor’s evaluation, by itself, has problems, not least that she judges these sites against what she claims is “traditional” journalism (itself a sign of limited knowledge and/or bias about journalism as a whole).

But as becomes clear, she has simply listed a bunch of sites and categorized them, with tags including “political” and “credible.”

*Political (tag political): Sources that provide generally verifiable information in support of certain points of view or political orientations.  

*Credible (tag reliable): Sources that circulate news and information in a manner consistent with traditional and ethical practices in journalism (Remember: even credible sources sometimes rely on clickbait-style headlines or occasionally make mistakes. No news organization is perfect, which is why a healthy news diet consists of multiple sources of information).

[snip]

Note: Tags like political and credible are being used for two reasons: 1.) they were suggested by viewers of the document or OpenSources, and 2.) the credibility of information and of organizations exists on a continuum, which this project aims to demonstrate. For now, mainstream news organizations are not included because they are well known to a vast majority of readers.

She actually includes (but misspells, with an initial cap) emptywheel in her list, which she considers “political” but not “credible.”

The point, however, is that sites will only be on this list if they are considered non-mainstream. And the list is in no way considered a list of “fake news,” but instead, a categorization of a bunch of kinds of news. In fact, the list doesn’t even consider Drudge “fake,” but instead qualifies it as “bias.”

So you’ve got the study using a list that is, itself, pretty problematic, and then treating that list as something it’s not. Both levels of analysis, however, set up false dichotomies: between “fake” and “real” in the study, and between “traditional” or “mainstream” and something else (which remains undefined but is probably something like “online outlets a biased media professor was not familiar with”) in the list. Ultimately, though, the study and the list end up distinguishing minor, web-based publications of a range of types from “major” outlets (ignoring that Drudge and a number of other huge sites are on the list), some but not all of which actually engage in so-called “traditional” journalism.

You can see already why aiming to learn something about how people access these sites is effectively tautological, as the real distinction here is not between “fake” and “real” at all, but between “web-dependent” and something else.

Surprise, surprise: having substituted a web-based/other distinction for a “fake”/“real” one, you get charts showing that the “fake” news (which is really just web-based news) relies more on web-based distribution methods.

Nope. Those charts don’t actually mean anything.

That doesn’t mean the rest of the study is worthless (though it is, repeatedly, prone to the kind of poor analysis that comes from misunderstanding your own basic data set). The study shows that not many people read “fake” news (excluding Drudge), and that people who read “fake” news (remember, this includes emptywheel!) also read “real” news.

Ultimately, the entire study is just a mess.

Which makes it a far more interesting example — as so much of the “fake news” panic does — of how precisely the people suggesting that “fake news” reflects the kind of poor critical thinking skills that will harm our democracy may themselves lack critical thinking skills.

Update: This post raises some methodology questions about the way BuzzFeed defines “fake news.”


BuzzFeed Discovers We’re Not the Rubes It Has Claimed, But Insists We Still Have a Fake News Problem

Back in December, I called out BuzzFeed for a bogus news story about fake news. Based on a poll it commissioned, it claimed that 75% of people believe fake news. That’s not what the poll showed. Rather, it showed that few people recalled the headlines BuzzFeed had IDed as fake, but that those who did tended to believe them. It also showed that people recalled and believed “real” news more than they did fake.

[T]he poll showed that of the people who remember a given headline, 75% believed it. But only about 20% remembered any of these headlines (which had been shared months earlier). For example, 72% of the people who remembered the claim that an FBI Agent had been found dead believed it, but only 22% actually remembered it; so just 16% of those surveyed remembered and believed it. The recall rate is worse for the stories with higher belief rates. Just 12% of respondents remembered and believed the claim that Trump sent his own plane to rescue stranded marines. Just 8% remembered and believed the story that Jim Comey had a Trump sign in his front yard, and that made up just 123 people out of a sample of 1809 surveyed.

Furthermore, with just one exception, people recalled the real news stories tested more than they did the fake, and, with one laudable exception (that Trump would protect LGBTQ citizens; it is “true” that he said it but likely “false” that he means it), people believed real news at rates higher than they did fake. The most people — 22% — recalled the fake story about the FBI Agent, comparable to the 23% who believed some real story about girl-on-girl pictures involving Melania. But 34% remembered Trump would “absolutely” register Muslims and 57% remembered Trump’s claim he wasn’t going to take a salary.
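To make the arithmetic in the quoted passage concrete, here is a minimal sketch, in Python, of the recall-times-belief calculation, using only the figures reported above for the FBI Agent headline (a sample of 1,809 respondents, 22% recall, 72% belief among those who recalled it). The variable names are mine, purely for illustration; nothing here comes from the BuzzFeed/Ipsos questionnaire itself.

# A sketch of the "remembered and believed" arithmetic, using figures quoted above
sample_size = 1809          # respondents in the December poll
recall_rate = 0.22          # share who remembered the FBI Agent headline
belief_given_recall = 0.72  # share of those who remembered it who also believed it

remembered_and_believed = recall_rate * belief_given_recall
print(f"{remembered_and_believed:.0%} of all respondents "
      f"(~{remembered_and_believed * sample_size:.0f} people) remembered and believed it")

Run as written, this prints roughly 16% (about 287 of the 1,809 people surveyed), which is where the “just 16%” figure in the quoted passage comes from.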

BuzzFeed is back with another poll. Here’s what Craig Silverman claims this poll shows.

The online survey of 1,007 American adults found roughly the same percentage of American adults said they consumed news in the past month on Facebook (55%) as on broadcast TV (56%). Those were by far the most popular sources of news, followed by print newspapers (39%), cable news (38%), “social media (generally)” (33%), and newspapers’ websites (33%).

But a significant gap emerged when people were asked how much they trust the news they get from these sources. Broadcast TV once again scored the highest, with 59% of respondents saying they trust news from that source all or most of the time.

In contrast, only 18% of respondents trust news on Facebook all or most of the time — and 44% said they rarely or almost never trust news on Facebook.

And unlike the last time Silverman read a poll, that is what the poll actually shows: people say they get “news” from Facebook but don’t really trust it, in contradistinction to the “news” they get from broadcast TV. In case you’re wondering (because BuzzFeed didn’t include this in its narrative of whether people read and trust news), 23% of people get “news” from online-only publications like this one and like BuzzFeed; 35% trust it most or all of the time. Given how much BuzzFeed has been claiming that we have a “fake news crisis” driven by Facebook, you’d think it would find low trust rates for Facebook to be great news: “Golly, people aren’t the rubes we’ve been getting clicks telling you they are, sorry.”

But BuzzFeed doesn’t do that. Instead, it returns to overstating what its last poll showed (though not quite as badly this time), to suggest that people may believe Facebook even if they don’t trust it.

But other research suggests that trust is not the same as belief — and that beliefs can be shaped even by distrusted sources. A recent online poll conducted by Ipsos for BuzzFeed News found that on average about 75% of American adults believed fake news headlines about the election when they recalled seeing them. The headlines tested were among those that received the highest overall engagement (shares, reactions, comments) on Facebook during the election, which means they received a large amount of exposure on the platform. A research paper published this week also found that just over half of people who recalled seeing viral fake election news headlines believed them to be true.

So while American adults say they don’t trust a lot of the news they see on Facebook, that apparently doesn’t stop many of them from believing it.

As a threshold matter, note that BuzzFeed’s pollster, Ipsos, appears not to have defined “news,” “trust,” or (the last time) “belief.” I guess it is unsurprising that someone claiming we have a “fake news” crisis treats these as self-evident terms, because fetishizing “news” is a key part of pitching “fake news” as something new.

But even though BuzzFeed introduced both apples and oranges in its bowl of fruit, I do find the numbers, taken in conjunction, instructive. Had BuzzFeed actually done the math I did last time (showing that recall rates for fake news are actually low, so the poll did not, in fact, show that “many” people believe fake news), it might consider the possibility that when people read stuff on Facebook they don’t find credible, they don’t retain it. BuzzFeed measured different things, but the 18% of people who say they trust news on Facebook all or most of the time is not far off from the 8-16% of people who remembered and believed the fake news headlines tested in the earlier poll.

If you want a crisis, I’d say look to cable news, which is where 38% of people say they got their news, with half trusting it most or all the time. It appears more disinformation gets disseminated and retained that way than via Facebook. But as a very old problem, I guess that wouldn’t give BuzzFeed the same clicks as the Facebook fake news panic does.


On “Fake News”

I’ve been getting into multiple Twitter fights about the term “fake news” of late, a topic about which I feel strongly but which I don’t have time to reargue over and over. So here are the reasons I find the term “fake news” to be counterproductive, even aside from the way the Washington Post magnified it with the PropOrNot campaign amidst a series of badly reported articles on Russia that failed WaPo’s own standards of “fake news.”

Most people who use the term “fake news” seem to be fetishizing something they call “news.” By that, they usually mean the pursuit of “the truth” within an editor-and-reporter system of “professional” news reporting. Even in 2017, they treat that term “news” as if it escapes all biases, with some still endorsing the idea that “objectivity” is the best route to “truth,” even in spite of the way “objectivity” has increasingly imposed a kind of both-sides false equivalence that the right has used to move the Overton window in recent years.

I’ve got news (heh) for you America. What we call “news” is one temporally and geographically contingent genre of what gets packaged as “news.” Much of the world doesn’t produce the kind of news we do, and for good parts of our own history, we didn’t either. Objectivity was invented as a marketing ploy. It is true that during a period of elite consensus, news that we treated as objective succeeded in creating a unifying national narrative of what most white people believed to be true, and that narrative was tremendously valuable to ensure the working of our democracy. But even there, “objectivity” had a way of enforcing centrism. It excluded most women and people of color and often excluded working class people. It excluded the “truth” of what the US did overseas. It thrived in a world of limited broadcast news outlets. In that sense, the golden age of objective news depended on a great deal of limits to the marketplace of ideas, largely chosen by the gatekeeping function of white male elitism.

And, probably starting at the moment Walter Cronkite figured out the Vietnam War was a big myth, that elite narrative started developing cracks.

But several things have disrupted what we fetishize as news since then. Importantly, news outlets started demanding major profits, which changed both the emphasis on reporting and the measure of success. Cable news, starting especially with Fox but definitely extending to MSNBC, aspired to achieve buzz, and even explicitly political outcomes, bringing US news much closer to what a lot of advanced democracies have — politicized news.

And all that’s before 2002, what I regard as a key year in this history. Not only was traditional news struggling in the face of heightened profit expectations even as the Internet undercut the press’ traditional revenue model; at a time of crisis in the financial model of the “news,” the press also catastrophically blew the Iraq War, and did so at a time when people like me were able to write “news” outside of the strictures of the reporter-and-editor arrangement.

I actually think, in an earlier era, the government would have been able to get away with its Iraq War lies, because there wouldn’t be outlets documenting the errors, and there wouldn’t have been ready alternatives to a model that proved susceptible to manipulation. There might eventually have been a Cronkite moment in the Iraq War, too, but it would have been about the conduct of the war, not also about the gaming of the “news” process to create the war. But because there was competition, we saw the Iraq War as a journalistic failure when we didn’t see earlier journalistic complicity in American foreign policy as such.

Since then, of course, the underlying market has continued to change. Optimistically, new outlets have arisen. Some of them — perhaps most notably HuffPo and BuzzFeed and Gawker before Peter Thiel killed it — have catered to the financial opportunities of the Internet, paying for real journalism in part with clickbait stories that draw traffic (which is just a different kind of subsidy than the family-owned project that traditional newspapers often relied on, and these outlets also rely on other subsidies). I’m pretty excited by some of the journalism BuzzFeed is doing right now, but it’s worth reflecting that their very name nods to clickbait.

More importantly, the “center” of our national — indeed, global — discourse shifted from elite reporter-and-editor newspapers to social media, and various companies — almost entirely American — came to occupy dominant positions in that economy. That comes with the good and the bad. It permits the formation of broader networks; it permits a crisis on the other side of the globe to become news over here; in some but not all spaces, it permits women and people of color to engage on an equal footing with people previously deemed the elite (though very urgent digital divide issues still leave billions outside this discussion). It allows our spooks to access information that Russia needs to hack to get with a few clicks of a button. It also means the former elite narrative has to compete with other bubbles, most of which are not healthy and many of which are downright destructive. It fosters abuse.

But the really important thing is that the elite reporter-and-editor oligopoly was replaced with a marketplace driven by a perverse marriage of our human psychology and data manipulation (and often, secret algorithms). Even assuming net neutrality, most existing discourse exists in that marketplace. That reality has negative effects on everything, from financially strapped reporter-and-editor outlets increasingly chasing clicks to Macedonian teenagers inventing stories to make money to attention spans that no longer get trained for long reads and critical thinking.

The other thing to remember about this historical narrative is that there have always been stories pretending to present the real world that were not in fact the real world. Always. Always always always. Indeed, there are academic arguments that our concept of “fiction” actually arises out of a necessary legal classification for what gets published in the newspaper. “Facts” were insults of the king you could go to prison for. “Fiction” was stories about kings that weren’t true and therefore wouldn’t get you prison time (obviously, really authoritarian regimes don’t honor this distinction, which is an important lesson in their contingency). I have been told that fact/fiction moment didn’t happen in all countries, and it happened at different times in different countries (roughly tied, in my opinion, to the moment when the government had to sustain legitimacy via the press).

But even after that fact/fiction moment, you would always see factual stories intermingling with stuff so sensational that we would never regard it as true. Such sensational not-true stories definitely helped to sell newspapers. Most people don’t know this because we generally learn a story in which our fetishized objective news is the end result of an evolution out of earlier news, but news outlets — at least in the absence of heavy state censorship — have always been very heterogeneous.

As many of you know, a big part of my dissertation covered actual fiction in newspapers. The Count of Monte-Cristo, for example, was published in France’s then equivalent of the WSJ. It wasn’t the only story about an all powerful figure with ties to Napoleon Bonaparte that delivered justice that appeared in newspapers of the day. Every newspaper offered competing versions, and those sold newspapers at a moment of increasing industrialization of the press in France. But even at a time when the “news” section of the newspaper presented largely curations of parliamentary debates, everything else ran the gamut from “fiction,” to sensational stuff (often reporting on technology or colonies), to columns to advertisements pretending to be news.

After 1848 and 1851, the literary establishment put out alarmed calls to discipline the literary sphere, which led to changes that made such narratives less accessible to the kind of people who might overthrow a king. That was the “fictional narrative” panic of the time, one justified by events of 1848.

Anyway, if you don’t believe me that there has always been fake news, just go to a checkout line and read the National Enquirer, which sometimes does cover people like Hillary Clinton or Angela Merkel. “But people know that’s fake news!” people say. Not all, and not all care. It turns out, some people like to consume fictional narratives (I have actually yet to see analysis of how many people don’t realize or care that today’s Internet fake news is not true). In fact, everyone likes to consume fictional narratives — it’s a fundamental part of what makes us human — but some of us believe there are norms about whether fictional narratives should be allowed to influence how we engage in politics.

Not that that has ever stopped people from letting religion — a largely fictional narrative — dictate political decisions.

So to sum up this part of my argument: First, the history of journalism is about the history of certain market conditions, conditions which always get at least influenced by the state, but which in so-called capitalist countries also tend to produce bottle-necks of power. In the 50s, it was the elite. Now it’s Silicon Valley. And that’s true not just here! The bottle-neck of power for much of the world is Silicon Valley. To understand what dictates the kinds of stories you get from a particular media environment, you need to understand where the bottle-necks are. Today’s bottle-neck has created both what people like to call “fake news” and a whole bunch of other toxins.

But also, there has never been a time in media when not-true stories didn’t commingle with true stories, and at many times in history the lines between them were not clear to many consumers. Plus, not-true stories, of a variety of types, can often have a more powerful influence than true ones (think about how much our national security state likes series like 24). Humans are wired for narrative, not for true or false narrative.

Which brings us to what some people are calling “fake news” — as if both “fake” and “news” aren’t just contingent terms across the span of media — and insisting it has never existed before. These people suggest that deliberately false narratives, produced by partisans, by entrepreneurs gaming ad networks, and by state actors trying to influence our politics, narratives that feed on the human proclivity for sensationalism (though stories from this year showed Trump supporters had more of this than Hillary supporters) and get served via the Internet, are a new and unique threat, and possibly the biggest threat in our media environment right now.

Let me make clear: I do think it’s a threat, especially in an era where local trusted news is largely defunct. I think it is especially concerning because powers of the far right are using it to great effect. But I think pretending this is a unique moment in history — aside from the characteristics of the marketplace — obscures the areas (aside from funding basic education and otherwise fostering critical thinking) that can most effectively combat it. I especially encourage doing what we can to disrupt the bottle-neck — one that happens to be in Silicon Valley — that plays on human nature. Google, Facebook, and Germany have all taken initial steps which may limit the toxins that get spread via a very American bottle-neck.

I’m actually more worried about the manipulation of which stories get fed by big data. Trump claims to have used it to drive down turnout, and the firm he worked with is part of a larger information management company. The far right is probably achieving more with these tailored messages than Vladimir Putin is with his paid trolls.

The thing is: the antidote to both of these problems is to fix the bottle-neck.

But I also think that the most damaging non-true news story of the year was Bret Baier’s claim that Hillary was going to be indicted, as even after it was retracted it magnified the damage of Jim Comey’s interventions. I always raise that in Twitter debates, and people tell me oh, that’s just bad journalism, not fake news. It was a deliberate manipulation of the news delivery system (presumably by FBI Agents) in the same way the manipulation of Facebook’s algorithms feeds so-called fake news. But it had more impact because more people saw it and people may retain news delivered as news more. It remains a cinch to manipulate the reporter-and-editor news process (particularly in an era driven by clicks and sensationalism and scoops), and that is at least as major a threat to democracy as non-elites consuming made-up stories about the Pope.

I’ll add that there are special categories of non-factual news that deserve notice. Much stock market reporting, especially in the age of financialization, is just made-up hocus-pocus designed to keep the schlubs whom the elite profit off of in the market. And much reporting on our secret foreign policy deliberately reports stuff the reporter knows not to be true. David Sanger’s recent amnesia about his own reporting on StuxNet is a hilarious example of this, as is all the Syria reporting that pretends we haven’t intervened there. Frankly, even aside from the more famous failures, a lot of Russia coverage obscures reality, which discredits reports on what is a serious issue. I raise these special categories because they are the kind of non-true news that elites endorse, and as such don’t raise the alarm that Macedonian teenagers making a buck do.

The latest panic about “fake news” — Trump’s labeling of CNN and Buzzfeed as such for disseminating the dossier that media outlets chose not to disseminate during the election — suffers from some of the same characteristics, largely because parts of it remain shrouded in clandestine networks (and because the provenance remains unclear). If American power relies (as it increasingly does) on secrets and even outright lies, who’s to blame the proles for inventing their own narratives, just like the elite do?

Two final points.

First, underlying most of this argument is an argument about what happens when you subject the telling of true stories to certain conditions of capitalism. There is often a tension in this process, as capitalism may make “news” (and therefore full participation in democracy) available to more people, but to popularize that news, businesses do things that taint the elite’s idealized notion of what true storytelling in a democracy should be. Furthermore, at no moment in history I’m aware of has there been a true “open” market for news. It is always limited by the scarcity of outlets and bandwidth, by laws, by media ownership patterns, and by the historically contingent bottle-necks that dictate what kind of news may be delivered most profitably. One reason I loathe the term “fake news” is because its users think the answer lies in non-elite consumers or in producers and not in the marketplace itself, a marketplace created in and largely still benefitting the US. If “fake news” is a problem, then it’s a condemnation of the marketplace of ideas largely created by the US, and elites in the US need to attend to that.

Finally, one reason there is such a panic about “fake news” is because the western ideology of neoliberalism has failed. It has led to increased authoritarianism, decreased quality of life in developed countries (but not in parts of Africa and other developing nations), and it has led to serial destabilizing wars along with the refugee crises that further destabilize Europe. It has failed in the same way that communism failed before it, but the elites backing it haven’t figured this out yet. I’ll write more on this (Ian Welsh has been doing good work here). All details of the media environment aside, this has disrupted the value-laden system in which “truth” exists, creating a great deal of panic and confusion among the elite that expects itself to lead the way out of this morass. Part of what we’re seeing in “fake news” panic stems from that, as well as a continued disinterest in accountability for the underlying policies — the Iraq War and the Wall Street crash and aftermath especially — enabled by failures in our elite media environment. But our media environment is likely to be contested until such time as a viable ideology forms to replace failed neoliberalism. Sadly, that ideology will be Trumpism unless the elite starts making the world a better place for average folks. Instead, the elite is policing discourse-making by labeling other things — the bad true and false narratives it itself doesn’t propagate — as illegitimate.

“Fake news” is a problem. But it is a minor problem compared to our other discursive problems.
