Look Closer to Home: Russian Propaganda Depends on the American Structure of Social Media

The State Department’s Undersecretary for Public Diplomacy, Richard Stengel, wanted to talk about his efforts to counter Russian propaganda. So he called up David Ignatius, long a key cut-out the spook world uses to air its propaganda. Here’s how the column that resulted starts:

“In a global information war, how does the truth win?”

The very idea that the truth won’t be triumphant would, until recently, have been heresy to Stengel, a former managing editor of Time magazine. But in the nearly three years since he joined the State Department, Stengel has seen the rise of what he calls a “post-truth” world, where the facts are sometimes overwhelmed by propaganda from Russia and the Islamic State.

“We like to think that truth has to battle itself out in the marketplace of ideas. Well, it may be losing in that marketplace today,” Stengel warned in an interview. “Simply having fact-based messaging is not sufficient to win the information war.”

It troubles me that the former managing editor of Time either believes that the “post-truth” world just started in the last three years or that he never noticed it while at Time. I suppose that could explain a lot about the failures of both our “public diplomacy” efforts and traditional media.

Note that Stengel sees the propaganda war as a battle in the “marketplace of ideas.”

It’s not until 10 paragraphs later — after Stengel and Ignatius air the opinion that “social media give[s] everyone the opportunity to construct their own narrative of reality” and a whole bunch of inflamed claims about Russian propaganda — that Ignatius turns to the arbiters of that marketplace: the almost entirely US-based companies that provide the infrastructure of this “marketplace of ideas.” Even there, Ignatius doesn’t explicitly consider what it means that these are American companies.

The best hope may be the global companies that have created the social-media platforms. “They see this information war as an existential threat,” says Stengel. The tech companies have made a start: He says Twitter has removed more than 400,000 accounts, and YouTube daily deletes extremist videos.

The real challenge for global tech giants is to restore the currency of truth. Perhaps “machine learning” can identify falsehoods and expose every argument that uses them. Perhaps someday, a human-machine process will create what Stengel describes as a “global ombudsman for information.”

Watch this progression very closely: Stengel claims social media companies see this war as an existential threat. He then points to efforts — demanded by the US government under threat of legislation, though that goes unmentioned — by social media companies to remove “extremist videos,” with extremist videos generally defined as Islamic terrorist videos. Finally, Stengel puts his hope in a machine-learning “global ombudsman for information” to solve this problem.

Stengel’s description of the problem reflects several misunderstandings.

First, the social media companies don’t see this as an existential threat (though they may see government regulation as such). Even after Mark Zuckerberg got pressured into taking some steps to stem the fake news that had been key in this election — spread by Russia, right-wing political parties, Macedonian teenagers, and US-based satirists — he sure didn’t sound like he saw any existential threat.

After the election, many people are asking whether fake news contributed to the result, and what our responsibility is to prevent fake news from spreading. These are very important questions and I care deeply about getting them right. I want to do my best to explain what we know here.

Of all the content on Facebook, more than 99% of what people see is authentic. Only a very small amount is fake news and hoaxes. The hoaxes that do exist are not limited to one partisan view, or even to politics. Overall, this makes it extremely unlikely hoaxes changed the outcome of this election in one direction or the other.

[snip]

This has been a historic election and it has been very painful for many people. Still, I think it’s important to try to understand the perspective of people on the other side. In my experience, people are good, and even if you may not feel that way today, believing in people leads to better results over the long term.

And that’s before you consider reports that Facebook delayed efforts to deal with this problem for fear of offending conservatives, or the way Zuckerberg’s posts seem to have been disappearing and reappearing like a magician’s bunny.

Stengel then turns to efforts to target two subsets of problematic content on social media: terrorism videos (defined in a way that did little or nothing to combat other kinds of hate speech) and fake news.

The problem with this whack-a-mole approach to social media toxins is that it ignores the underlying wiring, both of social media and of the people using it. The problem seems to have more to do with how social media magnifies normal human characteristics like emotion and tribalism.

[T]wo factors—the way that anger can spread over Facebook’s social networks, and how those networks can make individuals’ political identity more central to who they are—likely explain Facebook users’ inaccurate beliefs more effectively than the so-called filter bubble.

If this is true, then we have a serious challenge ahead of us. Facebook will likely be convinced to change its filtering algorithm to prioritize more accurate information. Google has already undertaken a similar endeavor. And recent reports suggest that Facebook may be taking the problem more seriously than Zuckerberg’s comments suggest.

But this does nothing to address the underlying forces that propagate and reinforce false information: emotions and the people in your social networks. Nor is it obvious that these characteristics of Facebook can or should be “corrected.” A social network devoid of emotion seems like a contradiction, and policing who individuals interact with is not something that our society should embrace.

And if that’s right — which would explain why fake or inflammatory news would be uniquely profitable for people who have no ideological stake in the outcome — then the wiring of social media needs to be changed at a far more basic level to neutralize the toxins (or social media consumers have to become far more savvy, despite a vacuum of training to help them do so).
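To make the “wiring” point concrete, here is a minimal sketch (purely hypothetical, with invented field names and weights rather than any platform’s real model) of why removing individual posts doesn’t change what spreads: so long as the ranking objective rewards raw engagement, inflammatory content wins, and changing what wins means changing the objective itself.

```python
# Toy illustration, not any platform's actual ranking model: all field
# names and weights here are invented for the sake of the argument.

def engagement_score(post):
    # Pure engagement objective: anger-driven content scores well,
    # because clicks, shares, and comments are what it produces.
    return post["clicks"] + 2 * post["shares"] + 3 * post["comments"]

def rank_feed(posts, credibility_weight=0.0):
    # credibility_weight=0.0 reproduces the pure engagement ranker;
    # raising it rewires the objective itself, rather than deleting
    # individual posts after the fact.
    def score(post):
        penalty = credibility_weight * (1 - post["source_credibility"])
        return engagement_score(post) * (1 - penalty)
    return sorted(posts, key=score, reverse=True)

posts = [
    {"name": "inflammatory hoax", "clicks": 900, "shares": 400,
     "comments": 300, "source_credibility": 0.1},
    {"name": "sober report", "clicks": 500, "shares": 100,
     "comments": 80, "source_credibility": 0.9},
]

print([p["name"] for p in rank_feed(posts)])       # hoax ranks first
print([p["name"] for p in rank_feed(posts, 0.8)])  # sober report ranks first
```

Under the pure engagement objective the hoax wins every time; only changing the scoring function itself, not deleting this or that post, alters the outcome.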

But let’s take a step back to the way Ignatius and Stengel define this. The entities struggling with social media include more than the US, with its efforts to combat the Islamic terrorist and Russian propagandist content it hates. They include authoritarian regimes that want to police content (America’s whack-a-mole effort to combat content it hates will only serve to legitimize those efforts). They also include European countries, which hate Russian propaganda, but which also hate social media companies’ approach to filtering and data collection more generally.

European bureaucrats and activists, to just give one example, think social media’s refusal to stop hate speech is irresponsible. They see hate speech as a toxin just as much as Islamic terrorism or Russian propaganda. But the US, which is uniquely situated to pressure the US-based social media companies facilitating the spread of hate speech around the world, doesn’t much give a damn.

European bureaucrats and activists also think social media companies collect far too much information on their users; that information is one of the things that helps social media better serve users’ tribal instincts.

European bureaucrats also think American tech companies serve as a dangerous gateway monopolizing access to information. The dominance of Google’s ad network has been key to monetizing fake and other inflammatory news (though Google did start, post-election, to crack down on fake news sites advertising through its network).

The point is, if we’re going to talk about the toxins that poison the world via social media, we ought to consider the ways in which social media — enabled by characteristics of America’s regulatory regime — is structured to deliver toxins.

It may well be that the problem behind America’s failures to compete in the “marketplace of ideas” has everything to do with how America has fostered a certain kind of marketplace of ideas.

The anti-Russian crusade keeps warning that Russian propaganda might undermine our own democracy. But there’s a lot of reason to believe red-blooded American social media — the specific characteristics of the global marketplace of ideas created in Silicon Valley — is what has actually done that.

Update: In the UK, Labour’s Shadow Culture Secretary, Tom Watson, is starting a well-constructed inquiry into fake news. One question he asks concerns the role of Twitter and Facebook.

Update: Here’s a summary of fake news around the world, some of it quite serious, though without a systematic look at Facebook’s role in it.

WaPo Cleans Up a False Michael McFaul Allegation about RT

As I noted in my last post, I’m going to do some posts on the whackjob article WaPo published over the weekend, magnifying the assertions of some researchers (one group of which remains anonymous) alleging that outlets like Naked Capitalism are really Russian propaganda outlets.

In this post, I want to look at a correction the WaPo made after the story had been up for a day. The original story featured this claim from former US Ambassador to Russia Michael McFaul.

A former U.S. ambassador to Russia, Michael A. McFaul, said he was struck by the overt support that RT and Sputnik expressed for Trump during the campaign, even using the #CrookedHillary hashtag pushed by the candidate.

In the interim, RT appears to have contacted WaPo, refuting the claims in the article (many of the other outlets claimed to be Russian propaganda outlets have yet to be contacted by the WaPo). A paragraph has been added, incorporating a statement from RT’s head of communications.

Now the McFaul claim looks like this:

A former U.S. ambassador to Russia, Michael A. McFaul, said he was struck by the overt support that Sputnik expressed for Trump during the campaign, even using the #CrookedHillary hashtag pushed by the candidate.

And the article includes this correction:

Correction: A previously published version of this story incorrectly stated that Russian information service RT had used the “#CrookedHillary” hastag [sic] pushed by then-Republican candidate Donald Trump. In fact, while another Russian information service Sputnik did use this hashtag, RT did not.

The article itself didn’t state that. McFaul did. The article simply paraphrased his claim.

Note, it appears people responding to RT have used the hashtag, which might be easy to confuse with RT itself using it if you didn’t look too closely. But then, so do people responding to WaPo tweets.
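The distinction is easy to state in code. Here is a purely illustrative sketch (the tweet records and field names are invented stand-ins, not the actual Twitter API schema) of the difference between an account using a hashtag and its repliers using it:

```python
# Illustrative only: invented records, simplified fields. The point is
# that a hashtag in replies *to* an account is not the account using it.

tweets = [
    {"author": "RT_com", "in_reply_to": None,
     "text": "Election coverage tonight"},
    {"author": "some_user", "in_reply_to": "RT_com",
     "text": "#CrookedHillary strikes again"},
    {"author": "another_user", "in_reply_to": "washingtonpost",
     "text": "Sure, #CrookedHillary"},
]

def account_used_hashtag(account, hashtag):
    # Counts only tweets the account itself authored.
    return any(t["author"] == account and hashtag in t["text"]
               for t in tweets)

def repliers_used_hashtag(account, hashtag):
    # Counts replies directed at the account, which a quick scan of an
    # account's mentions can easily conflate with the account's own tweets.
    return any(t["in_reply_to"] == account and hashtag in t["text"]
               for t in tweets)

print(account_used_hashtag("RT_com", "#CrookedHillary"))            # False
print(repliers_used_hashtag("RT_com", "#CrookedHillary"))           # True
print(repliers_used_hashtag("washingtonpost", "#CrookedHillary"))   # True for WaPo too
```

On this toy data, RT never authored the hashtag, yet a glance at its mentions would show it everywhere — and the same check against WaPo’s mentions comes back positive as well.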

A proper correction would instead say something like this:

A leading expert on Russia, former Ambassador to Russia and current Stanford University Political Science professor Michael McFaul, claimed that both RT and Sputnik have used the #CrookedHillary hashtag. When we fact checked his claim after publication, and after RT refuted it, we found the claim to be false with respect to RT, and have altered his reported claim accordingly.

Of course, that would entail admitting that some of the most celebrated experts on Russia — to say nothing of the ones at PropOrNot hiding behind anonymity — get sloppy with their accusations. WaPo chose not to do that, though, instead suggesting that they, not their chosen expert, had made the error.
