What Would It Take for the Government to Obtain Google’s Counter-Terror Ads Algos?

Some weeks ago, the government went to Silicon Valley to ask for new ways to counter ISIS’ propaganda. We’re now seeing the response to that request, with the report that Google will show positive ads when people search for extremist content.

In a new development, Google said it’s testing ways to counter extremist propaganda with positive messages on YouTube and in Google search results.

Google executive Anthony House told MPs that taking extremist videos down from YouTube isn’t enough, and people searching for that content should be presented with competing narratives:

We should get the bad stuff down, but it’s also extremely important that people are able to find good information, that when people are feeling isolated, that when they go online, they find a community of hope, not a community of harm.

There are two programs being tested by Google to make sure the positive messages are seen by people seeking out extremist content: one to make sure the “good” kind of videos are easily found on YouTube; and another to display positive messages when people search for extremist-related terms.

The second program involves giving grants to nonprofit organizations to use Google AdWords to display competing ads alongside the search results for those extremist-related terms.

If Google wants to do this, that’s fine.

But I’m wondering about the legal standard here. It’s unclear whether Google will only show these “positive” ads (whoever and however that gets defined) when people search for “extremist” content, or whether they’ll show Google ads to those whose email content reflects an interest in “extremist” material.

In both cases, however, Google will use material that counts as “content” to decide to show these ads.

And then what happens? That is, what happens to Google’s records determining that these users should get that content? Do the records, stripped of the content itself, count as a third party record that can be obtained with a subpoena? Or do they count as content?

Congress hasn’t passed legislation requiring tech companies to report their terrorist users. But does having Google use its algorithms to determine who is an extremist give the government a way to find out who Google thinks is an extremist?

Marcy has been blogging full time since 2007. She’s known for her live-blogging of the Scooter Libby trial, her discovery of the number of times Khalid Sheikh Mohammed was waterboarded, and generally for her weedy analysis of document dumps.

Marcy Wheeler is an independent journalist writing about national security and civil liberties. She writes as emptywheel at her eponymous blog, publishes at outlets including the Guardian, Salon, and the Progressive, and appears frequently on television and radio. She is the author of Anatomy of Deceit, a primer on the CIA leak investigation, and liveblogged the Scooter Libby trial.

Marcy has a PhD from the University of Michigan, where she researched the “feuilleton,” a short conversational newspaper form that has proven important in times of heightened censorship. Before and after her time in academics, Marcy provided documentation consulting for corporations in the auto, tech, and energy industries. She lives with her spouse and dog in Grand Rapids, MI.

4 replies
  1. bloopie2 says:

    No fair. No fair at all. My first thought was that the extremist “publishers” should sue Google for trampling on their rights. But then I realized that Google AdWords, an advertising platform that allows a sponsor to “purchase” keywords that trigger the appearance of the sponsor’s advertisement and link when a keyword is searched on Google, has basically been determined to be legal over the last few years’ worth of court decisions. For example, Jack Daniels triggers a user to view Jack Daniels’ advertisements of “Tennessee Fire” when she searches “Fireball” (a competing Sazerac brand). So, I guess that ploy won’t work. Sheesh. What’s an extremist to do? (On the other hand, the Government should be PAYING Google for the placement of this “happy” advertising. Are they? No, really, are they?)

  2. thatvisionthing says:

    But I’m wondering about the legal standard here. It’s unclear whether Google will only show these “positive” (whoever and however that gets defined) when people search for “extremist” content, or whether they’ll show Google ads to those whose email content reflects an interest in “extremist” material.

    See: “no reasonable expectation of privacy” re gmail (I searched on ixquick)

    The Guardian:

    ‘Not the worst thing Google does’

    Google’s ads use information gleaned from a user’s email combined with data from their Google profile as a whole, including search results, map requests and YouTube views, to display what it considers are relevant ads in the hope that the user is more likely to click on them and generate more advertising revenue for Google.

    Salon:

    On Wednesday it was revealed in the form of a legal filing, uncovered by Consumer Watchdog: Email users have “no reasonable expectation of privacy” for information passed through Google’s email server.

    The comment from Google’s lawyers came out in a June class action lawsuit in which the Internet leviathan is being challenged over Gmail’s feature for scanning emails to target ads. Plaintiffs claim that Google’s practice goes against wiretap laws, but Google’s lawyers argued otherwise, stating:

    Just as a sender of a letter to a business colleague cannot be surprised that the recipient’s assistant opens the letter, people who use web-based email today cannot be surprised if their emails are processed by the recipient’s [email provider] in the course of delivery. Indeed, ‘a person has no legitimate expectation of privacy in information he voluntarily turns over to third parties.’

    However, millions of Google users had not assumed that every day their electronic communications were being systematically scanned and read. But Google’s attorneys went so far as to say that this is simply “ordinary business practice.”

    http://www.theguardian.com/technology/2014/apr/15/gmail-scans-all-emails-new-google-terms-clarify

    http://www.salon.com/2013/08/14/gmail_users_have_no_reasonable_expectation_of_privacy/

  3. thatvisionthing says:

    “The Google business plan is the greatest threat to freedom on the internet, greater than the NSA, greater than any censor, because it’s directly making money from controlling what people know about. And that is a huge problem.” — Jaron Lanier, Backlight Talks — Amsterdam 2013-10-5

    He says it in a 15-minute youtube I downloaded last May (“Backlight Talks Jaron Lanier”?) but all I find now on youtube is (I think) the video in Dutch? (can’t see, sorry): https://www.youtube.com/watch?v=Oo1ZEu7kiBg

    It’s in the section “how did data change advertising” at about 4:30 minutes. — micromanagement of your options — “not behavior modification but behavior restriction” of what you’re allowed/desired to see and blinders on the other stuff.

    Like, is that youtube really down now, or am I not allowed to see it, I wonder? Have looked through first three pages of results and don’t find it. Now why would google…

  4. haarmeyer says:

    But does having Google use its algorithms to determine who is an extremist give the government a way to find out who Google thinks is an extremist?

    One would hope so, as an inevitable consequence. Because if that were an inevitable outcome, the boundaries the government must adhere to because of the 4th Amendment should finally make Google’s behavior illegal.

Comments are closed.