
Facebook, Hot Seat, Day Two — House Energy & Commerce Committee Hearing

This is a dedicated post to capture your comments about Facebook CEO Mark Zuckerberg’s testimony before the House Energy & Commerce Committee today.

After these two hearings my head is swimming with Facebook content, so much so that I had a nightmare about it overnight. Today’s hearing combined with the plethora of reporting across the internet is only making things more difficult for me to pull together a coherent narrative.

Instead, I’m going to dump some things here as food for further consideration and maybe a possible future post. I’ll update periodically throughout the day. Do share your own feedback in comments.

Artificial Intelligence (AI) — every time Mark Zuckerberg brings up AI, it is in the context of a task he does not want to employ humans to do. Zuckerberg doesn’t want to hire humans even when it means doing the right thing. There are so many indirect references to automated tools as substitutes for labor that it’s obvious Facebook is in part what it is today because it would rather make profits than hire humans until it is forced to do otherwise.

Users’ control of their data — this is bullshit whenever he says it. If any other entity can collect, copy, or see users’ data without explicit and granular authorization, users do not have control of their data. Why simple controls like granular read/not-read settings on users’ data, operated by users, have yet to be developed and implemented is beyond me; it’s not as if Facebook lacks the money and clout to make this happen.
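
For what it’s worth, the granular control described above is not technically exotic. Here is a minimal sketch, assuming a simple per-field allow-list model; all of the names and the data model below are illustrative, not Facebook’s actual implementation:

```python
# Hypothetical sketch of granular, user-operated read permissions on
# profile data. Not Facebook's data model; purely an illustration.
from dataclasses import dataclass, field

@dataclass
class ProfileField:
    value: str
    readable_by: set = field(default_factory=set)  # empty set = no one else

class Profile:
    def __init__(self, owner):
        self.owner = owner
        self.fields = {}

    def set_field(self, name, value):
        self.fields[name] = ProfileField(value)

    def grant(self, name, requester):
        # The user, not the platform, decides who may read each field.
        self.fields[name].readable_by.add(requester)

    def read(self, name, requester):
        f = self.fields[name]
        if requester == self.owner or requester in f.readable_by:
            return f.value
        raise PermissionError(f"{requester} may not read {name}")

p = Profile("alice")
p.set_field("birthday", "1990-01-01")
p.grant("birthday", "bob")
print(p.read("birthday", "bob"))      # allowed: alice granted bob access
# p.read("birthday", "advertiser")    # would raise PermissionError
```

The point of the sketch is that a default-deny, per-field, user-operated model is a few dozen lines of logic; the obstacle is business incentive, not engineering difficulty.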

Zuckerberg is also evasive about Facebook following users and nonusers across the internet — does browsing a non-Facebook website with an embedded Facebook widget allow tracking of the people who visit that site? It’s not clear from Zuckerberg’s statements.
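
The mechanism at issue is general to any embedded third-party content: when a page embeds a widget or pixel, the visitor’s browser fetches it from the third party’s servers, handing over the page URL and any cookie that third party previously set. A hedged sketch, with hypothetical field and cookie names (this is not Facebook’s actual implementation):

```python
# A sketch of what any third-party widget server can learn from a single
# page view. Field and cookie names are invented for illustration.
def log_widget_hit(headers, cookies, log):
    """Record what the widget's server sees when a browser fetches it."""
    log.append({
        "page_visited": headers.get("Referer"),    # URL of the host page
        "visitor_id": cookies.get("tracking_id"),  # persistent cookie, if set
        "user_agent": headers.get("User-Agent"),
    })

log = []
# Simulate a browser loading a news article that embeds the widget:
log_widget_hit(
    {"Referer": "https://example-news.com/article", "User-Agent": "Mozilla/5.0"},
    {"tracking_id": "abc123"},
    log,
)
print(log[0]["page_visited"])  # the widget owner now knows which page was read
```

Note that nothing here requires the visitor to have an account with the third party; a persistent cookie is enough to link page views across unrelated sites.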

Audio tracking — It’s a good thing that Congress has brought up the issue of “coincident” content appearing after users discuss topics within audible range of a mobile device. Rep. Larry Bucshon (R-Indiana) in particular offered pointed examples; we should remain skeptical of any explanation received so far because there are too many anecdotes of audio tracking in spite of Zuckerberg’s denials.

Opioid and other illegal ads — Zuckerberg insists that if users flag them, ads will be reviewed and then taken down. Congress is annoyed the ads still exist. But at the heart of this exchange is Facebook’s reliance on users to perform labor Facebook refuses to hire for in order to achieve the expected removal of ads. Meanwhile, Congress refuses to do its own job of increasing regulation of opioids, choosing instead to flog Facebook because it’s easier than going after donors like Big Pharma.

Verification of ad buyers — Ad buyers’ legitimacy based on verification of identity and physical location will be implemented for this midterm election cycle, Zuckerberg told Congress. Good luck with that when Facebook has yet to hire enough people to take down opioid ads or remove false accounts of public officials or celebrities.

First Amendment protections for content — Congressional GOP is beating on Facebook for what it perceives as consistent suppression of conservative content. This is a disinfo/misinfo operation happening right under our noses, and Facebook will cave just as it did in 2016 while news media look the other way since the material in question isn’t theirs. Facebook, however, has frequently suppressed neutral-to-liberal content — like content about and images featuring women breastfeeding their infants — and Congress isn’t uttering a peep about it. Nor is Congress asking any questions about how Facebook assesses content.

Connecting the world — Zuckerberg’s personal desire to connect humans is supreme over the nature and intent of the connections. The ability to connect militant racists, for example, takes supremacy (literally) over protecting minority group members from persecution. And Congress doesn’t appear willing to see this as problematic unless it violates existing laws like the Fair Housing Act.

More to come as I think of it. Comment away.

UPDATE — 2:45 PM EDT — I’m gritting my teeth so hard as I listen to this hearing that I’ve given myself a headache.

Terrorist content — Rep. Susan Brooks (R-Indiana) asked about Facebook’s handling of ISIS content, to which Zuckerberg said a team of 200 employees focuses on counterintelligence to remove ISIS and other terrorist content, capturing 99% of materials before they can be seen by the public. Brooks further asked what Facebook is doing about stopping recruitment.

What. The. Fuck? We’re expecting a publicly-held corporation to do counterintelligence work INCLUDING halting recruitment?

Hate speech — Zuckerberg used the word “nuanced” to describe its definition while under pressure from left and right. Oh, right, uh-huh, there’s never been a court case in which hate speech has been defined…*head desk*

Whataboutism — Again, from Michigan GOPr Tim Walberg, pointing to the 2012 Obama campaign…every time the 2012 campaign comes up, you know you are listening to a member of Congress who 1) doesn’t understand how Facebook is used and 2) is working to further the disinfo/misinfo campaign to ensure the public thinks Facebook is biased against the GOP.

It doesn’t help that Facebook’s AI has failed at screening GOP content; why candidates aren’t contacting a human-staffed department directly is beyond me. So is why the AI doesn’t interact directly with campaign/candidate users at the point of data entry to flag problematic content so it can be tweaked immediately.

Again, implication of discrimination against conservatives and Christians on Facebook — Thanks, Rep. Jeff Duncan, waving your copy of the Constitution insisting the First Amendment is applied equally and fairly. EXCEPT you’ve missed the part where it says CONGRESS SHALL MAKE NO LAW respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press…

The lack of complaints by Democratic and Independent representatives about suppression of content should NOT be taken to mean it hasn’t happened. That Facebook allowed identified GOP-voting employees to work with Brad Parscale means that suppression happens in subtle ways. There’s also a different understanding between right and left wings about Congress’ limitation under the First Amendment AND Democrats/Independents aren’t trying to use these hearings as agitprop.

Internet service — CONGRESS NEEDS TO STOP ASKING FACEBOOK TO HELP FILL IN THE GAPS BETWEEN NETWORKS AND INTERNET SERVICE PROVIDERS THEY HAVE FAILED TO REGULATE TO ENSURE BROADBAND EVERYWHERE. Jesus Christ this bugs the shit out of me. Just stop asking a corporation to do your goddamned jobs; telcos have near-monopolies ensured by Congress and aren’t acting in the best interest of the public but of their shareholders. Facebook will do the same thing — serve shareholders but not the public interest. REGULATE THE GAP, SLACKERS.

UPDATE — 3:00 PM EDT — Thank heavens this beating is over.

Three more thoughts:

1) Facial recognition technology — non-users should NEVER be subjected to this technology, EVER. Facebook users should have extremely simple and clear opt-in/opt-out controls for facial recognition.

2) Medical technology — absolutely not ever in social media. No. If a company is not in the business of providing health care, it has no business collecting health care data. Period.

3) Application approval — Ask Apple how to do it. They do it, app by app. Facebook is what happens when apps aren’t approved first.

UPDATE — 9:00 PM EDT — Based on a question below from commenter Mary McCurnin about HIPAA, I am copying my reply here to flesh out my concerns about Facebook and medical data collection and sharing:

HIPAA regulates health data sharing between “covered entities,” meaning health care clearinghouses, employer-sponsored health plans, health insurers, and medical service providers. Facebook had secretly assigned a doctor to promote a proposal to certain covered entities for a test or beta program; the program has now been suspended. The fact that this project was secret and intended to operate under a signed agreement, rather than through a walled-off Facebook subsidiary working within the existing law, tells me that Facebook never intended to operate within HIPAA. The hashing concept proposed for early work, which still relied on actual user data, is absurdly arrogant in its dismissal of HIPAA.
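
To make the hashing point concrete, here is a minimal sketch of the matching concept as typically proposed; the identifiers and data below are invented for illustration, not taken from Facebook’s proposal. Each party hashes a normalized identifier and compares digests, so raw names never change hands. But anyone who already holds the raw identifier can compute the same hash and re-identify the record, which is why hashing alone is weak protection:

```python
import hashlib

def match_key(identifier: str) -> str:
    """Normalize then hash an identifier so raw values are never exchanged."""
    return hashlib.sha256(identifier.strip().lower().encode()).hexdigest()

# Invented example data: a covered entity's records keyed by hashed email,
# and a platform's own user list keyed by raw email.
hospital_records = {match_key("alice@example.com"): {"condition": "redacted"}}
platform_users = {"alice@example.com": "user_001"}

# The platform already holds raw identifiers, so it can compute the same
# hashes and re-identify "anonymized" records:
for email, user_id in platform_users.items():
    if match_key(email) in hospital_records:
        print(f"{user_id} matched to a medical record")
```

The hash hides nothing from a party that holds the same identifier list, which is the whole point of a matching program; “hashed” is not the same as “de-identified.”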

Just as disturbing: virtually nothing in the way of questions from Congress about this once-secret program. The premise, which is little more than a normalized form of surveillance using users’ health as a criterion, is absolutely unacceptable.

I don’t believe ANY social media platform should be in the health care data business. The breach of the U.S. Office of Personnel Management should have given Congress enough to ponder about the intelligence risks of employment records exposed to foreign entities; imagine the risks if health care data had been included with OPM employment information. Now imagine that at scale across the U.S.: how many people would be vulnerable, in how many ways, if their health care information were exposed along with their social records.

Don’t even start with how great it would be to dispatch health care to people in need; we can’t muster the political will to pay for health care for everybody. Why provide monitoring at scale through social media when covered entities can do it for their subscriber base separately, and apparently with fewer data breaches?

You want a place to start regulating social media platforms? Start there: no health care data to mingle with social media data. Absolutely not, hell to the no.

Yet More Proof Facebook’s Surveillance Capitalism Is Good at Surveilling — Even Russian Hackers

I’ve long tracked Facebook’s serial admission to having SIGINT visibility that nearly rivals the NSA: knowing that Facebook had intelligence corroborating NSA’s judgment that GRU was behind the DNC hack was one reason I was ultimately convinced of the IC’s claims, in spite of initial questions.

Among all his evasions and questionably correct answers in Senate testimony yesterday, Mark Zuckerberg provided another tidbit about the visibility Facebook had on the 2016 attacks.

One of my greatest regrets in running the company is that we were slow in identifying the Russian information operations in 2016. We expected them to do a number of more traditional cyberattacks, which we did identify, and notified the campaigns, that they were trying to hack into them. But we were slow to identifying [sic] the type of new information operations.

Not only did Facebook see GRU’s operations in real time, but they notified “the campaigns” about them.

Note, Zuck didn’t describe the targets in any more detail than “campaigns.” That led Robby Mook to dispute Zuck, eliciting more details from Facebook CISO Alex Stamos.

Aside from illustrating how routinely those involved in and covering the 2016 hacks confuse the possible affected targets (resulting in some real misunderstanding of what happened), Stamos’ clarification provides important new details: these hacks affected both the DNC and RNC’s key employees, and Facebook alerted the FBI (something we’ve previously heard).

The DNC likes to claim they never got any warning they were being hacked. But apparently, in addition to the FBI’s serial attempts to lead them to discover Russia was hacking them, Facebook let them know too.

Elsewhere in his testimony, Zuck got coy about the degree to which Facebook remains involved in the Mueller investigation, a fact that should have been obvious to anyone who has read the Internet Research Agency indictment, but which numerous news outlets treated as news anyway.

Facebook has a lot to answer for (this David Dayen piece on yesterday’s testimony is superb).

But one thing that has continued to trickle out is that Facebook’s surveillance capitalism is good at what it’s designed for: surveillance, including of Russian hackers.

Facebook on the Hot Seat Before Senate Judiciary Committee

This is a dedicated post to capture your comments about Facebook CEO Mark Zuckerberg’s testimony before the Senate Judiciary Committee this afternoon. At the time of this post Zuckerberg has already been on the hot seat for more than two hours and another two hours is anticipated.

Even before today’s hearing I had begun to think that Facebook’s oligopolistic position and its decade-plus inability to effectively police its own operation require a different approach than merely increasing regulation. While Facebook isn’t the only corporation monetizing users’ data as its core business model, its platform has become so ubiquitous that it is difficult to use a broad swath of online services without a Facebook login (or one from a very small number of competing platforms like Google or Twitter).

If Facebook’s core mission is connecting people with a positive experience, it should be regulated like a telecommunications provider — they, too, are connectors — or it should be taken public like the U.S. Postal Service. USPS, after all, is about connecting individual and corporate users by mediating exchange of analog data.

The EU’s General Data Protection Regulation (GDPR) offers a potential starting point as a model for U.S. regulation of Facebook and other social media platforms. GDPR will shape both users’ expectations and Facebook’s service whether the U.S. is on board or not; for that reason we ought to treat GDPR as a baseline, adapted to be compatible with the First Amendment and existing law like the Computer Fraud and Abuse Act (CFAA).

What aggravates me as I watch this hearing is Zuckerberg’s obvious inability to grasp nuance, whether divisions in political ideology or the fuzzy line between businesses’ interests and users’ rights. I don’t know if regulation will be enough if Facebook (manifest in Zuckerberg’s attitude) can’t fully and willingly comply with the Federal Trade Commission’s 2011 consent decree protecting users’ privacy. It’s possible fines for violations of this consent decree arising from the Cambridge Analytica/SCL abuse of users’ data might substantively damage Facebook; will we end up “owning” Facebook before we can even regulate it?

Have at it in comments.

UPDATE — 6:00 PM EDT — One of my senators, Gary Peters, just asked Zuck about audio capture, whether Facebook uses audio technology to listen to users in order to place ads relevant to users’ conversational topics. Zuck says no, which is really odd given the number of anecdotes floating around about ads popping up related to topics of conversation.

It strikes me this is one of the key problems with regulating social media: we are dealing with a technology which has outstripped its users AND its developers, evident in the inability to discuss Facebook’s operations with real fluency on either the part of government or its progenitor.

This is the real danger of artificial intelligence (AI) used to “fix” Facebook’s shortcomings; not only does Facebook not understand how its app is being abused, it can’t assure the public it can prevent AI from being flawed or itself being abused because Facebook is not in absolute control of its platform.

Zuckerberg called the Russian influence operation an ongoing “arms race.” Yeah — imagine arms made and sold by a weapons purveyor who has serious limitations understanding their own weapons. Gods help us.

EDIT — 7:32 PM EDT — Committee is trying to wrap up, Grassley is droning on in old-man-ese about defending free speech but implying at the same time Facebook needs to help salvage Congress’ public image. What a dumpster fire.

Future shock. Our entire society is suffering from future shock, unable to grasp the technology it relies on every day. Even the guy who launched Facebook can’t say with absolute certainty how his platform operates. He can point to the users’ Terms of Service but he can’t say how any user or the government can be absolutely certain users’ data is fully deleted if it goes overseas.

And conservatives aren’t going to like this one bit, but they are worst off as a whole. They are older on average, including in Congress, and they struggle with usage, let alone the implications and fundamentals of social media technology itself. They haven’t moved far enough past the late Alaska Senator Ted Stevens’ understanding of the internet as a “series of tubes.”

Cambridge Analytica Uncovered and More to Come

A little recap of events overnight while we wait for Channel 4’s next video. Channel 4 had already posted a video on March 17 which you can see here:

Very much worth watching — listen carefully as whistleblower Chris Wylie explains what data was used and how it was used. I can’t emphasize enough the problem of non-consensual use: if you didn’t explicitly consent but a friend did, your data was still swept up.

David Carroll of Parsons School of Design (@profcarroll) offered a short and sweet synopsis last evening of the fallout after UK’s Channel 4 aired the first video of Cambridge Analytica Uncovered.

Facebook Chief Security Officer Alex Stamos had a disagreement with management about the company’s handling of the crisis; first reports said he had resigned. Stamos tweeted later, explaining:

“Despite the rumors, I’m still fully engaged with my work at Facebook. It’s true that my role did change. I’m currently spending more time exploring emerging security risks and working on election security.”

Other reports say Stamos is leaving in August. Both could be true: his job has changed and he’s eventually leaving.

I’m betting we will hear from him before Congress soon, whatever the truth.

Speaking of Congress, Sen. Ron Wyden has asked Mark Zuckerberg to provide a lot of information pronto to staffer Chris Soghoian. This ought to be a lot of fun.

A Facebook whistleblower has now come forward; Sandy Parakilas said covert harvesting of users’ data happened frequently, and Facebook could have done something about it.

Perhaps we ought to talk about nationalization of a citizens’ database?

Facebook Anonymously Admits It IDed Guccifer 2.0 in Real Time

The headline of this story focuses on how Obama, in the weeks after the election and nine days before the White House declared the election “free and fair from a cybersecurity perspective,” begged Mark Zuckerberg to take the threat of fake news seriously.

Now huddled in a private room on the sidelines of a meeting of world leaders in Lima, Peru, two months before Trump’s inauguration, Obama made a personal appeal to Zuckerberg to take the threat of fake news and political disinformation seriously. Unless Facebook and the government did more to address the threat, Obama warned, it would only get worse in the next presidential race.

But 26 paragraphs later, WaPo reveals a detail that should totally change the spin of the article: in June, Facebook not only detected APT 28’s involvement in the operation (which I heard at the time), but also informed the FBI about it (which, along with the further details, I didn’t).

It turned out that Facebook, without realizing it, had stumbled into the Russian operation as it was getting underway in June 2016.

At the time, cybersecurity experts at the company were tracking a Russian hacker group known as APT28, or Fancy Bear, which U.S. intelligence officials considered an arm of the Russian military intelligence service, the GRU, according to people familiar with Facebook’s activities.

Members of the Russian hacker group were best known for stealing military plans and data from political targets, so the security experts assumed that they were planning some sort of espionage operation — not a far-reaching disinformation campaign designed to shape the outcome of the U.S. presidential race.

Facebook executives shared with the FBI their suspicions that a Russian espionage operation was in the works, a person familiar with the matter said. An FBI spokesperson had no immediate comment.

Soon thereafter, Facebook’s cyber experts found evidence that members of APT28 were setting up a series of shadowy accounts — including a persona known as Guccifer 2.0 and a Facebook page called DCLeaks — to promote stolen emails and other documents during the presidential race. Facebook officials once again contacted the FBI to share what they had seen.

Like the U.S. government, Facebook didn’t foresee the wave of disinformation that was coming and the political pressure that followed. The company then grappled with a series of hard choices designed to shore up its own systems without impinging on free discourse for its users around the world. [my emphasis]

But the story doesn’t provide the details you would expect from such disclosures.

For example, where did Facebook see Guccifer 2.0? Did Guccifer 2.0 try to set up a Facebook account? Or, as sounds more likely given the description, did he/they use Facebook as a signup for the WordPress site?

More significantly, what did Facebook do with the DC Leaks account, described explicitly?

It seems Facebook identified, and — at least in the case of DC Leaks — shut down an APT 28 attempt to use its infrastructure. And it told FBI about it, at a time when the DNC was withholding its server from the FBI.

This puts this passage from Facebook’s April report, which I’ve pointed to repeatedly, in very different context.

Facebook is not in a position to make definitive attribution to the actors sponsoring this activity. It is important to emphasize that this example case comprises only a subset of overall activities tracked and addressed by our organization during this time period; however our data does not contradict the attribution provided by the U.S. Director of National Intelligence in the report dated January 6, 2017.

In other words, Facebook had reached this conclusion back in June 2016, and told FBI about it, twice.

And then what happened?

Again, I’m sympathetic to the urge to blame Facebook for this election. But this article describes Facebook’s heavy-handed efforts to serve as a wing of the government policing terrorist content, without revealing that Facebook has sometimes erred by censoring content that shouldn’t have been censored. Then it reveals Facebook reported Guccifer 2.0 and DC Leaks to FBI, twice, with no further description of what FBI did with those leads.

Yet from all that, it headlines Facebook’s insufficient efforts to track down other abuses of the platform.

I’m not sure what the answer is. But it sounds like Facebook was more forthcoming with the FBI about APT 28’s efforts than the DNC was.

Amid Promises to Share Ads with Congress, Some Other Interesting Promises

DC is atwitter with Facebook’s announcement that it can, after all, voluntarily share with Congress the same information it shared with Robert Mueller. As part of that announcement, it released a statement from its General Counsel, a Q&A addressing some of the questions that had been generating bad PR, and promises from Mark Zuckerberg of additional things Facebook will do to support democracy.

I’m most interested in two details in Zuck’s statement. For example, this paragraph says Facebook will continue to look at what happened closely.

 We will continue our investigation into what happened on Facebook in this election. We may find more, and if we do, we will continue to work with the government. We are looking into foreign actors, including additional Russian groups and other former Soviet states, as well as organizations like the campaigns, to further our understanding of how they used our tools. These investigations will take some time, but we will continue our thorough review. [my emphasis]

While the frenzy responding to this announcement has focused on Russian ads, Zuck just revealed that Facebook is also looking at what the campaigns did.

That would permit Facebook to look for any apparently similar activity from campaigns and Russian actors, as we have reason to believe there was. It also might suggest Facebook is reviewing to see whether Republican dark marketing served to suppress turnout, and if so in coordination with what other actors.

I’d really love to have this information, but note that it is a substantially different thing for Facebook to review Russian actions and for Facebook to review Democratic or Republican actions.

Then there’s the promise to work even more closely with other tech companies.

We will increase sharing of threat information with other tech and security companies. We already share information on bad actors on the internet through programs like ThreatExchange, and now we’re exploring ways we can share more information about anyone attempting to interfere with elections. It is important that tech companies collaborate on this because it’s almost certain that any actor trying to misuse Facebook will also be trying to abuse other internet platforms too.

I think I’m okay with this (and they’re legally permitted to do this in any case). But given my newfound obsession with the fact that with any of these global tech companies, you’re dealing with intelligence resources that might rival nation-state intelligence, I’m interested in Facebook’s efforts to expand the sharing.

Facebook, by itself, may not rival the NSA. But when you put together Facebook, Microsoft, Google, Twitter, and others, then you’re beginning to talk really powerful intelligence capabilities.

It’s good, I suppose, that that much technical power is going to hunt down Russians. But it might be worth pausing to imagine what else they might cooperate to hunt down.

Look Closer to Home: Russian Propaganda Depends on the American Structure of Social Media

The State Department’s Undersecretary for Public Diplomacy, Richard Stengel, wanted to talk about his efforts to counter Russian propaganda. So he called up David Ignatius, long a key cut-out the spook world uses to air their propaganda. Here’s how the column that resulted starts:

“In a global information war, how does the truth win?”

The very idea that the truth won’t be triumphant would, until recently, have been heresy to Stengel, a former managing editor of Time magazine. But in the nearly three years since he joined the State Department, Stengel has seen the rise of what he calls a “post-truth” world, where the facts are sometimes overwhelmed by propaganda from Russia and the Islamic State.

“We like to think that truth has to battle itself out in the marketplace of ideas. Well, it may be losing in that marketplace today,” Stengel warned in an interview. “Simply having fact-based messaging is not sufficient to win the information war.”

It troubles me that the former managing editor of Time either believes that the “post-truth” world just started in the last three years or that he never noticed it while at Time. I suppose that could explain a lot about the failures of both our “public diplomacy” efforts and traditional media.

Note that Stengel sees the propaganda war as a battle in the “marketplace of ideas.”

It’s not until 10 paragraphs later — after Stengel and Ignatius air the opinion that “social media give[s] everyone the opportunity to construct their own narrative of reality” and a whole bunch of inflamed claims about Russian propaganda — that Ignatius turns to the arbiters of that marketplace: the almost entirely US-based companies that provide the infrastructure of this “marketplace of ideas.” Even there, Ignatius doesn’t explicitly consider what it means that these are American companies.

The best hope may be the global companies that have created the social-media platforms. “They see this information war as an existential threat,” says Stengel. The tech companies have made a start: He says Twitter has removed more than 400,000 accounts, and YouTube daily deletes extremist videos.

The real challenge for global tech giants is to restore the currency of truth. Perhaps “machine learning” can identify falsehoods and expose every argument that uses them. Perhaps someday, a human-machine process will create what Stengel describes as a “global ombudsman for information.”

Watch this progression very closely: Stengel claims social media companies see this war as an existential threat. He then points to efforts — demanded by the US government under threat of legislation, though that goes unmentioned — by social media companies to remove “extremist videos,” with extremist videos generally defined as Islamic terrorist videos. Finally, Stengel pins his hope on a machine-learning global ombud for information to solve this problem.

Stengel’s description of the problem reflects several misunderstandings.

First, the social media companies don’t see this as an existential threat (though they may see government regulation as such). Even after Mark Zuckerberg got pressured into taking some steps to stem the fake news that had been key in this election — spread by Russia, right wing political parties, Macedonian teenagers, and US-based satirists — he sure didn’t sound like he saw any existential threat.

After the election, many people are asking whether fake news contributed to the result, and what our responsibility is to prevent fake news from spreading. These are very important questions and I care deeply about getting them right. I want to do my best to explain what we know here.

Of all the content on Facebook, more than 99% of what people see is authentic. Only a very small amount is fake news and hoaxes. The hoaxes that do exist are not limited to one partisan view, or even to politics. Overall, this makes it extremely unlikely hoaxes changed the outcome of this election in one direction or the other.

[snip]

This has been a historic election and it has been very painful for many people. Still, I think it’s important to try to understand the perspective of people on the other side. In my experience, people are good, and even if you may not feel that way today, believing in people leads to better results over the long term.

And that’s before you consider reports that Facebook delayed efforts to deal with this problem for fear of offending conservatives, or the way Zuckerberg’s posts seem to have been disappearing and reappearing like a magician’s bunny.

Stengel then turns to efforts to target two subsets of problematic content on social media: terrorism videos (defined in a way that did little or nothing to combat other kinds of hate speech) and fake news.

The problem with this whack-a-mole approach to social media toxins is that it ignores the underlying wiring, both of social media and of the people using social media. The problem seems to have more to do with how social media magnifies normal characteristics of humans and their tribalism.

[T]wo factors—the way that anger can spread over Facebook’s social networks, and how those networks can make individuals’ political identity more central to who they are—likely explain Facebook users’ inaccurate beliefs more effectively than the so-called filter bubble.

If this is true, then we have a serious challenge ahead of us. Facebook will likely be convinced to change its filtering algorithm to prioritize more accurate information. Google has already undertaken a similar endeavor. And recent reports suggest that Facebook may be taking the problem more seriously than Zuckerberg’s comments suggest.

But this does nothing to address the underlying forces that propagate and reinforce false information: emotions and the people in your social networks. Nor is it obvious that these characteristics of Facebook can or should be “corrected.” A social network devoid of emotion seems like a contradiction, and policing who individuals interact with is not something that our society should embrace.

And if that’s right — which would explain why fake or inflammatory news would be uniquely profitable for people who have no ideological stake in the outcome — then the wiring of social media needs to be changed at a far more basic level to neutralize the toxins (or social media consumers have to become far more savvy in a vacuum of training to get them to do so).

But let’s take a step back to the way Ignatius and Stengel define this. The entities struggling with social media include more than the US, with its efforts to combat Islamic terrorist and Russian propagandist content it hates. It includes authoritarian regimes that want to police content (America’s effort to combat content it hates in whack-a-mole fashion will only serve to legitimize those efforts). It also includes European countries, which hate Russian propaganda, but which also hate social media companies’ approach to filtering and data collection more generally.

European bureaucrats and activists, to just give one example, think social media’s refusal to stop hate speech is irresponsible. They see hate speech as a toxin just as much as Islamic terrorism or Russian propaganda. But the US, which is uniquely situated to pressure the US-based social media companies facilitating the spread of hate speech around the world, doesn’t much give a damn.

European bureaucrats and activists also think social media companies collect far too much information on their users; that information is one of the things that helps social media better serve users’ tribal instincts.

European bureaucrats also think American tech companies serve as a dangerous gateway monopolizing access to information. The dominance of Google’s ad network has been key to monetizing fake and other inflammatory news (though Google started, post-election, to crack down on fake news sites advertising through its network).

The point is, if we’re going to talk about the toxins that poison the world via social media, we ought to consider the ways in which social media — enabled by characteristics of America’s regulatory regime — is structured to deliver toxins.

It may well be that the problem behind America’s failures to compete in the “marketplace of ideas” has everything to do with how America has fostered a certain kind of marketplace of ideas.

The anti-Russian crusade keeps warning that Russian propaganda might undermine our own democracy. But there’s a lot of reason to believe red-blooded American social media — the specific characteristics of the global marketplace of ideas created in Silicon Valley — is what has actually done that.

Update: In the UK, Labour’s Shadow Culture Secretary, Tom Watson, is starting a well-constructed inquiry into fake news. One question he asks is the role of Twitter and Facebook.

Update: Here’s a summary of fake news around the world, some of it quite serious, though without a systematic look at Facebook’s role in it.

Monday: A Border Too Far

In this roundup: Turkey, pipelines, and a border not meant to be crossed.

It’s nearly the end of the final Monday of 2016’s General Election campaign season. This shit show is nearly over. Thank every greater power in the universe we made it this far through these cumulative horrors.

Speaking of horrors, this Monday’s movie short is just that — a simple horror film, complete with plenty of bloody, gritty gore. It’s rated mature, not for any adult content but for its violence. The film is about illegal immigrants who want more from life, but it plays with the concepts of alien identity and zombie-ism. Who are the illegals, the aliens, the zombies? What is the nature of the predator and their prey? Does a rational explanation for the existence of the monstrous legitimize the horror they perpetrate in any way?

The logline for this film includes an even shorter tag line: Some borders aren’t meant to be crossed. This is worth meditating on after the horrors we’ve seen these past six months. Immigrants and refugees aren’t the monsters. And women aren’t feeble creatures to be marginalized and counted out.

Should also point out this film’s production team is mostly Latin American. This is the near-future of American storytelling and film. I can’t wait for more.

Tough Turkey
The situation in Turkey is extremely challenging, requiring diplomacy a certain Cheeto-headed candidate is not up to handling and will screw up if he places his own interests ahead of those of the U.S. and the rest of the world.

  • Luxembourg’s foreign minister compares Erdoğan’s purge to Nazi Germany (Deutsche Welle) — Yeah, I can’t argue with this when a political party representing an ethnic minority and a group sharing religious dogma are targeted for removal from jobs, arrest and detention.
  • Op-Ed: Erdoğan targeting critics of all kinds (Guardian) — Yup. Media, judges, teachers, persons of Kurdish heritage or Gulenist religious bent, secularists, you name it. Power consolidation in progress. Democracy, my left foot.
  • HDP boycotts Turkish parliament after the arrest of its leaders (BBC) — Erdoğan claimed the arrested HDP leaders were in cahoots with the PKK, a Kurdish group identified as a terrorist organization. You’ll recall HDP represents much of Turkey’s Kurdish minority. But Erdoğan also said he doesn’t care if the EU calls him a dictator; he said the EU abets terrorism. Sure. Tell the cities of Paris and Brussels that one. Think Erdoğan has been taking notes from Trump?
  • U.S. and Turkish military leaders meet to work out Kurd-led ops against ISIS (Guardian) — Awkward. Turkish military officials were still tetchy about an arrangement in which Kurdish forces would act against ISIS in Raqqa, Syria, about 100 miles east of Aleppo. The People’s Protection Units (YPG) militia — the Kurdish forces — will work in concert with Arab members of the Syrian Democratic Forces (SDF) coalition in Raqqa to remove ISIS. Initial blame aimed at the PKK for a car bomb after HDP members were arrested heightened existing tensions between Erdoğan loyalists and the Kurds, though ISIS later took responsibility for the deadly blast. Depending on whose account one reads, the Arab contingent of the SDF, rather than Kurdish forces, will lead the effort. Turkey attacked YPG forces back in August while YPG and Turkey were both supposed to be routing ISIS.

In the background behind Erdoğan’s moves to consolidate power under the Turkish presidency and the fight to eliminate ISIS from Syria and neighboring territory, there is a struggle for control of oil and gas moving through or by Turkey.

Russia lost considerable revenue after oil prices crashed in 2014. A weak ruble has helped, but to replace revenue lost to oil’s lower price, Russia has increased output to record levels. Increasing supply only reduces prices further, especially when Saudi Arabia, other OPEC producers, and Iran cannot agree upon and implement a production limit. If Russia will not likewise agree to production curbs, oil prices will remain low and Russia’s revenues will continue to flag.

Increasing pipelines for both oil and gas could bolster revenues, however. Russia can literally throttle supply near its end of hydrocarbon pipelines and force buyers in the EU and everywhere in between to pay higher rates — the history of Ukrainian-Russian pipeline disputes demonstrates this strategy. Bypassing Ukraine altogether would help Russia avoid both established rates and conflict there with the west. The opportunities encourage Putin to deal with Erdoğan, renormalizing relations after Turkey shot down a Russian jet last November. Russia and Turkey had met in summer of 2015 to discuss a new gas pipeline; they’ve now met again in August and in October to return to plans for funding the same pipeline.

A previous pipeline ‘war’ between Russia and the west ended in late 2014. This conflict may only have been paused, though. Between Russia’s pressure to sell more hydrocarbons to the EU, threats to pipelines from PKK-attributed terrorism and ISIS warfare near Turkey’s southern border, and implications that Erdoğan has been involved in ISIS’ sales of oil to the EU, Erdoğan may be willing to drop pursuit of EU membership to gain more internal control and profit from Russia’s desire for more hydrocarbon revenues. In the middle of all this mess, Erdoğan has expressed a desire to reinstate the death penalty for alleged coup plotters and dissenters — a border too far for EU membership, since the death penalty is not permitted under EU law.

This situation requires far more diplomatic skill than certain presidential candidates will be able to muster. Certainly not from a candidate who doesn’t know what Aleppo is, and certainly not from a candidate who thinks he is the only solution to every problem.

Cybery miscellany

That’s it for now. I’ll put up an open thread dedicated to all things election in the morning. Brace yourselves.

Thursday Morning: Snowed In (Get It?)

Yes, it’s a weak information security joke, but it’s all I have after shoveling out.

Michigan’s winter storm expanded and shifted last night; Marcy more than caught up on her share of snow in her neck of the woods after all.

Fortunately nothing momentous in the news except for the weather…

Carmaker Nissan’s LEAF online service w-i-d-e open to hackers
Nissan shut down its Carwings app service, which controls the LEAF model’s climate control systems. Carwings allows vehicle owners to check information about their cars remotely. Some LEAF owners conducted a personal audit and hacked themselves, discovering their cars were vulnerable to hacking by nearly anyone else: hackers needed only the VIN as a user ID, with no other authentication, to access a vehicle’s Carwings account. You’d think by now all automakers would have instituted two-factor authentication at a minimum on any online service.
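A minimal sketch of why a VIN alone makes a terrible credential — this is hypothetical illustration, not Nissan’s actual code or API: VINs are structured, largely public identifiers, so cars of the same model share a long common prefix and differ only in a short serial tail an attacker can simply enumerate.

```python
def authorize_vin_only(vin: str, registered: set) -> bool:
    """The flawed pattern: grant access if the VIN is known,
    with no password, token, or second factor required."""
    return vin in registered


def enumerate_candidate_vins(prefix: str, digits: int = 5):
    """An attacker who knows a model's shared VIN prefix need only
    walk the numeric serial suffix (here, the last `digits` characters)."""
    for serial in range(10 ** digits):
        yield f"{prefix}{serial:0{digits}d}"


# Hypothetical 17-character VIN: with a 5-digit serial tail there are
# only 100,000 candidates per prefix -- trivial to scan against an
# endpoint that accepts the VIN as its sole credential.
registered = {"SJNFAAZE0U6001234"}
hits = [v for v in enumerate_candidate_vins("SJNFAAZE0U60", 5)
        if authorize_vin_only(v, registered)]
print(hits)  # → ['SJNFAAZE0U6001234']
```

Requiring any per-owner secret alongside the VIN — a password, let alone a second factor — would defeat this enumeration, which is why VIN-only access is such a striking design choice.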

Researcher says hardware hack of iPhone may be possible
With “considerable financial resources and acumen,” a hardware-based attack may work against iPhone’s passcode security. The researcher noted such an attempt would be very risky and could destroy any information sought in the phone. Tracing power usage could also offer another opportunity at cracking an iPhone’s passcode, but the know-how is very limited in the industry. This bit from the article is rather interesting:

IOActive’s Zonenberg, meanwhile, told Threatpost that an invasive hardware attack is likely also in the National Security Agency’s arsenal; the NSA has been absent from discussions since this story broke last week.

“It’s been known they have a semiconductor [fabrication] since January 2001. They can make chips. They can make software. They can break software. Chances are they can probably break hardware,” he said. “How advanced they were, I cannot begin to guess.”

The NSA has been awfully quiet about the San Bernardino shooter’s phone, hasn’t it?

‘Dust Storm’: Years-long cyber attacks focused on intel gathering from Japanese energy industry
“[U]sing dynamic DNS domains and customized backdoors,” a nebulous group has focused for five years on collecting information from energy-related entities in Japan. The attacks were not limited to Japan, but attacks outside Japan by this same group led back in some way to Japanese hydrocarbon and electricity generation and distribution. ‘Dust Storm’ approaches have evolved over time, from zero-day exploits to spearphishing and Android trojans. There’s something about this concerted, focused campaign which sounds familiar — rather like the attackers who hacked Sony Pictures. And backdoors…what is it about backdoors?

ISIS threatens Facebook’s Zuckerberg and Twitter’s Dorsey
Which geniuses in U.S. government both worked on Mark Zuckerberg and Jack Dorsey about cutting off ISIS-related accounts AND encouraged revelation about this effort? Somebody has a poor grasp on opsec, or puts a higher value on propaganda than opsec.

Wonder if the same geniuses were behind this widely-reported meeting last week between Secretary of State John Kerry and Hollywood executives. Brilliant.

Case 98476302, Don’t text while walking
So many people claimed to have bumped their heads on a large statue while texting that the statue was moved. The stupid, it burns…or bumps, in this case.

House Select Intelligence Committee hearing this morning on National Security World Wide Threats.
Usual cast of characters will appear, including CIA Director John Brennan, FBI Director James Comey, National Counterterrorism Center Director Nicholas Rasmussen, NSA Director Admiral Michael Rogers, and Defense Intelligence Agency Director Lieutenant General Vincent Stewart. Catch it on C-SPAN.

Snow’s supposed to end in a couple hours, need to go nap before I break out the snow shovels again. À plus tard!