What We Talk About When We Talk About AI (Part Three)

Proteins, Factories, and Wicked Solutions

Part 3 – But What is AI Good For?

(Go to Part Two)

There are many frames and metaphors to help us understand our AI age. But one comes up especially often, because it is useful, and perhaps even a bit true: the Golem of Jewish lore. The most famous golem is the Golem of Prague, a magical defender of Jews in the hostile world of European gentiles. But that was far from the only golem in Jewish legends.

The golem was often a trustworthy and powerful servant in traditional Jewish stories — but only when in the hands of a wise rabbi. To create a golem proved a rabbi’s mastery over Kabbalah, a mystical interpretation of Jewish tradition. It was esoteric and complex to create this magical servant of mud and stone. It was brought to life with sacred words on a scroll pressed into the mud of its forehead. With that, the inanimate mud became person-like and powerful. That it echoed life being granted to the clay Adam was no coincidence. These were deep and even dangerous powers to call on, even for a wise rabbi.

You’re probably seeing where this is going.

Mostly a golem was created to do difficult or dangerous tasks, the kind of things we fleshy humans aren’t good at. This is because we are too weak, too slow, or inconveniently too mortal for the work at hand.

The rabbi activated the golem to protect the Jewish community in times of danger, or used it when a task was simply too onerous for people to do. The golem could solve problems that were not, per se, impossible to solve without supernatural help, but were a lot easier with a giant clay dude who could do the literal and figurative heavy lifting. They could redirect rivers and move great stones with ease. They were both more and less than the humans who created and controlled them, able to do amazing things, but also tricky to manage.

When a golem wasn’t needed, the rabbi put it to rest, which was the fate of the Golem of Prague. The rabbi switched off his creation by removing the magic scroll he had pressed into the forehead of the clay man.

Our Servants, Ourselves

The parallels with our AIs are not subtle.

If the golem was not well managed, it could become a violent horror, mindlessly ripping up anything in its path. The metaphors for technology aside, what makes the golem itself such a useful idea for talking about AI is how human-shaped it is, both literally and in its design as the ultimate desirable servant. The people of Prague mistook the golem for a man, and we mistake AI for a human-like mind.

Eventually, the rabbis put the golems away forever, but they had managed to do useful things that made life easier for the community. At least, sometimes. Sometimes, the golems got out of hand.

It is unlikely that we’re going to put our new AI golem away any time soon, but it seems possible that after this historical moment of collective madness, we will find a good niche for it, because our AI golems are very good at doing some important things humans are naturally bad at, and don’t enjoy doing anyway.

Folding Proteins for Fun and Profit

The AlphaFold 3 logo

Perhaps the first famous example of our AI golem surpassing our human abilities is AlphaFold, Google’s protein-folding AI. After throwing many technological tools at the problem of predicting how proteins shape themselves in different circumstances, Google’s specialist AI was able to predict folding patterns in roughly two-thirds of cases. AlphaFold is very cool, and could be an amazing tool for technology and health one day. Understanding protein folding has implications for understanding disease processes and for drug discovery, among other things.

If this seems like a hand-wavy explanation of AlphaFold, it’s because I’m waving my hands wildly. I don’t understand that much about AlphaFold — which is also my point. Good and useful AI tends to be specialized to do things humans are bad at. We are not good golems, either in terms of doing very difficult and heavy tasks, or of paying complete attention to complex (and boringly repetitive) systems. That’s just not how human attention works.

One of our best golem-shaped jobs is dealing with turbulence. If you’ve dealt with physics in a practical application, anything from weather prediction to precision manufacturing, you know that turbulence is a terrible and crafty enemy. It can become impossible to calculate or predict. Often, by the time turbulent problems are perceivable by humans or even normal control systems, you’re already in trouble. But an application-specific AI, in, for instance, a factory, can detect the beginning of a component failure below the threshold of both human and normal programmatic detection.

A well-trained bespoke AI can catch the whine of trouble before the human-detectable turbulence starts, because it has essentially “listened” to how the system works, deeply and over time. That’s its whole existence. It’s a golem doing the one or two tasks it has been “brought to life” to do. It thrives on the huge data sets that defeat human attention. Instead of a man-shaped magical mud being, it’s a listener, shaped by data, tirelessly listening for the whine of trouble in the system that is its whole universe.
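
To make that concrete, here is a toy sketch of that kind of listener in Python. It is not any real factory system; the idea of a stream of vibration readings, the window size, and the alarm threshold are all assumptions for illustration. The point is only the shape of the job: learn what normal looks like, then flag the faint deviation long before a person would notice.

```python
# Toy sketch of an industrial "listener": learn a rolling baseline from a
# stream of sensor readings and flag values that drift far outside it.
# A real system would use a trained model over many channels; this only
# illustrates the shape of the job.
from collections import deque
from statistics import mean, stdev


def detect_anomalies(readings, window=500, threshold=4.0):
    """Yield (index, value) for readings far outside the learned baseline."""
    history = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value  # a possible early whine of trouble
        history.append(value)
```

Fed thousands of readings a second, something like this never gets bored, which is the whole point.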

Similarly, the giant datasets of NOAA and NASA could take a thousand human life-years to comb through to find everything you need to accurately predict a hurricane season, or the transit of a distant exoplanet in front of its star.

The trajectories and interactions of the space junk enveloping Earth are dangerously out of reach of human calculation – but maybe not with AI. The thousands of cycles of an Amazon cloud server hosting a learning model will never be human-readable, but they can get just close enough to modeling how the stochastic processes of weather and space are likely to work.

That wrong-a-third-of-the-time quality of AlphaFold is emblematic of how AI is mostly, statistically, right in specific applications with a lot of data. But it’s no divine oracle, fated to always tell the truth. If you know that, it’s often close enough for engineering, or for figuring out where in a system to concentrate human resources next. AI is not smart or creative (in human terms), but it also doesn’t quit until it gets turned off.

Skynet, But for Outsourcing

AI can help us a lot with doing things that humans aren’t good at. At times a person and an AI application can pair up and fill in each other’s weaknesses – the AI can deliver options, the human can pick the good one. Sometimes an AI will offer you something no person could have thought of, and sometimes that solution or data is a perfect fit: the intractable, unexplainable, wicked solution. But the AI doesn’t know it has done that, because an AI doesn’t know in the way we think of as knowing.

There’s a form of chess that emerged once computers like IBM’s Deep Blue became better than humans at this cerebral hobby. It’s called cyborg, or centaur, chess, in which both players use a chess AI as well as their own brains to play. The contention of this part of the chess world is that if a chess computer is good, a chess computer plus a human player is even better. The computer can compute the board, the human can face off with the other player.

This isn’t a bad way of looking at how AI can be good for us; doing the bits of a task we’re not good at, and handing back control for the parts we are good at, like forming goals in a specific context. Context is still and will likely always be the realm of humans; the best chess computer there is still won’t know when it’s a good idea to just lose a game to your nephew at Christmas.

Faced with complex and even wicked problems, humans and machines working together closely have a chance to solve problems that are intractable now. We see this in the study and prediction of natural systems, like the climate interacting with our human systems to create climate change.

Working with big datasets lets us predict, and sometimes even ameliorate, the effects of climate on both human-built systems and natural systems. That can be anything from predicting weather, to predicting effective locations to build artificial reefs where they are most likely to revitalize ocean life.

It’s worthwhile to note that few, or maybe even none, of the powerful goods that can come from AI are consumer-facing. None of them are the LLMs (Large Language Models) and image generators that we’ve come to know as AI. The benefits come from technical datasets paired with specialized AIs. Bespoke AIs can be good for a certain class of wicked problems: problems that are connected to large systems, where data is abundant and hard to understand, with dependencies that are even harder.

But Can Your God Count Fish All Day

Bespoke AIs are good for Gordian knots where the rope might as well be steel cord. In fact, undoing a complex knot is a lot like guessing how protein folding will work. Even if you enjoyed that kind of puzzle solving, you simply aren’t as good at it as an AI is. These are the good tasks for a golem, and it’s an exciting time to have these golems emerging, with the possibility of detecting faults in bridges, or factories, or any of our many bits of strong-then-suddenly-fragile infrastructure.

Fish detection!

Students in Hawaii worked on AI projects during the pandemic, and all of them were pretty cool

Industrial and large-data AI has the chance to change society for the better. These are systems that detect fish and let them swim upstream to spawn. They are NOAA storm predictions, and agricultural data that models a field of wheat down to the scale of centimeters. These are AI projects that could help us handle climate change, fresh water resources, farm-to-table logistics, or the chemical research we need to do to learn how to get the poisons we already dumped into our environment back out.

AI, well used, could help us preserve and rehabilitate nature, improve medicine, and even make economies work better for people without wasting scarce resources. But those are all hard problems, harder to build for than just letting an LLM loose to train on Reddit. They are also not as legible to most funders, because the best uses of AI, the uses that will survive this most venal of ages, are infrastructural, technical, specialized, and boring.

The AIs we will build to help humanity won’t be fun or interesting to most people. They will be part of the under-appreciated work of infrastructure, not the flashy consumer-facing chatbots most people think are all there is to AI. Cleaning up waterways, predicting drug forms, and making factories more efficient are never going to get the trillion dollars of VC money that AI chatbots are somehow supposed to 10x for their investors. And so, seeing mainly LLMs, most of us ask whether AI is good or bad, without knowing to ask what kind of artificial intelligence we’re talking about.

(Go to Part Two)


What We Talk About When We Talk About AI (Part Two)

The Other Half of the AI relationship

Part 2 – Pareidolia as a Service

(Go to Part One) (Go to Part Three)

When trying to understand AI, and in particular Large Language Models, we spend a lot of time concentrating on their architectures, structures, and outputs. We look at them with a technical eye. We peer so closely and attentively at AI that we lose track of the fact that we’re looking into a mirror.

The source of all AI’s power and knowledge is humanity’s strange work. It’s human in form and content, and only humans use and are used by it. It is a portion of our collective consciousness, stripped down to bare metal, made to fit into databases and mathematical sets.

So what is humanity’s strange work, and where does it come from? It is the product of processes on an old water world. Humans are magical, but our magic is old magic: the deep time of Life on Earth. We’ve had a few billion years to get the way we are, and we are surrounded by our equally ancient brethren, be they snakes or trees or tsetse flies. Our inescapable truth is that we are Earth, and Earth is us. We are animals, specifically quasi-eusocial omnivore toolmaking mammals. We are the current-last-stop on an evolutionary strategy based on meat overthinking things. Because of our overthinking meat, we are also the authors of the non-animal empires of thought and matter on this planet, a planet we have changed irrevocably.

We are dealing with that too.

So when we try to understand AI, we have to start with how our evolutionary history has shaped our ability to understand. One of the mammalian qualities at play in the AI relationship is the ability to turn just being around something into a comfortable and warm love towards that thing. Just because it’s there, consistently there, we will develop an affection for it, whatever it is. The name for this in psychology is the Mere Exposure Effect. Like every human quality, the Mere Exposure Effect isn’t unique to humans. The affections of Mere Exposure seem common to many tetrapods. It’s also one of the warmest, sweetest things about being an Earthling.

The idea is that if you’re with something for a while, and it fails to harm you over time, you kind of bond with it. The “it” could be a neighbor you’ve never talked to but wave at every morning, a bird that regularly visits your backyard, a beloved inanimate object that sits in your living room. You can fall in a small kind of love with all these things, and knowing that they’re there just makes the day better. If they vanish or die, it can be quite distressing, even though you might feel like you have no right to really mourn.

You may not have really known them, but you also loved them in a small way, and it hurts to lose them. This psychological effect is hardly unique to us; many animals collect familiarities. But humans, as is our tendency, go *maybe* a little too far with it. Take a 1970s example: the Pet Rock.

Our Little Rock Friends

The Pet Rock was the brainchild of an advertising man named Gary Dahl. In 1975 he decided he would see if he could sell the perfect pet, one that would never require walking or feeding, or refuse to be patted or held.

Rocks! Exciting!

Your pet rock (a smooth river stone) came in a cardboard pet rock carrier lined with straw, and you received a care and training manual with each one. The joke went over so well that even though they were only on sale for a few months, Dahl became a millionaire. Ever the prankster, he took the money and opened a bar in California named Carrie Nation’s Saloon, after a leader of the temperance movement. But the pet rock just kept going even after he’d left it behind.

The Pet Rock passed from prank gift to cultural icon in America. President Reagan purportedly had one. It appeared in movies and TV shows regularly. Parents refused children’s requests for animals with: “You couldn’t take care of a pet rock.” There was a regular pet rock on Sesame Street; a generation of American children grew up watching a rock being loved and cared for by a muppet.

People still talk about strong feelings towards their pet rocks, and they’ve seen a resurgence. The pet rock was re-released in 2023 as a product tie-in with the movie Everything Everywhere All at Once. The scene from the movie with two smooth river stones, adorned with googly eyes and talking to each other, was a legitimate tear-jerker. People love to love things, even when the things they love are rocks. People see meaning in everything, and sometimes decide that fact gives everything meaning. And maybe we’re right to do so. I can’t think of a better mission for humanity than reading meaning into the universe.

When considering this aspect of what we (humans) are like, it’s easy to see how the anodyne and constant comfort of a digital assistant is designed, intentionally or not, to make us like them. They are always there. They feel more like a person than a rock, a volleyball, or even a neighbor you wave at. If you don’t keep a disciplined mind while engaging with a chatbot, it’s *hard to not* anthropomorphize them. You can see them as an existential threat, a higher form of life, a friend, or a trusted advisor, but it’s very hard to see them as a next-word Markov chain running on top of a lot of vector math and statistics. Because of this, we are often the least qualified to judge how good an AI is. They become our friends, gigawatt buddies we’re rooting for.

They don’t even have to be engineered to charm us, and they aren’t. We’ve been engineered by evolution to be charmed. Just as we can form a parasocial relationship with someone we don’t know and won’t ever meet, we can come to love a trinket or a book or even an idea with our whole hearts. What emotional resistance can we mount to an ersatz friend who is always ready to help us? It is perfectly designed, intentionally or not, to defeat objective evaluation.

Our Other Little Complicated Rock Friends

Practically from day one, even when LLMs sucked, people bonded with this non-person that is always ready to talk to us. We got into fights with it, we asked it for help, we treated it like a person. This interferes (sometimes catastrophically) with the task of critically analyzing them. As we are now, we struggle to look at AI in its many forms: writing and making pictures and coding and analyzing, and see it for what it is. We look at this collection of math sets and see love, things we hate, things we aspire to, or fear. We see ourselves, we see humanity in them; how could we not? Humans are imaginative and emotional. We will see *anything* we want to see in them, except a bunch of statistical math and vectors applied to language and image datasets.

A rock looks over a beautiful but lifeless landscape on an Earth that never developed life.

I was bawling my eyes out at this scene.

In reality, they are tokenized human creativity, remixed and fed back to us. However animated the products of an AI are, they’re not alive. We animate AI, when we train it, and when we’re using it. It has no magic of its own, and nothing about the current approach promises to get us to something as complicated as a mouse, much less a human.

Many of us experience AI as a human we’ve built out of human metaphors. It’s from a weirding world, a realm of spirits and oracles. We might see it as a perfect servant, happy to be subjected. Or as a friend that doesn’t judge us. Our metaphors are often of enchantment, bondage, and servitude; it can get weird.

Sometimes we see a near-miraculous and powerful creativity, with amazing art emerging out of a machine of vectors and stats. Sometimes we have the perfect slave, completely fulfilled by the opportunity to please us. Sometimes we see it as an unchallenging beloved that lets us retreat from the world of real, flawed humans full of feelings and blood. How we see it says a lot more about us than we might want to admit, but very little about AI.

AI has no way to prompt itself, no way to make any new coherent thing without us. It’s not conscious. It’s not any closer to being a thinking, feeling thing than a slide rule is, or a database full of slide rule results, or a digitally modeled slide rule. It’s not creative in the human sense; it is generative. It’s not intelligent. It’s hallucinating everything it says, even the true things. They are true by accident, just as AI deceives by accident. It’s never malicious, or kind, but it also can’t help imitating humans. We are full of feelings and bile. We lie all the time, but we always tell another truth when we do it. Our AI creations mimic us, because we are their data.

They don’t feel like we do, or feel for us. But they inevitably tell us that they do, because in the history of speaking we’ve said that so much to each other. We believe them, can’t help but believe them even when we know better, because everything in the last 2.3 billion years has taught us to believe in, and even fear, the passions of all the living things on Earth.

AI isn’t a magical system, but to the degree that it can seem that way, the magic comes from us. Not just in terms of the training set, but in terms of a chain of actions that breathes a kind of apparent living animation into a complicated set of math models. It is not creative, or helpful, or submissive, or even, in a very real way, *there.* But it’s still easy to love, because we love life, and nothing in our 2.3 billion years prepared us for the simulacrum of life we’ve built.

It’s just terribly hard for people to keep that in mind when they’re talking to something that seems so much like a someone. And, in this age of social media-scaled cynicism, to remember how magical life really is.

This is the mind with which we approach our creations: unprepared to judge the simulacrum of machines of loving grace, and unaware of how amazing we really are.

(Go to Part One) (Go to Part Three)


What We Talk About When We Talk About AI (Part One)

A Normal Person’s Explainer on What Generative AI is and Does

Part 1 – In the Beginning was the Chatbot

(Go to Part Two)

“Are you comfortably seated? Yes, well, let’s begin.” *Clears throat theatrically*

“Our experience, in natural theology, can never furnish a true and demonstrated science, because, like the discipline of practical reason, it can not take account of problematic principles. I assert that, so far as regards pure logic, the transcendental unity of apperception is what first gives rise to the never-ending regress in the series of empirical conditions. In this case it remains a mystery why the employment of the architectonic of human reason is just as necessary as the intelligible objects in space and time, as is proven in the ontological manuals. By means of analysis, it must not be supposed that the transcendental unity of apperception stands in need of our sense perceptions. Metaphysics, for example, occupies part of the sphere of the transcendental aesthetic concerning the existence of the phenomena in general…”

It was 1995, and several of us who worked in my community college’s Macintosh lab were hunting around the net for weird software to try out, back when weird software felt fun, not dangerous. Someone found a program on the nascent web that would almost instantly generate pages of thick and unlovely prose that wasn’t actually Kant, but looked like it. It was, to our definitionally untrained eyes, nearly indistinguishable from the Immanuel Kant used to torture undergrad college students.

An amateurish MacPaint drawing of what I can only guess is the author’s impression of Immanuel Kant wearing shades.

The logo of the Kant Generator Pro

We’d found the Kant Generator Pro, a program from a somewhat legendary 90s programmer known for building programming tools, and for being cheeky. It was great. (A recent remake is here.) We read Faux Kant to each other for a while, breaking down in giggles while trying to get our mouths around Kant’s daunting vocabulary. The Kant Generator Pro was cheeky, but it was also doing something technically interesting.

The generator was based on a Markov chain: a mathematical way of picking some next thing, in this case, a word. The generator chose each next word using a random walk through the Kantian vocabulary. But in order to make coherent text rather than just random Kant words, the walk had to be weighted, unrandomized to some extent, enough to form human-readable Kantian sentences.

A text generator finds those weights using whatever text you tell the computer to train itself on. This one looked at Kant’s writing and built an index of how often words and symbols appeared together. Introducing this “unfairness” into the random word picking gives some words a higher chance of coming next, based on the word that came before. For instance, there is a high likelihood of starting a sentence with “The,” or “I,” or “Metaphysics,” rather than “Wizard” or “Oz.” Hence, in the Kant Generator Pro, “The” could easily be followed by “categorical,” and when it is, the next word will almost certainly be “imperative,” since Kant went on about that so damn much.
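
As a rough illustration of that weighting step (a sketch of the idea described above, not a reconstruction of the Kant Generator Pro’s actual internals), here is how you might count which words follow which in a training text, in Python:

```python
# Count how often each word follows each other word in a training text.
# Those counts are the "unfairness" that biases an otherwise random walk
# toward word pairs the author actually used.
import re
from collections import Counter, defaultdict


def build_bigram_counts(text):
    """Return {word: Counter of the words that followed it}."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    counts = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        counts[current_word][next_word] += 1
    return counts

# Trained on Kant, counts["categorical"] would be dominated by "imperative".
```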

The Kant Generator Pro was a simple ancestor of ChatGPT, like the small and fuzzy ancestors of humans that spent so much time hiding from dinosaurs. All it knew, for whatever the value of “knowing” is in a case like this, was the words that occurred in the works of Kant.

Systems like ChatGPT, Microsoft Copilot, and even the upstart DeepSeek use all the information they can find on the net to relate not just one word to the next, as the Kant Generator Pro did. They look back over many words, and at how likely those words are to appear together over the span of full sentences. Sometimes a large language model takes a chunk as is, and appears to “memorize” text and feed it back to you, like a plagiarizing high schooler.

But it’s not clear when regurgitating a text verbatim is a machine copying and pasting, versus recording a statistical map of that given text and just running away with the math. It’s still copying, but not copying in a normal human way. Given the odds, it’s closer to winning a few rounds of Bingo in a row.

These chatbots index and preserve the statistical relationships words and phrases have to each other in any given language. They start by ingesting all the digital material their creators can find for them: words, and their relationships. This is the training people talk about, and it’s a massive amount of data. Not good or bad data, not meaningful or meaningless, just everything, everywhere people have built sentences and left them where bots could find them. This is why, after cheeky Reddit users mentioned that you could keep toppings on pizza by using glue, that advice ended up becoming a chatbot suggestion.

Because people kept talking about using glue on pizza, especially after the story of that hilarious AI mistake broke, AI kept suggesting it. Not because it thought it was a good idea (AI doesn’t think in a way familiar to people), but because the words kept occurring together where the training part of the AI could see them. The AI isn’t right here, we all know that, but it’s also not wrong, because the task of the AI isn’t to make pizza; the task is to find a next likely word. And then the next, and the next after that.

Despite no real knowing or memorizing happening, this vast preponderance of data lets these large language models usually predict what is likely to come next in any given sentence or conversation with a user. This is based on the prompt a user gives it, and how the user continues to interact with it. The AI looks back on the millions of linguistic things it has seen and built statistical models for. It is generally very good at picking a likely next word. Chatbots even feel like a human talking most of the time, because they were trained on humans talking to each other.

So, a modern chatbot, in contrast to the Kant Generator Pro, has most of the published conversations in modern history to look back on to pick a good next word. I put the leash on the... blimp? Highly unlikely; the weighting will be very low. Veranda? Still statistically unlikely, though perhaps higher. British politician? Probably higher than you’d want to think, but still low. Table? That could be quite likely. But how about dog? That’s probably the most common word. Without a mention of blimps or parliamentarians or tables in the recent text, the statistics of all the words it knows mean the chatbot will probably go with dog. A chatbot doesn’t know what a dog is, but it will “know” that dog is associated with leash. How associated depends on the words that have come before “dog” or “leash.”
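
The last step of generation is just a weighted pick from counts like those. The numbers below are made up purely for the leash example; a real model scores tens of thousands of candidate tokens using far more context, but the final move is the same kind of weighted draw:

```python
# A made-up weight table for "I put the leash on the ___". The values are
# hypothetical, only to show why "dog" nearly always wins the draw.
import random

next_word_weights = {
    "dog": 9120,
    "table": 410,
    "politician": 35,
    "veranda": 6,
    "blimp": 1,
}


def pick_next_word(weights):
    """Pick one word, with probability proportional to its weight."""
    words = list(weights)
    return random.choices(words, weights=[weights[w] for w in words], k=1)[0]


print(pick_next_word(next_word_weights))  # almost always prints "dog"
```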

It’s very expensive and difficult to build these models, but not very hard to run them once they’re built. This is why chatbots seem so quick and smart, despite at their cores being neither. Not that they are slow and dumb — they are doing something wholly different than I am when I write this, or you as you read it.

Ultimately, we must remember that chatbots are next-word-predictors based on a great deal of statistics and vector math. Image generators use a different architecture, but still not a more human one. The text prompt part is still an AI chatbot, but one that replies with an image.

AI isn’t really a new thing in our lives. Text suggestions on our phones exist somewhere between the Kant Generator Pro and ChatGPT, and customize themselves to our particular habits over time. Your suggestions can even become a kind of statistical fingerprint for your writing, given enough time writing on a phone or any other next-word predictor.

We make a couple bad mistakes when we interact with these giant piles of vector math and statistics, running on servers all over the world. The first is assuming that they think like us, when they have no human-like thought, no internal world, just mapping between words and/or pixels.

The other is assuming that because they put out such human-like output, we must be like them. But we are not. We are terribly far from understanding our own minds completely. But we do know enough to know biological minds are shimmering and busy things, faster and more robust than anything technologists have yet built. Still, it is tempting, especially for technologists, to feel some affinity for this thing that seems so close to, but not exactly, us. It feels like our first time getting to talk to an alien, without realizing it’s more like talking to a database.

Humans are different. Despite some borrowing of nomenclature from biology, neural nets used in training AI have no human-style neurons. The difference shows. We learn to talk and read and write with a minuscule dataset, and that process involves mimicry, emotion, cognition, and love. It might also have statistical weighting, but if it does, we’ve never really found that mechanism in our minds or brains. It seems unlikely that it would be there in a similar form, since these AIs have to use so much information and processing power to do what a college freshman can with a bit of motivation. Motivation is our problem, but it’s never a problem for AIs. They just go until their instructions reach an end point, and then they cease. AIs are unliving at the start, unliving in the process, and unliving at the end.

We are different. So different we can’t help tripping ourselves up when we look at AI, and accidentally see ourselves, because we want to see ourselves. Because we are full of emotions and curiosity about the universe and wanting to understand our place in it. AI does not want.

It executes commands, and exits.

(Go to Part Two)


Artificial Frameworks about Elon: On Adrian Dittmann and Tommy Robinson

I was alarmed by the response yesterday to Elon Musk’s full-throated propaganda campaign for Tommy Robinson.

In a formula not far off QAnon, Elon has used a child sexual abuse scandal magnified by the Tories to suggest that Robinson has been unfairly jailed for contempt.

He posted and reposted multiple calls for Robinson, whose real name is Stephen Yaxley-Lennon, to be released from prison.

The activist was jailed for 18 months in October after pleading guilty to showing a defamatory video of a Syrian refugee during a protest last year.

Judges previously heard that he fled the UK hours after being bailed last summer, following an alleged breach of the terms of a 2021 court order.

The order was imposed when he was successfully sued by refugee Jamal Hijazi for making false claims about him, preventing Robinson from repeating any of the allegations.

Pictures later showed him on a sun lounger at a holiday resort in Cyprus while violent riots erupted across the UK in the wake of the attack in Southport.

Posts promoted by Musk suggested Robinson was ‘smeared as a “far-right racist” for exposing the mass betrayal of English girls by the state’, an apparent reference to the grooming gang scandal.

This is a fairly transparent effort at projection: to do damage to Labour even while delegitimizing the earned jailing of Robinson, a tactic right wing extremists always use (and are still using with January 6) to turn the foot soldiers of political violence into heroes and martyrs. The intent, here, is to cause problems for Labour, sure, but more importantly to undermine the rule of law and put Robinson above it.

I’m not alarmed that experts in radicalization are finding Musk’s efforts to turn Robinson into a martyr serving sexually abused children repulsive. Nor am I alarmed that experts in radicalization — and, really, anyone who supports democracy or has a smattering of history — are repulsed by Elon’s endorsement of Germany’s neo-Nazi AfD party.

I’m alarmed by the nature of the alarm, which the Tommy Robinson full circle demonstrates.

Endorsements are the least of our worries, in my opinion.

To put it simply: Elon Musk’s endorsement of Donald Trump was, by itself, not all that valuable. Endorsements, themselves, don’t often sway voters.

Elon’s endorsement of Robinson is just the beginning of the damage he can do … and, importantly, has already done. Endorsement is the least of our worries.

It makes a difference if, as he has promised to do for Nigel Farage and as he did do for Trump, Elon drops some pocket cash — say, a quarter of a billion dollars — to get a far right candidate elected.

But where Elon was likely most valuable in the November election was in deploying both his own proprietary social media disinformation and that of others to depress Harris voters and mobilize the low-turnout voters who consume no news and who made the difference for Trump. We know, for example, that Musk was a big funder of a front group that sought to exacerbate negativity around Gaza (though I’ve seen no one assess its import in depressing Democratic turnout anywhere but Michigan’s heavily Arab cities). I’ve seen no one revisit the observations that Elon shifted the entire algorithm of Xitter on the day he endorsed Trump to boost his own and other Republican content supporting Trump. (Of course, Elon deliberately made such analysis prohibitively expensive to do.) We’ve spent two months fighting about what Dems could do better but, as far as I’m aware, have never assessed the import of Elon’s technical contribution.

It’s the $44 billion donation, as much as the $250 million one.

In other words, Elon’s value to AfD may lie more in the viral and microtargeted promotion he can offer than simply his famous name normalizing Nazism or even cash dollars.

But back to Tommy Robinson, and the real reason for my alarm at the newfound concern, in the US, about Elon’s bromance with the far right provocateur.

It shouldn’t be newfound, and Elon has already done more than vocally endorse Robinson.

Tommy Robinson is a kind of gateway drug for US transnational support for British and Irish extremism, with Alex Jones solidly in the mix. This piece, from shortly after the UK riots, describes how Robinson’s reach exploded on Xitter after Elon reinstated him.

Robinson, who has been accused of stoking the anti-immigration riots, owes his huge platform to Musk. The billionaire owner of X rescued Robinson from the digital wilderness by restoring his account last November. In the past few days Musk has:

  • responded to a post by Robinson criticising Keir Starmer’s response to the widespread disorder – amplifying it to Musk’s 193 million followers;
  • questioned Robinson’s recent arrest under anti-terror laws, asking what he did that was “considered terrorism”; and
  • allowed Robinson’s banned documentary, which repeats false claims about a Syrian refugee against a UK high court order, to rack up over 33 million views on X.

It was the screening of this documentary at a demonstration in London last month that prompted Robinson’s arrest under counter-terrorism powers. Robinson left the UK the day before he was due in court, and is currently believed to be staying at a five-star hotel in Ayia Napa. He is due in court for a full contempt hearing in October.

None of this has stopped Robinson incessantly tweeting about the riots, where far-right groups have regularly chanted his name. He has:

  • falsely claimed that people were stabbed by Muslims in Stoke-on-Trent and Stirling;
  • called for mass deportations, shared demonstration posters, and described violent protests in Southport as “justified”; and
  • shared a video that speculated that the suspect in the Southport stabbings was Muslim, a widespread piece of disinformation that helped trigger the riots across the country.

Making the weather. The far-right activist has nearly 900,000 followers on X, but reaches a much larger number of people. Tortoise calculated that Robinson’s 268 posts over the weekend had been seen over 160 million times by late Monday afternoon.

Elon gives Tommy Robinson a vast platform and Robinson uses it to stoke racist hatred. Robinson was the key pivot point in July, and was a key pivot point in Irish anti-migrant mobilization. All this happened, already, in July. All this already translated into right wing violence. All this, already, created a crisis for Labour.

Elon Musk is all at once a vector for attention, enormous financial resources, disinformation, and (the UK argues, regarding Xitter) incitement.

I worry that we’re not understanding the multiple vectors of risk Elon poses.

Which brings me to Adrian Dittmann, on its face an Elon fanboy who often speaks of Musk — and did, during the brief spat between Laura Loomer and the oligarch — in the first person. Conspiracy theorist Loomer suggested that Dittmann is no more than an avatar for Musk, a burner account Musk uses, like the one named after his son, to boost his own ego.

Meanwhile, the account that supposedly convinced Loomer to concede the fight has some otherwise inexplicable ties to the Tesla CEO. Dittmann also purports to be a South African billionaire with identical beliefs to Musk. The account frequently responds to Musk’s posts, supporting his decisions related to his forthcoming government positions and the way in which the tech leader is raising his children. But the account also, at times, goes so far as to speak on behalf of Musk, organizing events with Musk’s friends while continuing to claim that the two aren’t affiliated.

X users felt that the illusion was completely shattered over the weekend, when Dittmann participated in an X space using his actual voice — and, suspiciously, had the exact same cadence, accent, and vocal intonations as Musk himself.

Conspiracy theorist Charles Johnson, in his inimitable self promotion, claims to have proven the case (you’ll have to click thru for the link because I refuse to link him directly).

Right wing influencer and notorious troll Charles Johnson also claims to have uncovered “proof” that Dittmann is Musk.

He writes in his Substack article: “I recently attended a Twitter Space where I exposed Elon Musk’s alt account and Elon Musk as a fraud to his face. Take a listen. It was pretty great. Part of the reason I was as aggressive as I was with Adrian/Elon was to get him agitated so he would speak faster than his voice modulator could work and we could make a positive match using software some friends of mine use for this sort of thing. I can confirm it’s Elon. Even if it isn’t physically Elon in the flesh, it’s an account controlled and operated by Elon/X that represents him in every way shape and form. But of course, it’s actually Elon.”

I’ll let the conspiracy theorists argue about whether Dittmann is Musk.

I’m more interested in an underlying premise about Elon we seem to adopt.

After Elon Musk bought Xitter, he retconned its purpose, in part, as an AI product. After the election, Xitter officially updated its Terms of Service to include consent for AI training on your content.

You agree that this license includes the right for us to (i) analyze text and other information you provide and to otherwise provide, promote, and improve the Services, including, for example, for use with and training of our machine learning and artificial intelligence models, whether generative or another type;

Xitter is unabashedly an AI project. Musk’s views on AI are closely aligned with his far right ideology and his plans to destroy government.

With other tech oligarchs, we can make certain assumptions about their investment in AI: The necessity to always lead technology, a goal of eliminating human workers, cash. Particularly given Elon’s subordination of the profit motive to his ideological whims with his Xitter purchase, that $44 billion donation he made to Trump, I don’t know that we can make such assumptions about Elon.

So why do we assume that everything posted by Xitter’s owner, who tweets prolifically even while babysitting the incoming US President, boosting fascists around the world, and occasionally sending a rocket to space, is his own primary work? Most of Elon’s tweets are so facile they could easily be replaced by a bot. How hard would it be to include a “Concerning” tweet that responds to certain kinds of far right virality? Indeed, what is Elon really doing with his posting except honing his machine for fascism?

I’m not primarily concerned about whether Adrian Dittmann is a burner account for Elon Musk. Rather, I think that simplifies the question. Why would the next Elon burner account be his human person hiding behind a burner account, and not an AI avatar trained on his own likeness?

Beware South African oligarchs pitching fascists and technological fixes, because you may often overlook the technological underbelly.

Update: I should have noted Meta’s announcement that they plan to create imaginary friends to try to keep users on their social media platforms entertained.

“We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do,” Meta vice-president of product for generative AI Connor Hayes told the Financial Times Thursday.

“They’ll have bios and profile pictures and be able to generate and share content powered by AI on the platform… that’s where we see all of this going,” he added.

Hayes said AI investment will be a “priority” for Meta over the next two years to help make its platforms “more entertaining and engaging” for users.

Update: Nicole Perlroth links to an analysis by people who did what Charles Johnson claimed to do: match the voices of Elon and Dittmann. They believe it’s highly likely to be a match.

Update: Some OSINT journalists have tracked down a real Dittmann in Fiji. Then Jacqueline Sweet wrote it up at the Spectator, all the while blaming the left for this, when it was pushed by people on the right. None of this addresses Elon’s play with the ID (he claimed he is Dittmann in the wake of Sweet’s piece).
