The Ecology of Brainworms and Cartesian Golems of the Late Anthropocene

On November 2nd, 1988, a piece of code was run on a computer in the network of MIT. This code is extremely simple by modern standards. Its only goal was to spread itself to other computers. It would look for other computers that it could connect to on the network, then it would first try to exploit a flaw in a common networking program. If that didn’t work, it would attempt to log in using common usernames and passwords. If it was successful in gaining access, it would copy itself to the new computer and start again.

Within a few hours, it had infected 6,000 machines, at the time about 1/10th of the Internet.
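To get a feel for why a few hours was all it took, here's a toy simulation of that spread-by-probing strategy. Everything in it is an invented assumption (the network size, probe rate, and success probability are made up, not measurements from 1988); the point is only that each newly compromised host becomes a new launching pad, so the infection compounds exponentially.

```python
import random

# Toy simulation of worm-style spread: each infected host probes a few random
# peers per round and compromises each with some probability. All parameters
# are illustrative assumptions, not historical data.
N_HOSTS = 60_000        # roughly the size of the 1988 Internet
PROBES_PER_ROUND = 10   # how many peers each infected host tries per round
P_SUCCESS = 0.3         # chance that a probe compromises its target

random.seed(1988)
infected = {0}          # patient zero
rounds = 0
while len(infected) < 6_000:
    newly = set()
    for _ in infected:  # every infected host probes in parallel each round
        for _ in range(PROBES_PER_ROUND):
            target = random.randrange(N_HOSTS)
            if target not in infected and random.random() < P_SUCCESS:
                newly.add(target)
    infected |= newly
    rounds += 1
print(f"{len(infected)} hosts infected after {rounds} rounds")
```

With these (invented) numbers, the infected population roughly quadruples each round, crossing the 6,000-host mark in a handful of iterations.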

This piece of code, called the Morris Worm after its creator Robert Tappan Morris, started a new age of humans breaking machines. For his efforts, Morris became the first person charged with a felony under the Computer Fraud and Abuse Act, the primary anti-hacking statute in the US. The worm launched into public consciousness the idea of self-replicating computer viruses that can spread among machines.

The terminology of self-replicating malicious code is interestingly parasitic. We talk of “infections” by “viruses” and of “worms” spreading between “hosts”. (Though to be fair, “host” is a common term in technology for a computer running code or processing requests on behalf of someone else. That it also means “the carrier of a disease organism” is merely convergent linguistic evolution.) We created computers and for 40 years or so we’ve been intentionally sickening them and watching as viruses ravage the population.

Morris’s motivations for creating the original worm seem to have been fairly benign. The damage caused by his worm was a side effect of it being more effective at replication than expected, resulting in computers being infected multiple times, slowing them down to unusability. In the modern era, worms have been used for espionage (2010’s Stuxnet) and economic sabotage (2017’s NotPetya attacks). Morris, however, seems to have been motivated only by a desire to see if it was possible.

This Gnostic urge to know if something is possible is the primary motivation for most of the novel developments in the hacking world. Most hackers become hackers not for money or prestige (as a hobby it’s often pretty short on both), but because we created these golems and it’s interesting to see how our toys break.

And in order to break computers, you need to understand how they truly work. Morris’s worm worked because he knew that there was a flaw in the coding of a network utility. He knew that networks of the time often trusted connections from other networks without further authentication. He knew that people used simple, easily-guessed passwords. He understood the truth of the system and so was able to break it.

Hackers win when they understand the truth of the system better than its creators.

In November of 2022, OpenAI released the first public version of ChatGPT, its chatbot powered by a Large Language Model (LLM). While it was the first productized implementation of a new kind of AI system, it leveraged an architecture known as the “transformer” that had been described in 2017 by scientists working at Google. These systems are trained on massive amounts of data in order to build a model of probabilities. Given a particular input, this probability model tells it which “tokens” are the likeliest to extend the input. It then probabilistically chooses from those tokens in a weighted manner.
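To make that sampling step concrete, here's a minimal sketch in Python. It is an illustration under assumptions, not any real model's API: the five-token vocabulary, the scores, and the `sample_next_token` helper are all made up. But the softmax-then-weighted-draw at its core is the same operation a real LLM performs over a vocabulary of tens of thousands of tokens.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Turn a model's raw per-token scores into probabilities and draw one token."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()                         # shift for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()  # softmax: scores -> probabilities
    return rng.choice(len(probs), p=probs)         # weighted random choice

# Hypothetical five-token vocabulary and scores for the prompt "The cat sat on the".
vocab = ["mat", "roof", "fence", "moon", "carburetor"]
logits = [3.2, 1.1, 0.8, -1.0, -4.0]
print(vocab[sample_next_token(logits)])  # usually "mat", occasionally a surprise
```

Note that nothing in this loop checks whether the chosen token is true, only whether it is probable given what came before; that distinction is the hinge of everything that follows.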

That it actually works to create sensible text, images, and other media is impressive. These models have now been expanded to ingest and create documents, hold voice conversations with users, and even take actions autonomously based on user inputs.

They’re also making us lose our minds.

The root of this phenomenon is that these models are designed first and foremost to create outputs that are sensible. They are meant to extend the form and syntax of an input to create a plausible output. They do not, fundamentally, have any measure of truth or correctness. Users who forget this, especially when talking interactively with chatbots backed by these models, often find themselves tricked into believing that they are talking to another mind. We have spent all of human evolution solving the problem of other minds based purely on analogy to ourselves. When that analogy breaks down, our psyches seem to break down with it.1

There are several signs and symptoms of this self-induced cognitive affliction. First, it’s destroying our ability to ground our own writing and expression in reality. In 2023, in a case called Mata v. Avianca, an American lawyer used an AI tool to generate a legal brief. The AI generated the citations on which it based the structure of a legal argument, and several of them were entirely fake: they had the correct form of a cited case, but referred to no actual precedent or legal event. When confronted, the lawyer’s response was to ask the AI if the cases were real. The AI assured the lawyer that they were.

The lawyer was eventually sanctioned and the case thrown out.

The rapid generation of plausible-seeming text has also enabled a flood of meaningless, low-quality content to surge into public life. Anyone with access to the Internet over the past few years is familiar with the tidal wave of AI “slop”: aesthetically-tuned objects masquerading as art. Think of the Studio Ghibli renderings of selfies, or the incoherent but immediately identifiable viral Facebook depictions of children in distress, wounded war veterans on forgotten birthdays, and (for no reason that anyone can accurately account for) statues of horses made of bread rolls.

It’s not just social media that has been inundated with this muck. The rise of LLMs has resulted in a surge of fake papers being submitted to science journals. It has flooded the submission queues of art and literary journals. It has destroyed the signal-to-noise ratio of any avenue of participation in the culture that is open to the public.

For all of modern history, these systems have relied on an unspoken assumption: generating the artifacts one wants to submit requires labor. Even if that labor is, to paraphrase Truman Capote, not writing but just typing. When that labor is gone, and perfectly formatted, passingly plausible content can be generated in the time it takes to step away from the computer to take a piss, the assumption breaks down and any good work in the submission queue drowns in a flood of mere verbiage.

This isn’t an abstract concern for the average person, either. Wikipedia, one of the intellectual wonders of the modern world, is having to mount a concerted effort by its already over-burdened volunteer corps to fight off waves of low-quality, inaccurate, AI-generated edits. A load-bearing pillar of our modern cognitive edifice is at risk of collapse if it doesn’t constantly fend off this tide of bullshit.

More than our labor and our shared culture, AI systems are striking at our own identities and reputations. AI models are being used to create convincing fake images and videos of us. Sora 2, the latest video model from OpenAI, has already been used to create videos that appear to show security camera footage of shoplifting and other crimes. In early 2024, a wave of sexualized deepfakes of Taylor Swift swept across social media. And while the corporate creators of closed-source, proprietary models have taken steps to try to prevent these kinds of abuses, open-source models with no such restrictions are broadly available online.

The proliferation of these free models has led to a surge in deepfake scams, with celebrities appearing to shill new cryptocurrencies or “investment” opportunities. Children and teenagers are being subjected to deepfake pornography of themselves for harassment or extortion. Scammers are using audio models trained on mere minutes of publicly available recordings to create realistic clones of a person’s voice, tricking relatives and loved ones into sending money for fake bail.

And finally, AI is rotting out our very connection to a shared reality. Several high-profile news stories have reported the advent of a new kind of AI-assisted psychosis. Stories abound of people falling into long, protracted conversations with these models, resulting in something that looks like a particularly insular form of paranoid schizophrenia. In one case, Geoff Lewis, an investor in OpenAI, posted a rambling video apparently in the throes of a paranoid break. In it he claims that ChatGPT helped him identify that he was the primary target of a shadowy “non-governmental system”. The video was full of the kind of word salad that we expect to see from sufferers of gang-stalking delusions: clearly meaningful to the afflicted victim, but impenetrable babble to everyone else.

In a sense, Lewis was a trailblazer. Not in his investing, but in his disintegration. After him came a surge of stories of people torching their lives and relationships over protracted discussions with LLM chatbots. Some have even died by suicide after a chatbot appeared to validate their suicidal ideations.

Whatever virtual prion disease we’re getting from consuming our culture’s words slurried up and returned to us, it seems that it is, at least in rare cases, debilitating or fatal.

Imaginary parasites might be the defining delusion of the 21st century. Whether it’s the fictional Morgellons disease that has been a stock-in-trade hallucination among the yeti-and-UFO set for years, the use of horse dewormer to try and treat COVID, or the literal worm that apparently ate part of RFK Jr.’s brain and has since left only a shadow on an MRI, we are haunted by the negative space where we seem to want parasites to be. We’ve even started talking about techno-socially induced delusions as “brain worms”.

Even before AI, social media warped the information we saw and spread lies and misrepresentations like wildfire. In a sense, AI is just a virulent new strain of the kinds of psychic parasites we’ve been increasingly suffering from over the past few decades.

We appear to have come full circle over the course of a few decades, from inventing worms for our computers to contracting computer-created worms ourselves. I think the trend begs us to try to understand where this is going. All future predictions are inherently foolish, but some of them are right. Here’s my foolish assessment of where this trend is taking us.

I believe that we, as a society, are building our own version of René Descartes’s deus deceptor. Through the combination of pervasive social media, AI-driven synthetic stimuli, and the other terrors of late Capitalism, we’re building a system that makes the acquisition of external truth nearly impossible. Descartes posited the extreme version of such a situation as a thought experiment in Meditations on First Philosophy. His “evil demon” is a deceptive god that can perfectly trick our senses, locking us into a solipsistic nightmare from which the only escape is to start from first principles, rooted in his famous cogito: “I think, therefore I am”. A ground truth that, if there is a thing doing this thinking that I am experiencing, then that thing must exist and I am it.2

It’s strange to say, but this might be the optimistic path. At least this model provides a way out for each of us individually. Namely, take refuge in your own mind. Log off. Touch grass. If the deus deceptor is really just in the computer, then get off the computer.

There’s a darker model for where this might be taking us, though. And I’m increasingly worried that it’s the more realistic projection for our future. In his 2006 novel The Three-Body Problem, author Cixin Liu posited an alien species on a four-hundred-year-long trek to invade Earth. In order to keep their technological advantage over us, they deploy a hyper-dimensional particle called a sophon that interferes with human experiments and physical observations. It is capable of tampering with any sensor, camera, or piece of scientific equipment anywhere in the world. It essentially blinds us to the world around us, paralyzing us from learning. As Liu describes the mechanism:

Everyone knows that high-energy particles can expose film. This is one of the ways that primitive accelerators on Earth once showed individual particles. When a sophon passes through the film at high energy, it leaves behind a tiny exposed spot. If a sophon passes back and forth through the film many times, it can connect the dots to form letters or numbers or even pictures, like embroidery. The process is very fast, and far quicker than the speed at which humans expose film when taking a picture. Also, the human retina is similar to the Trisolaran one. Thus, a high-energy sophon can also use the same technique to show letters, numbers, or images on their retina.

It is possible that, with LLMs, we have created our own sophon, targeted at ourselves. LLMs are injecting fake citations into legal and scientific papers. They are creating believable images and videos of events that never occurred. They are causing a flood of low-quality slop to deluge any organ of art, science, or culture that is open to submissions, cratering a signal-to-noise ratio which, in many cases, was already perilously bad.

Combine this with the pre-existing replication crisis in science, the existing systemic injustices of our legal system materially disenfranchising minorities and the poor, and the perilous economics that artists already face, and LLMs seem tailor-made to destroy all three. If LLMs continue to be cheap, plausible, and wrong (to echo H.L. Mencken), then all areas of human intellectual endeavor will stagnate. If any enemy wanted to paralyze human society, they couldn’t have invented a better weapon.

And we invented the weapon ourselves. We created our golems. For years, we understood them and were able to hack them. Now we’ve created golems of such incredible, stochastic complexity that they’ve begun to hack us back.

I said earlier that hackers win when they understand the truth of the system better than the system’s creators. Our systems begin to hack us when they can make it appear that they know how we want the world to be. Whether their understanding is real or a figment of their matrix of floating-point values, we won’t ever know, and it is entirely beside the point.

In LLMs we have created our own perfect adversary. Vacuous, believable, flattering, and bearing plausible opinions on every topic. It has wormed its way into our brains and our culture and it’s hard to see how we might be able to ever extract it again.

There are a few things that I think might work. Ground ourselves in reality. Get outside more. Meet people in person more. Trust our memories and our own writing and art. Embrace the experiences and relationships not mediated by anything but our own minds and senses. This seems facile, but in the same way that Hume sidelined Descartes for several hundred years, the empiricism of our senses and our real experiences are the only things that can break the hold our golems have over us.

We have created Cartesian monsters. The only way to slay them is to actually experience the world. Our senses, our memories, and our relationships with one another are the only real weapons we have left.


  1. It’s not by accident that when Alan Turing proposed the test that now bears his name, he called it “The Imitation Game”. He was also adamant that, despite how it’s popularly used today, it wasn’t a test of whether a computer was “thinking”. In fact, in the same paper, he dismissed the question of whether machines can think as “too meaningless to deserve discussion”. ↩︎

  2. I’ll leave the fact that “simulation theory” is popular among the same VC class that has given us both social media and AI chatbots as an interesting but unexplored aside. ↩︎