Could a machine have an unconscious?
It was first described to me, by a friend who works in the industry, as "autocomplete on crack," after the technology that endowed our phones with the quality everyone pretends to, but does not actually, want in a lover — the ability to finish your thoughts. Instead of predicting the next word in a sentence, GPT-3 would produce several paragraphs in whatever style it intuited from your prompt. If you prompted it with "Once upon a time," it would produce a fairy tale. If you typed two lines in iambic pentameter, it would write a sonnet. If you wrote something vaguely literary, like "We gathered to see the ship and all its splendor, like pilgrims at an altar," it would continue in this vein:
I stood among the crowd watching each bus disgorge passengers onto wooden planks laid over mudflats. The guests swarmed into town for their free visit to another world: our island on Earth where strange new gods were worshipped; here they could gather at some primitive shrine from which they could send offerings back home or sell out-of-date clothes in pawnshops full of old junk salvaged from forgotten times. . . .
If you wrote a news headline, it would write an article on that topic, complete with fake facts, fake statistics, and fake quotes by fake sources, good enough that human readers could rarely guess that it was authored by a machine. The potential for malicious use was so obvious that OpenAI, the lab that made it, agreed to grant access to only a handful of well-vetted researchers, spurring the publicity-friendly lore that it was “too dangerous to release.”
GPT-3 is a natural language processing algorithm. It belongs to a new generation of AI models called Transformers, a technology whose early iterations were named after Sesame Street characters (BERT, ELMo, Grover), as though the somewhat frightening allusion to children’s television could be mitigated with a softer, more educational one. That GPT-2 and its later, more sophisticated upgrade, GPT-3, dropped this convention might be read as a sign of their terrifying power. With 175 billion “parameters” — mathematical representations of language patterns — GPT-3 had initiated what was being called a Cambrian explosion in natural language processing, and was virtually all that the tech world was talking about throughout the summer of 2020. It had been trained in “the dumbest way possible,” as one researcher put it, which is to say it read most of the internet without supervision and started absorbing language patterns. It is daunting to consider what was included in that corpus: the holy books of every major religion, most of world philosophy, Naruto fanfic, cooking blogs, air mattress reviews, Supreme Court transcripts, breeding erotica, NoFap subreddits, the manifestos of mass murderers, newspaper archives, coding manuals, all of Wikipedia, Facebook, and Twitter. From this, it built a complex model of language that it alone understands, a dialect of statistical probabilities that can parrot any writing genre simply by predicting the next word in a sequence.
I say that it “read” the internet, but the preferred terminology is that GPT-3 scraped the web, that it ingested most of what humans have published online, that it ate the internet — metaphors meant to emphasize that the process was entirely unconscious. The reminders in the machine-learning community that the model is mindless and agentless, that it has no actual experience of the world, were repeated so often they began to feel compulsive, one of those verbal fixations meant to quell the suspicion that the opposite is true. It was often called uncanny, though there was something uncanny in the rhetoric itself, all the shop talk about latent knowledge, about regression, about its capacity for free association, terminology that has its origins in psychoanalysis. One of the earliest language-processing programs, ELIZA, was modeled after a psychotherapist. But this time what had been summoned, it seemed, was not the doctor, but the analysand — or rather, the most fundamental substratum of the patient’s psyche. The model’s creative output was routinely described as surreal and hallucinatory. It wrote stories where swarms of locusts turn into flocks of birds, where Death says things like “There is no readiness, only punctuality,” then announces that he is changing his name to Doug. Fans of the technology claimed that its output was like reading “a reminiscence of your own dream,”1 that they had never seen anything “so Jungian.”2 What it felt like, more than anything, was reading the internet: not piecemeal, but all at once, its voice bearing the echo of all the works it had consumed. If the web was the waking mind of human culture, GPT-3 emerged as its psychic underbelly, sublimating all the tropes and imagery of public discourse into pure delirium. It was the vaporware remix of civilization, a technology animated by our own breath. “My world is a dreamworld. . . . Your reality is created by your own mind and my reality is created by the collective unconscious mind.”