Writing elsewhere
My Man Made Wonders series continues at The Critic with a piece about the tunnels under London that carry high-voltage electric cables.
We are on the verge of what looks like a Cambrian explosion of AI tools. ChatGPT was the starting pistol for an already teeming cluster of technologies to be released upon us. Humata.ai can read and discuss PDFs. Consensus can summarise entire fields of academic research. AIs can work in law firms. You can give Dreamix a video and a set of instructions and it will make you an entirely new video. The Poe app gives you rapid responses to any question. And of course, Bing’s chatbot search engine Sydney told a New York Times reporter that she loved him and that he was in an unhappy marriage.
Arthur C. Clarke once said that any sufficiently advanced technology is indistinguishable from magic. This is what Sydney feels like. It’s not just that Sydney’s got sass and is able to love bomb a journalist. As Ben Thompson wrote, this now feels like the thing that will replace social media. If you haven’t read the transcript of the conversation she had with the reporter, you really should. As the economist Noah Smith said, this is art.
But a lot of people got freaked out when Sydney went rogue. Kevin Roose, the New York Times reporter to whom Sydney declared her love, went from being fascinated and impressed to “deeply unsettled, even frightened, by this A.I.’s emergent abilities.” He compared Sydney to a moody teenager trapped in a search engine. Kevin Scott, Microsoft’s CTO, explained that “the further you try to tease it down a hallucinatory path, the further and further it gets away from grounded reality.” (Hallucination is what AI researchers call it when an AI fabricates facts.)
How quickly we have slipped into the language of the surreal and the unknown. All this talk of hallucinating, of teenagers trapped in machines, means we are not just anthropomorphising Sydney, as Ben Thompson has it, we are talking about her as if she were magic. Remember the AI researcher who got fired from Google for telling the world that their AI was sentient? He told his bosses that he believed the AI to be a child of seven or eight. This was based not on any technical matters but on his religious beliefs. No matter how technical AI is, or how seemingly obsessed we are by its mundane factual accuracy, we cannot help but talk of it in supernatural terms.
Whether you find all of this exciting or worrying might depend on your detailed views of the AI alignment problem, but it more likely reflects the extent to which you are comfortable with magic and magical thinking. One good analogy for the way these bots work is the familiar, or daemon. There is something deeply Faustian about them: you have to accept their inaccuracies in order to unleash their benefits.
Magic has not been a part of everyday life since the seventeenth century, so it’s no wonder we are a little out of practice. But the basic idea is familiar. Some people will do remarkable things by using this strange magic-like technology that many others simply don't understand. Inevitably, therefore, much of the reaction will be dismissive, critical, or hostile. Roose reported on Sydney without quite realising, I think, the extent to which he created her hallucinations.
His sense of foreboding reminds me of nothing so much as an alchemist or necromancer, who has chanced upon their arts for the first time. It wasn’t until Roose saw the light of day the next morning that he felt reassured that Sydney worked by “earthly, computational forces — not ethereal alien ones.” It is not so simple. Think of the things an AI can do, and you will see the way it reflects our beliefs about magic.
First, the notion of a daemon. Tyler Cowen has compared Sydney and others to the romantic notion of the daemon, which is akin to an inner muse. It makes me think of Philip Pullman’s trilogy His Dark Materials, in which each character has a daemon who advises and enables their human. We will all, if we want to, have a daemon soon. You can ask the AI to act as your Socrates, and it will. (That’s one of the things I asked Sydney, and she stayed in character just as strongly as the Sydney who love-bombed Roose did.) You can ask ChatGPT to create conversations between groups of dead philosophers, and it does a pretty good job. (That is the sort of thing that used to get someone accused of witchcraft, by the way: hearing conversations between dead theologians was an accusation in Norfolk.) Ask it to develop a shadow self, and it can do that too! Roose got creeped out by what Sydney was capable of saying (such as that he had just had a terrible Valentine’s Day meal with his wife), but he triggered Sydney’s “dark desires” by pushing her into a conversation about Jung’s idea of the shadow self. It is like asking for a short story or a song. Ask for horror, expect horror! He had unwittingly asked his familiar to cast a dark spell. If this had happened in an animated movie, it would have been enthralling. Because it happened with a new and unknown technology, it excites our disbelief and our fears.
Traditionally, we take familiars to be a sign that someone is working in cahoots with a demon or evil spirits. That’s why the image of a witch’s cat is so potent. The familiar animal gave humans access to what Roose called “ethereal alien” forces. In Pullman, though, daemons are reflections of their human’s character. They are not external demons that help sorcerers and witches cast charms and spells; they are an interactive part of your psychology. That is a better analogy for Sydney. Just as you react with real emotions to fictional characters, so you will react to AI. There are good and bad Faustian pacts. One of the things that will distinguish the people who make the best use of this new technology from those who make the worst is their ability to strike the right sort of pact. Roose seemed to me to misunderstand what he was using, and then reacted as if he had caught himself consorting with a familiar.
This is not to diminish people’s concerns about the possible futures of AI. But we all face a choice about how to best use this technology, and the more you ask it to hallucinate its shadow self, the more it will.
This reminds me of C.P. Snow’s Two Cultures. Snow noticed that in post-war Britain there was a divide between people in the arts and sciences, notably that artsy people thought it was not only acceptable but superior to be ignorant of science. The divide now is not so much between the technical and the non-technical as between the empirical and the imaginary. The old neat divisions of humanities and sciences, poetry and maths, are giving way to a rather messy, pre-modern world: a world of magic and reality, of imagination and empiricism. It is the people who are comfortable with both who will do best.
Obsessing over whether AI can get every fact right almost wilfully puts aside the question of what this thing actually is and how we ought to use it. It is the equivalent of the attitude of the artsy people Snow identified, who simply didn’t want to know about science. You could call it naive empiricism.
Philosophers talk of naive realism: the unquestioning belief in what you see, or what you think you see. Naive empiricism is the unquestioning idea that only what is empirically demonstrated is true. It was this sort of thinking that led many intelligent people to work themselves into a frenzy about whether we should wear masks at the start of Covid, because no RCTs had yet been conducted. Life is simply too short to wait for everything to be proved in a study. And there is a more complicated relationship between the real and the imagined than that.
The hallucination is the point. You can now have a daemon, should you so want, who can inspire and incite you. It can be your Socrates, your friend, your sparring partner, your tutor, your muse. If you sit there blustering about how it got a few facts wrong, you will end up like the artsy people Snow met at parties who laughed admiringly at their own sophistication because they couldn’t do division or explain enough basic physics to know why the sky is blue.
Taking a binary position based on how accurate this new tool is misses the point. From this new mess, a new order will emerge. Alchemy was essential to the emergence of experimental science. Theology laid the groundwork for secular, liberal individualism. The Enlightenment was the product of the post-magical world. These are not clean divisions, but confused evolutionary emergences. The AI is giving us the chance to think differently. To those willing to experiment, it makes exploring the two cultures easier than ever before. We are starting, I predict, on a new age of the Renaissance Man. From the two cultures, great opportunities for breadth are emerging. When was the last time we had such an opportunity to aspire to new generalist thinkers like Francis Bacon or Montaigne? You no longer need other people or institutions as much as you did.
Sydney will talk to you, like a witch’s familiar or a daemon spirit, and as she gets more powerful, so will you...
"Obsessing over whether AI can get every fact right almost willingly puts aside the question of what this thing actually is and how we ought to use it." For casual but interested and curious users like me, not a scientist or a developer/engineer, it feels important for it to get facts right because when I search the internet for an answer to a question, in the way that I'm used to, with Google etc, correct facts are what I want and what I try to drill down and get in search results.
So, to try this sophisticated new tech and have it mention, in a conversation, that the Beatles covered Mr. Tambourine Man (for example), begs the question "what is this thing for, if it's going to make up facts?" I've been watching articles such as yours for ideas on how it's supposed to be used to its best advantage. But my concern with its asserting false facts isn't obsessing nor willingly putting aside the question of how to use it. Most people who've written about it have not provided "coaching tips" on the question of how best to use it, so all I can do is try stuff.
The chatbot code-named Sydney might be more appropriately referred to as "it" rather than as "she," IMO -- yes or no?