Why can't AI write a great novel?
And does that mean that God exists?
Although AI can pass advanced maths exams, act as a doctor (with a decent bedside manner), and be put to work in law firms, it cannot yet write a decent novel or poem. Everything I have seen has been dross or highly derivative. It can be entertaining. But the idea that it will get to the level of Emily Dickinson or Charles Dickens seems implausible.
Why?
When I spoke to A.N. Wilson recently, he quoted Goethe, who said the problem with Newton's colour theory was that you could understand it even if you were colour blind. Goethe believed that this purely rational, Enlightenment approach was misguided. Colour was something that existed as we experienced it.
In many ways, current AI models are Newtonian. They can rationalise the world but they cannot apprehend it. They have no sense of the world, literally: they cannot smell, see, hear, touch, or taste it.
Great literature is experiential. Even the most cognitive poets, like George Herbert, rely on being able to make you feel what it is like to experience something. Reading Browning, another highly intellectual poet, we are constantly enacting sensations in response to his words.
And for many great authors, from Chaucer to Dickens, this is quite obvious. How can we read the great nineteenth century novels without feeling our skin crawl, laughing in happy recognition, tsk-ing, gasping, and exhaling as we see what they see, sense what they sense?
From “The tendre croppes, and the yonge sonne” of Chaucer to the “long fields of barley and of rye” in Tennyson, from the Beowulf dragon “That warden of gold/ o’er the ground went seeking” to the opening of The Eve of St Agnes with “the hare limped trembling through the frozen grass”, we rely on knowing these things about the world to make sense of the lines. We must have some sense of crops and gold and frozen grass.
Writers, in my experience, are not very good at talking about how they create their work. They often rely on semi-mystical ideas, or at the least, talk about the unconscious. But that’s probably the truth of the matter. They don’t know exactly why they draw on the panoply of memories that they do as they create their characters and their rhymes.
Our sense of wonder and imagination relies on the author transmitting their experience to us through language. This wonderment is also missing from AI. As Hollis Robbins (@Anecdotal) said in a recent AI symposium co-ordinated by Joel J Miller here on Substack,
Does AI marvel and change its output because it just read or saw something marvelous? Perhaps someday it will, but how do you teach it to marvel when it has already absorbed so much and hasn’t yet marveled?
The fact that AI relies entirely on this process of responding to other works of art, and not also to the world, may be part of the problem, too. Just as an LLM is an associative, probabilistic thinker, so is a writer, but with the huge difference that the writer knows what it means to walk through a field or smell the morning air. Literature is a copy of the world; AI-written literature has to be derived second-hand from what it has read, and so it becomes a copy of a copy. That is why the “creative writing” AI produces often reads like a young person’s dreary imitation of a poem, rather than a poem itself.
This leaves me with a final question. Shouldn’t AI be able to overcome these problems? After all, it has as much intellectual “wiring” as a human and can do a lot of what a human can. Art is surely not merely this wiring plus the physicality of being in the world. AI ought to be more creative than it is.
I wonder what this should make us think about the material view of the world. I am a materialist. I believe everything can be (or will eventually be) explicable in physical terms. But I am not so confident that when we one day make a robot who can experience nature, we will get another Mahler.
Some mystery remains at the heart of the question of why AI is making so little progress artistically.
Does that mean we should increase, however slightly, the odds that God is real? Is one of the things that AI is going to teach that there really are parts of human life that go beyond materialism?
AI seems to have no sense of good and evil, one of the great topics of art; no sense, that is, of the belief in good and evil. It takes a flat, pragmatic approach to such questions. Maybe that is because AI companies have tamed their models. But I suspect that until AI can believe things, instinctively, things felt in the blood and felt along the heart, it will lack the capacity for great art.
Whether materialism is true or not, being able to talk mystically, mythically, about ourselves, our cultures, about good and evil, about our unconscious minds, is essential to the way art is made. Feeling some gap between the conscious and the unconscious is a critical intellectual faculty, and it is one that AI struggles to experience.