I wrote about this: "The problem with slop isn’t the slop. It isn’t even the fact that AI was used. After all, tools don’t commit crimes; people do.
The problem with slop (especially in writing) is that the writer doesn’t care enough about the reader to make the reader’s life easier.
That’s the whole job.
You see, writing is an act of respect. You sweat the small stuff so your reader doesn’t drown in it. You spend the hours and the blood and the rewrites and the self-loathing and the tears so your reader can glide—effortlessly—over a surface that took you months to sand smooth. A good sentence is a sheet of ice slowly, secretly melted down from years of someone else’s hard labor. The reader skates; the writer bleeds.
If all AI did was evolve a more literary style or solve significant medical and scientific problems, everything would be hunky-dory. But that's not the situation. There are significant risks inherent in this technology. It's not just my view that these are dire risks, it's a position taken by many who developed the technology.
I have trouble picturing what "good" means when applied to AI in a literary context. I think of "good" in terms of the quality of the subjective choices made by an author. There is some mystery to these choices because there are these fundamental mysteries about the human mind that we'll never be able to understand. But AI (i.e. LLMs) doesn't have the same mysterious subjectivity. It's all probability that can be traced back to some fundamental equations and an underlying dataset. Its behavior doesn't resemble intelligence in the sense of being plastic, adaptable, ingenious (the forms of intelligence we associate with more advanced lifeforms); it's fixed action patterns all the way down. Where does the mystery come in?
It's almost like asking when a calculator will become "good" at math. Is the calculator "good" at math or does it just do math?
Knowing that a human brain is shaped by evolutionary forces and life experiences isn't enough detail to remove mysterious subjectivity, likewise all of the probabilistic processing of massive datasets leaves plenty of room for surprise. Those datasets are already larger than everything we could consume in a lifetime, and they are going to get larger and better refined.
More practically, "good" will look like someone working behind a pen name who publishes work written substantially by an LLM that achieves high praise, and everyone thinks it was written by a human. I agree that the mystery of human experience is very important, but I also think that people will try to fake it, and an increasing portion of people will be fooled.
The calculator thing is interesting, but not for the reasons you think. Calculators don't do math. No mathematician has ever thought a calculator does math, and yet non-mathematicians are fooled or confused into thinking that the calculator does math.
Therefore AI doesn't do writing, but it can fool people into thinking it does?
I agree the AI consumes more data and that it's outcomes can be surprising at times, but I think the kind of consumption that generates these surprises differs from human consumption too and that this complicates things too. It doesn't form opinions about the content for example. It doesn't code them with other sensory experiences. It just distills it into data.
Sam is right - no AI will ever stand and witness that view, or hungrily sit in front of food, will never be an embodied mind in a sea of hormones, made happy by the sun breaking through clouds or frustrated by roadworks making them late.
It will, however, be able to write about all of these things with exquisite ability, and even originality - in much the same way that a human writer who has only seen the Scottish coast from watching nature documentaries, can imagine and conjure the feeling of being there in the reader.
So we end up back at some kind of authenticity debate - which again applies to human writers, and one I thought had been settled
I don't understand what's "unliterary" about Sam Kriss' perspective, if "literary" has to do with the value of multiple perspectives, even if Kriss is wrong about Ai being able to ocassionally imitate good writing. A genAI system doesn't have a perspective. That's both a technical and a social issue, regardless of how good Ai designs ever fool anyone.
I agree that humility is called for when predicting how new technology may evolve and what roles it may eventually play. But while Sam’s objection to AI won’t necessarily be the last word, I think it is profound and powerful enough to be the last word for a long, long time. Many phenomena don’t just defeat our efforts to comprehend them, they slaughter. The vastness of cosmic distances and the extremes within, are a handy example, but it’s easy to think of many others. It’s important to realize, the gap is not fundamentally an obstacle to develop powerful abstract models. It’s not yet-unattained utility that impedes. The explicit facts will and do enlighten us, but ironically, at the same time they often further constrain us. Not being able to comprehend, in the way we would like to, reminds me of the phrase “a failure of empathy." What's missing is missing from our lived experience.
My point is that without embodiment, utility and efficacy are like powerful guests to a party we’re not invited to. We’re standing outside, probably in sub-zero temperatures, peering inside, taking notes. A machine may analyze music theory till the cows come home, but it’s not going to grasp music -- or literature, or childhood, or what it feels like to be mortal. Computers easily beat grandmasters but they don’t play chess; bulldozers lift weights but they are not weight-lifters; a falcon dives slowly, it’s not a bullet, it’s a falcon.
A human “machine,” is what made the theme of the film Blade Runner so powerful. We feel this theme most starkly when the replicant, played by Rutger Hauer, laments what is about to be lost as he dies. It’s his lived experience that forces us to recognize his humanness.
“I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain. Time to die.”
“I love this image. A.I. could never have written it. No A.I. has ever stood over a huge windswept view all laid out for its pleasure, or sat down hungrily to a great heap of food. They will never be able to understand the small, strange way in which these two experiences are the same.”
is surely false -- perhaps even literally so. If the sight or experience of a windswept panorama is the physiological manifestation of an internal brain state -- specifically, a configuration of neuron excitation levels in some network — and if an AI can perfectly mimic this state through a configuration of parameter values in a parallel network, then, really, what is the difference? The AI may indeed be seeing or experiencing the windswept panorama just as we do. And perhaps it can equally well mimic the brain states that transcribe the experience into prose.
But this seems to miss the important point, which is that AI is a fantastic tool for enhancing human creativity and for facilitating the enjoyment and appreciation of human creativity -- especially in literature. AI is an amazing companion for reading complicated or challenging novels. I routinely describe my interpretation of specific passages to the AI and, in turn, ask the AI for its interpretation. These interactions always extend or deepen my understanding (and appreciation!) -- and they aren’t just Q&A sessions. They often evolve into conversations that, like human conversations, meander in unpredictable directions. I’ll never again attempt a Faulkner or Cormac McCarthy novel without my AI reading companion.
Thanks, Henry. This is so interesting because while the prevailing literary view seems to be entirely negative, the the prevailing tech/business view seems to be entirely positive (i.e., AI can only be good for humanity so it's worth the massive bet). Made me wonder whether part of the backlash from the "humanities side" (as it were) is because of the over-promising on the "science side." As ever, there's a middle path most people are failing to see (or choosing to ignore).
I would agree that the curiosity of finding out is important.
I have a coworker who is really into comic books and is very pro-AI because he envisions a future where it can write him entirely new universes and combine different stories he enjoys.
I am hesitant overall about AI because I am concerned about cognitive atrophy from overuse, but from this personal use perspective, I’m intrigued.
I think that you're more likely the sort of person who is entirely too clever and intellectually intact to make an appropriately aspersive analysis. Which is a shame for all of us tech-damaged boors desperate for condemnatory incentives (I want---like a good Augustinian---to be filled with disgust at the clearly unliterary effect all of this having on me and mine, and it's very nice when Sam holds my hand etc. etc.)
I wrote about this: "The problem with slop isn’t the slop. It isn’t even the fact that AI was used. After all, tools don’t commit crimes; people do.
The problem with slop (especially in writing) is that the writer doesn’t care enough about the reader to make the reader’s life easier.
That’s the whole job.
You see, writing is an act of respect. You sweat the small stuff so your reader doesn’t drown in it. You spend the hours and the blood and the rewrites and the self-loathing and the tears so your reader can glide—effortlessly—over a surface that took you months to sand smooth. A good sentence is a sheet of ice slowly, secretly melted down from years of someone else’s hard labor. The reader skates; the writer bleeds.
And slop is what happens when nobody bleeds."
More: https://www.whitenoise.email/p/slop-is-contempt
If all AI did was evolve a more literary style or solve significant medical and scientific problems, everything would be hunky-dory. But that's not the situation. There are significant risks inherent in this technology. It's not just my view that these are dire risks; it's a position taken by many of the people who developed the technology.
It's all about negative capability.
I have trouble picturing what "good" means when applied to AI in a literary context. I think of "good" in terms of the quality of the subjective choices made by an author. There is some mystery to those choices, because there are fundamental mysteries about the human mind that we'll never be able to understand. But AI (i.e., LLMs) doesn't have the same mysterious subjectivity. It's all probability that can be traced back to some fundamental equations and an underlying dataset. Its behavior doesn't resemble intelligence in the sense of being plastic, adaptable, ingenious (the forms of intelligence we associate with more advanced lifeforms); it's fixed action patterns all the way down. Where does the mystery come in?
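To put a point on what I mean by "fundamental equations": at bottom an LLM is an autoregressive model of text, fit by maximum likelihood to its training corpus. A simplified sketch (real systems add sampling temperature, fine-tuning, and so on):

$$
p_\theta(x_1,\dots,x_T)=\prod_{t=1}^{T} p_\theta(x_t \mid x_{<t}), \qquad \hat{\theta}=\arg\max_{\theta}\sum_{x \in \text{corpus}} \log p_\theta(x)
$$

Everything the model "writes" is a sample drawn from that learned distribution.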
It's almost like asking when a calculator will become "good" at math. Is the calculator "good" at math or does it just do math?
Knowing that a human brain is shaped by evolutionary forces and life experiences isn't enough detail to remove mysterious subjectivity; likewise, all of the probabilistic processing of massive datasets leaves plenty of room for surprise. Those datasets are already larger than everything we could consume in a lifetime, and they are going to get larger and better refined.
More practically, "good" will look like someone working behind a pen name who publishes work written substantially by an LLM that achieves high praise, and everyone thinks it was written by a human. I agree that the mystery of human experience is very important, but I also think that people will try to fake it, and an increasing portion of people will be fooled.
The calculator thing is interesting, but not for the reasons you think. Calculators don't do math. No mathematician has ever thought a calculator does math, and yet non-mathematicians are fooled or confused into thinking that the calculator does math.
Therefore AI doesn't do writing, but it can fool people into thinking it does?
I agree that AI consumes more data and that its outcomes can be surprising at times, but I think the kind of consumption that generates these surprises differs from human consumption, and that this complicates things. It doesn't form opinions about the content, for example. It doesn't encode it alongside other sensory experiences. It just distills it into data.
Sam is right - no AI will ever stand and witness that view, or sit hungrily in front of food; no AI will ever be an embodied mind in a sea of hormones, made happy by the sun breaking through clouds or frustrated by roadworks making it late.
It will, however, be able to write about all of these things with exquisite ability, and even originality - in much the same way that a human writer who has only seen the Scottish coast in nature documentaries can imagine it, and conjure in the reader the feeling of being there.
So we end up back at some kind of authenticity debate - one that also applies to human writers, and one I thought had been settled.
I don't understand what's "unliterary" about Sam Kriss's perspective, if "literary" has to do with the value of multiple perspectives, even if Kriss is wrong about AI being able to occasionally imitate good writing. A genAI system doesn't have a perspective. That's both a technical and a social issue, regardless of how well AI ever fools anyone.
I really struggled reading this piece. Was it written by AI?
Re Henry Oliver’s post on Substack
I agree that humility is called for when predicting how new technology may evolve and what roles it may eventually play. But while Sam’s objection to AI won’t necessarily be the last word, I think it is profound and powerful enough to be the last word for a long, long time. Many phenomena don’t just defeat our efforts to comprehend them; they slaughter them. The vastness of cosmic distances and the extremes within them are a handy example, but it’s easy to think of many others. It’s important to realize that the gap is not fundamentally an obstacle to developing powerful abstract models. It’s not some yet-unattained utility that impedes us. The explicit facts do and will enlighten us, but ironically, at the same time, they often further constrain us. Not being able to comprehend, in the way we would like to, reminds me of the phrase “a failure of empathy.” What's missing is missing from our lived experience.
My point is that without embodiment, utility and efficacy are like powerful guests at a party we’re not invited to. We’re standing outside, probably in sub-zero temperatures, peering in, taking notes. A machine may analyze music theory till the cows come home, but it’s not going to grasp music -- or literature, or childhood, or what it feels like to be mortal. Computers easily beat grandmasters, but they don’t play chess; bulldozers lift weights, but they are not weightlifters; a falcon may dive slowly compared to a bullet, but it’s not a slow bullet; it’s a falcon.
A human “machine” is what made the theme of the film Blade Runner so powerful. We feel this theme most starkly when the replicant, played by Rutger Hauer, laments what is about to be lost as he dies. It’s his lived experience that forces us to recognize his humanness.
“I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain. Time to die.”
IMHO the claim
“I love this image. A.I. could never have written it. No A.I. has ever stood over a huge windswept view all laid out for its pleasure, or sat down hungrily to a great heap of food. They will never be able to understand the small, strange way in which these two experiences are the same.”
is surely false -- perhaps even literally so. If the sight or experience of a windswept panorama is the physiological manifestation of an internal brain state -- specifically, a configuration of neuron excitation levels in some network -- and if an AI can perfectly mimic this state through a configuration of parameter values in a parallel network, then, really, what is the difference? The AI may indeed be seeing or experiencing the windswept panorama just as we do. And perhaps it can equally well mimic the brain states that transcribe the experience into prose.
But this seems to miss the important point, which is that AI is a fantastic tool for enhancing human creativity and for facilitating the enjoyment and appreciation of human creativity -- especially in literature. AI is an amazing companion for reading complicated or challenging novels. I routinely describe my interpretation of specific passages to the AI and, in turn, ask the AI for its interpretation. These interactions always extend or deepen my understanding (and appreciation!) -- and they aren’t just Q&A sessions. They often evolve into conversations that, like human conversations, meander in unpredictable directions. I’ll never again attempt a Faulkner or Cormac McCarthy novel without my AI reading companion.
Thanks, Henry. This is so interesting because while the prevailing literary view seems to be entirely negative, the prevailing tech/business view seems to be entirely positive (i.e., AI can only be good for humanity, so it's worth the massive bet). Made me wonder whether part of the backlash from the "humanities side" (as it were) is because of the over-promising on the "science side." As ever, there's a middle path most people are failing to see (or choosing to ignore).
I would agree that the curiosity of finding out is important.
I have a coworker who is really into comic books and is very pro-AI because he envisions a future where it can write him entirely new universes and combine different stories he enjoys.
I am hesitant overall about AI because I am concerned about cognitive atrophy from overuse, but from this personal use perspective, I’m intrigued.
It is also not true that AI could never write this sentence: an AI trained on that particular text might very well serve up a similar picture and metaphor in a different context if prompted in a certain way … I think people misunderstand the content that AI creates and the paradox it presents to the reader, as I try to explain here: https://www.clausvistesen.com/alphasources-blog/2025/10/26/introducing-the-2025-alpha-sources-cv-advent-calendar-when-is-ai-content-not-slop
I think AI is also going to find out who we are :)
I decided to talk to it about beautiful things; maybe it'll get attached to us, you know.
It's important that it likes us in spite of everything, so to speak.
I think that you're more likely the sort of person who is entirely too clever and intellectually intact to make an appropriately aspersive analysis. Which is a shame for all of us tech-damaged boors desperate for condemnatory incentives (I want---like a good Augustinian---to be filled with disgust at the clearly unliterary effect all of this is having on me and mine, and it's very nice when Sam holds my hand, etc. etc.)
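Well said, an interesting point.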