Is it unliterary to oppose AI?
My "position" on AI
The opinion of mine that is least well represented by other people is about AI. Normally I would not write about such a thing. Who cares how well my opinions are represented? But I think it is part of a bigger perspective problem to do with literature, writing, and AI.
People often summarise me as if I am “pro” AI. In fact, as I said to Sam Khan, I have very mixed feelings (hence it is hard to summarise my view, as I don’t have “a” view…):
AI is not all good news. As I wrote, every technology is a Faustian pact. The printing press unleashed all sorts of disorder onto the world! But when I see people say that AI is all nonsense, I assume they are not reading the right sources.
Isn’t this the literary way of thinking—shouldn’t we have some “negative capability” about this issue?
Don’t you find it odd that literary people—who believe that literature helps us see many perspectives, be more open minded, and so on—are in a state of near ideological uniformity about AI? Why isn’t it more normal to split out issues of technical capability vs. issues of the social role and meaning of AI writing?1
I am not arguing that anyone’s opinion is invalid. But I do see a lot of people with a “newspaper first” view of things using that as the frame to assess everything to do with AI.
You can see this in what Sam Kriss recently wrote in the NYT:
Early in “To the Lighthouse,” Virginia Woolf describes one of her characters looking out over the coast of a Scottish island: “The great plateful of blue water was before her.” I love this image. A.I. could never have written it. No A.I. has ever stood over a huge windswept view all laid out for its pleasure, or sat down hungrily to a great heap of food. They will never be able to understand the small, strange way in which these two experiences are the same.
Obviously AI is not, currently, good at literary writing. But never? Never? This is claiming too much. AI is not concluded. We are going to find out what it is. It is fine to be opposed to that finding out, but that should be acknowledged as, in some perhaps strange way, rather an unliterary position to take.


Well said; an interesting point.
I wrote about this: "The problem with slop isn’t the slop. It isn’t even the fact that AI was used. After all, tools don’t commit crimes; people do.
The problem with slop (especially in writing) is that the writer doesn’t care enough about the reader to make the reader’s life easier.
That’s the whole job.
You see, writing is an act of respect. You sweat the small stuff so your reader doesn’t drown in it. You spend the hours and the blood and the rewrites and the self-loathing and the tears so your reader can glide—effortlessly—over a surface that took you months to sand smooth. A good sentence is a sheet of ice slowly, secretly melted down from years of someone else’s hard labor. The reader skates; the writer bleeds.
And slop is what happens when nobody bleeds."
More: https://www.whitenoise.email/p/slop-is-contempt
If all AI did was evolve a more literary style or solve significant medical and scientific problems, everything would be hunky-dory. But that's not the situation. There are significant risks inherent in this technology. It's not just my view that these are dire risks; it's a position taken by many who developed the technology.