My biggest problem with writing for AI is that AI still likes its own writing the best. Every time I ask it for feedback on an essay, it pushes to make things more anodyne, bullet-pointed, empirical, anti-lyrical, a business memo of an article with no digressions or explorations, which are exactly what make good writing good! I'm all for the theme of the piece; just noting this annoyance.
Do you ask Claude?
I ask all three, dislike them all for different reasons, but pushing to the median is common to all, albeit in different ways.
I find that I pick and choose the feedback I take, which is often the case with human editors and writers
Yeah same, just … tired of mediocrity not just being offered but preferred
I think the idea of writing for immortality into a statistical machine is interesting when you compare it to John Keats trudging through the moors thinking about immortality in verse, or any other Romantic. I suppose all of us writers are seeking it in some form or another. Though it's hard to imagine asking for the 'best unknown' writer right now, since so much of an LLM's output is based on built-up pattern recognition. Hence every single time I ask for a non-traditional literary chapter, I get back Jennifer Egan's PowerPoint chapter in A Visit From the Goon Squad. It's maddening, every single time. The model can't go to the edges of its training data to get those unknown gems, but in a few years, who knows?
Idiocy on many levels; the least of them, but decisive: present AI is a passing capitalist scam. Anything like actual artificial intelligence will be quite different. Read Gary Marcus or Ed Zitron.
Hmm, interesting in theory, but the AIs for whom one would wish to write or who could “like” something in a critical sense don’t yet exist. Furthermore, those that do exist aren’t yet on a pathway to becoming those AIs for whom one would wish to write. It’s one thing to be optimistic on AI in general, but this doesn’t feel like it’s engaging with how actually-existing AI works.
Even if you enjoy and are optimistic about the current LLMs that we have, it is not denial to suggest that these models aren’t capable of appreciating or dealing critically with great writing yet, let alone recommending it to human readers. This is not the view of grumpy writers, but of many technologists (e.g. Gary Marcus, who has been mentioned).
I suppose I should ask, what feature of actually-existing AI in the form of LLMs makes you see a pathway to a future of thoughtful, critical AIs for whom a human may want to write, may want to make a connection with? I’m yet to be convinced by yours or Dan’s excitement, at least on this aspect of AI.
Exactly.
When you ask a question or provide a prompt, the LLM generates a response by calculating the most statistically probable sequence of words that should follow, based on the patterns it has learned.
It is a sophisticated prediction engine; there is no comprehension. It does not "understand" the meaning behind the words in a human way. It processes symbols based on their statistical co-occurrence. It doesn't "know" anything.
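To make that "prediction engine" point concrete, here is a minimal, purely illustrative sketch: a toy bigram table stands in for whatever statistics a real network has learned (nothing here reflects how any particular model is actually built), and the generator simply scores candidate continuations by co-occurrence and strings the likely ones together.

```python
import random

# Toy stand-in for a trained language model: a table of bigram co-occurrence
# counts. A real LLM uses a neural network over long contexts, but the step
# described above is the same in spirit: score candidate next tokens by
# learned statistics, then pick one.
bigram_counts = {
    "the": {"cat": 6, "dog": 3, "idea": 1},
    "cat": {"sat": 7, "ran": 3},
    "sat": {"down": 5, "quietly": 5},
}

def next_token(prev, temperature=1.0):
    """Sample the next token given the previous one, weighted by co-occurrence."""
    counts = bigram_counts.get(prev, {})
    if not counts:
        return None
    tokens = list(counts)
    weights = [c ** (1.0 / temperature) for c in counts.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

def generate(start, length=4):
    """String together statistically likely continuations; no comprehension involved."""
    out = [start]
    for _ in range(length):
        tok = next_token(out[-1])
        if tok is None:
            break
        out.append(tok)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat down"
```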
Is an LLM even *capable* of "liking" something? (Per your Kagan-Kans quote that some future superintelligence might "like" your writing.)
This question asks for more than a comment, but I think it's the latest chapter in the history of how new technologies have changed the way people create (the rise of the three-minute single due to 7-inch vinyl records, TikTok making hooks arrive earlier in songs, etc.). On the plus side, as you say, writing for AI increases your potential reach and influence: discoverability and building a readership is still one of the main issues for many writers. The danger is that this new mediating layer between writers and readers has an inevitable deadening influence on prose style, craft and taste (as Rohit mentions). I worry about a world where everyone reads the summary rather than the actual writing. Two counterpoints, both from Japanese writers:
'One opposite of imagination is efficiency.' - Haruki Murakami, The Novelist as Vocation
'And so we distort the arts themselves to curry favour with the machines.' - Junichiro Tanizaki, In Praise of Shadows
In Praise of Shadows was written in 1933! How distorted were the arts for the last century?
Almost every era brings people reacting to how a new technology affects art and society, often in quite reactionary ways (evidence: https://pessimistsarchive.org/). I don't, however, believe that its age renders the point Tanizaki was making automatically obsolete.
“LYMAN: Well, insurance is basically comical, isn’t it?—at least pathetic.
LEAH: Why?
LYMAN: You’re buying immortality, aren’t you?—reaching out of your grave to pay the bills and remind people of your life? It’s poetry. The soul was once immortal, now we’ve got an insurance policy.
LEAH: You sound pretty cynical about it.
LYMAN: Not at all—I started as a writer, nobody lusts after the immortal like a writer.”
The Ride Down Mt. Morgan, Scene 3 (1991), Arthur Miller
Hi there, I posted a long-ish reply over at the debate. Probably should've posted it here.
https://open.substack.com/pub/commonreader/p/ai-and-the-future-of-literature?utm_campaign=comment-list-share-cta&utm_medium=web&comments=true&commentId=173171555
Yet another cloyingly uncritical take on AI and writing. As if it wasn’t enough that people are using LLMs to write — and in so doing denaturing the human thought, experience and emotion that goes into the craft — Mr. Oliver espouses writing for an audience made up entirely of LLMs, creating some kind of perverse ouroboros of capitalist futility. One can only deduce that he has very little appreciation for the value of the written word.
Is the source "American Conservative" or "American Scholar"?
Whoops
Writing for AI reminds me of writing to enhance searchability. "Quick - add keywords to the first sentence... the first paragraph," the marketing mavens advise. Yet squeezing them in can sometimes turn an elegant phrase into something cumbersome. Alas, it is the wave of the future... and I do like AI for research. It's particularly good at elucidating industry knowledge.