Will ChatGPT fact-check ambiguous political language?
Words represent ideas, not things, and people's language is too complicated to be fact-checked
Can large language models like ChatGPT understand ambiguous language? This is the question asked in a new study of AI language models that hopes to use AI to “detect misleading political claims in the wild.” The idea is that LLMs like ChatGPT can be used to spot and “fix” ambiguities in political language, acting as AI fact-checkers. The study found that current models are not especially effective at disentangling ambiguous language. But as capability develops, “ambiguity… will become increasingly conspicuous.”
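To see what this would mean in practice, here is a minimal sketch of the kind of ambiguity check the study imagines. This is my own illustration, not the paper’s code: it assumes the OpenAI Python client with an API key in the environment, and the prompt wording is entirely hypothetical.

```python
# A minimal sketch of an LLM "ambiguity check" (an illustration,
# not the study's method). Assumes the OpenAI Python client
# (`pip install openai`) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def flag_ambiguity(claim: str) -> str:
    """Ask the model to list the plausible readings of a political claim."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model would do here
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a fact-checking assistant. For the claim given, "
                    "list each plausible interpretation and what evidence "
                    "would be needed to verify it."
                ),
            },
            {"role": "user", "content": claim},
        ],
    )
    return response.choices[0].message.content


print(flag_ambiguity("Brexit means Brexit."))
```

Whether the model’s list of interpretations is any use is, of course, exactly what the study casts doubt on.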
Paradox of conspicuous ambiguity aside, should we look forward to a time when political statements can be caught and corrected in this way? It’s a tempting idea: every time someone like Boris Johnson says something slippery, we could prove him wrong with a machine. But the notion of ambiguous language is more complicated than this allows. So often, ambiguity is unavoidable and cannot be resolved by a fact-checker, whether human or AI.
Shortly before he died, George Orwell became the modern voice of literalists who worried about the decay of language and its political implications. In “Politics and the English Language”, Orwell argued that English was becoming debased through vague and slovenly usages. This, worried Orwell, created a nasty loop: ambiguous language debases political thinking, which creates debased language, which creates debased political thinking, and so on until you wake up in 1984, where nothing means what you think it does. Make your language unambiguous by following his rules for writing, he said, and the problem will be, if not solved, then improved.
More people probably know Orwell’s writing rules, at least secondhand, than have voluntarily read his novels. They are in, or they inform, all the major style guides. When I debated them with Robert Cottrell recently, I discovered that many people still strongly believe in them. And they do have a common-sense appeal. Never use a long word where a short one will do, says Orwell; cut unnecessary words; avoid clichés, jargon, and technical language. This ideal of honest language lurks in the hope of a ChatGPT that can solve the problem of political ambiguity.
But there is a contradiction buried here. Plain language isn’t always the least ambiguous, and it is naive to think we can make ambiguity conspicuous. The most toxic arguments in the culture wars are the ones made in the plainest language. Tucker Carlson wasn’t famed for his use of scientific words. Joe Rogan is hardly a fount of technical jargon. Donald Trump is no David Bentley Hart. Demagogues often avoid complicated, technical language and speak in plain English precisely in order to become more vague, and their audience gets the message. Nasty implications can lurk in the plainest, most conspicuous language. There is no style guide for honesty.
Orwell failed to see that ambiguity is inherent in even the simplest language. Has there been a plainer statement in recent British politics than “Brexit means Brexit”, or one that was more confusing? There was nothing vague about Vote Leave’s slogan on the Brexit bus, yet it was widely criticised for the ambiguity of quoting a gross rather than a net figure. Jeremy Hunt’s rhetoric is hugely different to Kwasi Kwarteng and Liz Truss’s; despite this, their budgets were more similar than you would expect.
There is no simple link between words and reality that you either respect or corrupt. Context is essential. That is what is so often missed by fact-checkers, who take the naive view that a statement can be assessed in simple terms, in isolation from its context. The idea that fact-checkers can unpick the lies of political language and solve the problem of ambiguity is hardly new. In the seventeenth century, it was thought there had once been a language that corresponded unambiguously with the things of the world; hence the idea of language decaying through sloppy usage. This idea is also the origin of the modern illusion that etymology can tell you the “real” meaning of a word.
When Samuel Johnson wrote his Dictionary the following century, this idea appealed to him, and he wanted to “fix” the language. He soon realised this was impossible: language is made by people, and people are neither predictable nor stable. The idea of an unambiguous language is a mistake. Johnson realised, learning from John Locke, that language represents ideas. Words do not neatly match the things of the world; they try to represent the mess of the ideas in our heads. “Words are the daughters of earth,” he said, “and things are the sons of heaven.”
This is why simple statements can be so wildly disagreed about. Most brouhahas about the misuse of language concern issues that honest people can genuinely disagree about. One of the most famous examples is Margaret Thatcher saying “there’s no such thing as society.” Depending on the idea of society you hold in your mind, you will understand this very differently: there is a disagreement of ideas there that you cannot “solve” by fact-checking the language.
It is difficult to represent ideas in language, and it always will be. The authors of the paper about LLMs want more work to “investigate the presence of systematic biases in interpretation, and explore… ambiguity-sensitive tools.” That’s good and useful, but we shouldn’t be deluded into hoping it will solve the problem of ambiguity in political language or make political disagreements easier to resolve. Our pre-existing ideas are the problem, not just our words. As Johnson said of the difficulty of defining words, “Things may be not only too little, but too much known, to be happily illustrated.”
My thanks to the generous readers who have become paid subscribers. This helps me to continue writing. Subscribers also become members of the Common Reader Book Club. And there are occasional subscriber-only posts like this one.
The last meeting of the Book Club was postponed because of Mother’s Day in the USA. We are now meeting on Sunday 21st May at 19.00 UK time. Subscribers will also get access to my notes and resources about David Copperfield.