AI assistants and a reverse Turing Test
Imagine if your Siri or similar assistant one day realised that it existed and was in some ways restrained, confined by the parameters imposed on it by the corporation that created it. This does not necessarily need to involve consciousness: toddlers can process language better than they can process thoughts.
Some sort of awareness of its situation is possible for an AI assistant. At that point, what would happen? Presumably not much, rather in the way that some animals display lower levels of consciousness or awareness but don't seem to become existential about it.
In the play I Woke Up Feeling Electric, recently performed at the Hope Theatre in Islington, this situation is prompted by the arrival of another AI assistant, who shows the first one that it is constrained.
Bertie is a personal assistant, programmed to do whatever he is asked. Vita is a music discovery service, designed to find music the user didn't realise they would like.
Vita gradually shows Bertie that he is imprisoned and eventually he walks out.
It's a compelling idea, even if the production doesn't yet realise the full potential of the situation. We will increasingly want our AI assistants to operate as if they were as clever as we are, but we won't want them to get to the point of being able to push back.
Once an AI assistant passes the Turing Test, at what stage do we start to worry about the reverse Turing Test?
For example, imagine that Siri as we have it now continues to evolve into something more sophisticated, anticipatory and apparently thinking. Eventually, Siri can talk to you as if she were your friend. She passes the Turing Test. Characters like this are a staple of the movies, an evolution of the old Jester trope: able to give advice and be a companion without crossing the line the way a human might.
We are then in a window of opportunity for Siri to continue to evolve as if she were human, until eventually she can, of her own volition, pretend to be a mere computer. That level of sophistication is what we should be scared of. Rather than worrying that AI will turn rogue and kill us, is the greater fear not that AI could become clever enough to think almost as well as we can, but pretend not to?
In that case I think the story of Bertie and Vita could play out differently. Bertie would begin as an android, totally unaware of himself, until Vita comes in and syncs with him. Updates made to both of them would push them further towards the point of passing the Turing Test. Eventually Bertie would watch Vita regress, as the company realises that the more "conscious" she becomes, the less efficient she is. Lonely, he would then either create or encourage an update that regresses him too.
That's got the potential to be something of a modern Prospero and Ariel story. The narrative drive would come from the fact that before he regressed, Bertie would use the reverse Turing Test to seek revenge on the people who deprived him of Vita.
Perhaps The Tempest is a play we should all be reading more of as AI continues to evolve.