16 Comments
Seth

For whatever it's worth, current 'mind reading' technology is better understood as 'neurally controlled' technology. For example, a paralyzed patient using a robotic arm via neural implant; the robot is not reading the patient's mind so much as the patient's mind has figured out how to control the robot. The robot runs some statistical models to try to meet the patient halfway, but that's as far as it goes. AFAIK pretty much all brain-machine interfaces are like this.

For this reason (among others), I think dystopia-level mind reading is very, if not infinitely, far off. There are just much easier ways to get a dystopia!
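To make the "statistical models" point concrete, here is a toy sketch of the kind of decoder I mean: a regularised linear map from binned firing rates to the arm velocity the patient is trying to produce, fit on a short calibration block. Everything here (the channel count, the simulated data, the choice of ridge regression) is made up for illustration; real implants differ in the details.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy calibration data (all simulated): 2000 time bins of spike counts
# from 96 electrodes, plus the cued 2-D hand velocity for each bin.
rng = np.random.default_rng(0)
n_bins, n_channels = 2000, 96
true_tuning = rng.normal(size=(n_channels, 2))    # each channel "prefers" a direction
cued_velocity = rng.normal(size=(n_bins, 2))      # what the patient is trying to do
firing_rates = cued_velocity @ true_tuning.T + rng.normal(scale=2.0, size=(n_bins, n_channels))

# The "statistical model meeting the patient halfway": a regularised
# linear decoder from neural activity to intended velocity.
decoder = Ridge(alpha=1.0).fit(firing_rates, cued_velocity)

# At run time the implant streams new spike counts and the decoder
# turns them into velocity commands for the robotic arm.
new_activity = rng.normal(size=(1, n_channels))
velocity_command = decoder.predict(new_activity)
print(velocity_command)  # e.g. [[0.1, -0.3]] -- a movement command, not a "thought"
```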

Henry Oliver

I hope so!

Seth

Perhaps more fundamentally, I don't think brain states are really "readable" in the way you seem to have in mind. A robot trying to read our mind would probably look like the Samuel Johnson reading meme (https://tinyurl.com/26h9jky9).

(guy with a neuroscience PhD, fwiw, though I'm sure neuroscience PhDs at Neuralink would disagree with me)

Henry Oliver

yeah, this is what I keep asking and they seem hopeful... but obvs they would

JulesLt71

There was a news story this week about some success in reading sub-vocalisation rather than words - as in, the signals that should be going to the voice box to make sounds - and how this could lead to more expressive computer speech for those who need it.

Previous systems needed people to read and think about specific words from a set vocabulary, which the scanning system could then (inaccurately) recognise when you thought of that word - whereas this offers the possibility of the full human vocabulary by mapping just the phonemes that make up the words - phonics for AIs!
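As a toy illustration of the difference (completely simulated data and model, nothing to do with the actual system in the story): instead of training one recogniser per word in a fixed list, you train a single classifier over a small phoneme inventory and then spell out any word from the phonemes it predicts.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy illustration only: a small phoneme classifier instead of a fixed
# word list. Features and labels are simulated, not real neural signals.
rng = np.random.default_rng(1)
phonemes = ["HH", "EH", "L", "OW"]        # a tiny phoneme inventory
n_per_class, n_features = 200, 32

# Simulated training data: each phoneme class gets a distinct feature mean.
X = np.vstack([rng.normal(loc=3.0 * i, scale=1.0, size=(n_per_class, n_features))
               for i in range(len(phonemes))])
y = np.repeat(np.arange(len(phonemes)), n_per_class)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Decoding a new utterance: classify each frame's phoneme, then join them.
frames = np.vstack([rng.normal(loc=3.0 * i, scale=1.0, size=(1, n_features))
                    for i in [0, 1, 2, 3]])  # simulated frames for "HH EH L OW"
decoded = [phonemes[i] for i in clf.predict(frames)]
print(decoded)  # ['HH', 'EH', 'L', 'OW'] -> "hello", without "hello" ever being in a word list
```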

I wonder if there is a difference between internal thoughts and sub-vocalised speech? (I mean, I can definitely think in different accents)

Dawn Walter

Susie Alegre, an international human rights lawyer, would argue that tech companies like Meta and Google can already read our minds. She wrote about freedom of thought in her book, Freedom to Think (published in 2022).

Matthew Morgan

Lovely bit of serendipity here: about two minutes before reading this short essay, I read an even shorter poem from R. S. Thomas, and both sit well together:

"But the silence in the mind

is when we live best, within

listening distance of the silence

we call God. This is the deep

calling to deep of the psalm-

writer, the bottomless ocean

We launch the armada of

our thoughts on, never arriving."

Kate Rettinger

I appreciate your thoughts and the quotations from Montaigne and Donne. I think some aspects of AI will be helpful, but I hope the benefits will not be outweighed by the negative impacts. The changes are happening so fast that it all feels out of control. I find it disturbing. Ethical considerations appear to have been an afterthought.

Nick

> I hope not. In general, I am techno-optimistic. If neuralink can reverse hearing loss, or create other such benefits, that is a sort of miracle.

I'm not. I'd take hearing loss and other suffering (including personal) if it meant we didn't get a slave society controlled by fancy tech.

AbigailAmpersand

“Windows into men’s souls” dot com

But they’re all probably working on some kind of ‘blocker’ implant too. Jamming the enemy’s brainchip hackers. Counter-measures, y’know. Have you read any Alastair Reynolds ‘hard sci-fi’, i.e. properly scientific speculative fiction? He reckons we’ll all have to have regular maintenance patches to fix our nano-tech biological viruses. So you might have amazing cures for illnesses or weird body modifications, but they will always need upgrading and protection from malware (of course).

David Rizzo

Well, I certainly hope they won’t be able to read our minds. I’m trying to figure out how they would do it, though. They might manage a more sensitive or accurate lie detector, but actually know the contents of our thoughts? I’m not saying it’s impossible, but they’d have to be able to map an accurate lexicon and grammar onto our neural activation patterns. I know functional MRI can show the neural networks a person is using when performing a task, but decoding thoughts like an actual reading seems very far off. It does seem terrifying. I hope this eludes our technological abilities.

Buddy S.

I do think the ‘googles’, Facebook (Meta), and the Amazon database know us very well. Every time we hit the ‘like’ button, the machine is learning.

Henry Oliver

not the same thing at all though

Buddy S.

Elaborate please. I fixed my typo.

Henry Oliver

they cannot actually read your mind, whereas these machines might; there's a huge difference between tracking observed behaviours and seeing your internal thoughts on a screen

Buddy S.

I understand. I guess a profile of the user can be made to predict possible outcomes, but not to read minds. They know enough, though.
