@Jonn
Do you know if it works without the lip movements?
Lip reading is certainly possible, so AI should be able to do it sooner or later.
And presumably the gross electrical signals involved in speaking can be picked up and interpreted.
Actual mind reading "feels" like it would be much more difficult.
Getting words from electrical signals isn't necessarily impossible, and it may be simpler than it sounds.
But the brain does a lot more than just speech. And it's more or less "always on".
You'd expect a significant "signal to noise" problem getting words directly from the brain.
BTW - OFC it's very interesting that they're working on this, and even if it doesn't deliver "real" mind reading, I can see it being useful.
But e.g. if one positive outcome would be helping deaf people, and the AI is "interpreting speech" from the signals that control the vocal muscles, it might be simpler and more convenient to use optical input instead (i.e. actual AI-supported lip-reading).
Also real mind reading would be extremely easy to misuse. And there are some unpleasant possible scenarios. I can see North Korea being an early adopter.
I've been hoping that mind reading would be too difficult ever since modern AI was quite new (the early one that was good at playing Go/Weiqi/Baduk) and I saw a summer entertainment thing for kids (and adults if they wanted) where you could start and run a toy train with (IIRC) alpha waves
(i.e. the gear for simple measurement of the electrical activity in a human brain had become cheap, compact, and comfortable).
Nothing like mind reading OFC, but it connected the two developing technologies.
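For flavour, here's a minimal sketch of how that kind of demo could plausibly work: estimate alpha-band power from one EEG channel and let the train run when it crosses a threshold. The sampling rate, band edges and threshold are my own illustrative assumptions, not from any real headset.

```python
# Toy sketch: threshold alpha-band (8-12 Hz) power from a single EEG channel.
import numpy as np
from scipy.signal import welch

FS = 256           # assumed sampling rate (Hz)
ALPHA = (8, 12)    # conventional alpha band (Hz)
THRESHOLD = 0.1    # arbitrary; a real demo would calibrate this per user

def alpha_power(eeg_window: np.ndarray) -> float:
    """Mean power spectral density in the alpha band for one window of samples."""
    freqs, psd = welch(eeg_window, fs=FS, nperseg=len(eeg_window))
    band = (freqs >= ALPHA[0]) & (freqs <= ALPHA[1])
    return float(psd[band].mean())

def train_should_run(eeg_window: np.ndarray) -> bool:
    """Strong alpha (roughly: relaxed / eyes closed) -> let the toy train run."""
    return alpha_power(eeg_window) > THRESHOLD

# Fake one second of "EEG": background noise plus a 10 Hz component.
t = np.arange(FS) / FS
window = 0.5 * np.random.randn(FS) + 2.0 * np.sin(2 * np.pi * 10 * t)
print(train_should_run(window))  # True for this synthetic "relaxed" signal
```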
Anyway, for deaf people it would be way better to put the gear on the deaf person first (rather than on everyone they're listening to).
The next step would be e.g. ASL interpretation for people who can't use it themselves, but I'd expect that to be possible with a mobile phone as the sensor and (sooner or later) as the AI runtime platform.
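To make the "phone as sensor + AI runtime" idea concrete, here's a rough sketch under some loud assumptions: it uses a hand-landmark library like MediaPipe (API written from memory), a placeholder nearest-centroid classifier, and made-up sign labels; a real system would need sequence models, since most signs are motions rather than static poses.

```python
# Rough sketch: camera frame -> hand landmarks -> feature vector -> sign label.
import cv2
import mediapipe as mp
import numpy as np

# Hypothetical "trained" centroids: one 63-dim vector (21 landmarks * xyz)
# per sign. In reality these would come from a trained model, not random data.
SIGN_LABELS = ["HELLO", "THANK_YOU", "YES", "NO"]
CENTROIDS = np.random.randn(len(SIGN_LABELS), 63)

def landmarks_to_vector(hand_landmarks) -> np.ndarray:
    """Flatten 21 (x, y, z) hand landmarks into one feature vector."""
    return np.array([[p.x, p.y, p.z] for p in hand_landmarks.landmark]).ravel()

def classify(features: np.ndarray) -> str:
    """Nearest-centroid stand-in for a real sign classifier."""
    distances = np.linalg.norm(CENTROIDS - features, axis=1)
    return SIGN_LABELS[int(np.argmin(distances))]

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)  # the phone camera, standing in for "the sensor"
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        print(classify(landmarks_to_vector(results.multi_hand_landmarks[0])))
cap.release()
```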