
Brain implant turns thoughts into speech in near real-time

A brain implant using artificial intelligence turned a paralysed woman's thoughts into speech almost instantly, US researchers said Monday.
Though still experimental, the implant, which links the brain to a computer, raised hopes that such devices could allow people who have lost the ability to communicate to regain their voices.
The California-based team of researchers had previously used a brain-computer interface (BCI) to decode the thoughts of Ann, a 47-year-old with quadriplegia, and translate them into speech.
However, there was an eight-second delay between her thoughts and the speech being read aloud by a computer.
This meant a flowing conversation was still out of reach for Ann, a former high school math teacher who has not been able to speak since suffering a stroke 18 years ago.
But the team's new model, revealed in the journal Nature Neuroscience, turned Ann's thoughts into a version of her old speaking voice in 80-millisecond increments.
"Our new streaming approach converts her brain signals to her customised voice in real time, within a second of her intent to speak," senior study author Gopala Anumanchipalli of the University of California, Berkeley told AFP.
Ann's eventual goal is to become a university counsellor, he added.
"While we are still far from enabling that for Ann, this milestone takes us closer to drastically improving the quality of life of individuals with vocal paralysis."
- 'Excited to hear her voice' -
For the research, Ann was shown sentences on a screen -- such as "You love me then" -- which she would say to herself in her mind.
Then her thoughts would be converted into her voice, which the researchers built up from recordings of her speaking before her stroke.
Ann was "very excited to hear her voice, and reported a sense of embodiment," Anumanchipalli said.
The BCI intercepts brain signals "after we've decided what to say, after we've decided what words to use and how to move our vocal tract muscles," study co-author Cheol Jun Cho explained in a statement.
The model uses an artificial intelligence method called deep learning, trained on brain signals recorded while Ann silently attempted to speak thousands of sentences.
It was not always accurate -- and still has a limited vocabulary of 1,024 words.
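As a rough illustration of that training setup, the sketch below pairs neural recordings with target acoustic features and fits a small recurrent network by regression. The channel count, feature sizes, and synthetic data are assumptions for illustration, not the published model.

```python
import torch
import torch.nn as nn

class NeuralToAcoustic(nn.Module):
    """Recurrent regression from neural features to acoustic features;
    all sizes here are illustrative assumptions."""
    def __init__(self, n_channels=253, hidden=256, n_acoustic=80):
        super().__init__()
        self.rnn = nn.GRU(n_channels, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_acoustic)

    def forward(self, x):          # x: (batch, time, channels)
        h, _ = self.rnn(x)
        return self.out(h)         # (batch, time, acoustic features)

model = NeuralToAcoustic()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# one training step on a synthetic (neural, target-acoustic) pair,
# standing in for a silently attempted sentence and its target voice
neural = torch.randn(1, 500, 253)   # ~2.5 s of features (assumed rate)
target = torch.randn(1, 500, 80)    # matching acoustic frames
opt.zero_grad()
loss = loss_fn(model(neural), target)
loss.backward()
opt.step()
print(f"training loss: {loss.item():.3f}")
```

The appeal of regressing to acoustic features, rather than to text, is that the output can carry the timbre of the patient's own reconstructed voice.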
Patrick Degenaar, a neuroprosthetics professor at the UK's Newcastle University not involved in the study, told AFP that this is "very early proof of principle" research.
But it is still "very cool", he added.
Degenaar pointed out that this system uses an array of electrodes that do not penetrate the brain, unlike the BCI used by billionaire Elon Musk's Neuralink firm.
The surgery to install such arrays is relatively common in hospitals for diagnosing epilepsy, which means the technology would be easier to roll out at scale, he added.
With proper funding, Anumanchipalli estimated the technology could be helping people communicate in five to 10 years.