Neuroprosthesis: ALS patient can communicate faster

With the help of a neuroprosthesis, an ALS patient can communicate much faster again.

ALS patient uses his neuro-speech prosthesis and eye-tracking to control a speech computer.

(Image: UC Davis Health)


Thanks to a neuro-speech prosthesis, a 45-year-old patient suffering from amyotrophic lateral sclerosis (ALS) can communicate fluently again. Before the neuroprosthesis was fitted, he typed words on a screen with a wireless gyroscopic head mouse (Quha Zono 2), reaching a speed of around 6 words per minute. Now he communicates about six times faster. This is the result of a study published by US researchers in the New England Journal of Medicine.

According to the study, the patient received a brain-computer interface (BCI) in July 2023. The interface, implanted by neurosurgeons at the University of California, Davis, records neural signals from speech-related regions of the cortex: from the inferior frontal gyrus within Broca's area, which is involved in speech production, and from the left precentral gyrus, the part of the cerebral cortex that drives the motor neurons responsible for movement. The electrical activity is picked up by four microelectrode arrays with 64 electrodes each and transmitted wirelessly to a computer, where, among other things, an artificial neural network decodes the recorded neural signals into an English phoneme every 80 ms.
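The decoding step described above can be sketched in a few lines of Python. Everything here is illustrative: the channel count follows the article's four 64-electrode arrays, but the random linear "decoder", the phoneme count, and the simulated features are stand-in assumptions; the study's actual decoder is a trained neural network operating on real intracortical recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four arrays x 64 electrodes = 256 feature channels,
# one feature vector per 80 ms window (sizes per the article).
N_CHANNELS = 4 * 64
N_PHONEMES = 40  # roughly the English phoneme inventory plus silence

# Stand-in "decoder": one random linear layer followed by softmax.
# The study's actual decoder is a trained neural network.
W = rng.normal(scale=0.1, size=(N_PHONEMES, N_CHANNELS))

def decode_window(features: np.ndarray) -> np.ndarray:
    """Map one 80 ms window of neural features to phoneme probabilities."""
    logits = W @ features
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

# Simulate roughly one second of data: 1000 ms / 80 ms ~ 12 windows.
for _ in range(12):
    window = rng.normal(size=N_CHANNELS)  # placeholder neural features
    probs = decode_window(window)
    assert abs(probs.sum() - 1.0) < 1e-9  # valid probability distribution
```

The 80 ms cadence matters for usability: it fixes the latency between neural activity and the phoneme stream that the downstream text-to-speech step consumes.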
Before the operation, an MRI scan and a database from the Human Connectome Project were used to determine where the electrodes should be placed on the participant's brain.

(Image: Brandman et al.)

A neural network converts the brain activity into phoneme probabilities, and the most probable utterance is selected and output. Text-to-speech software plays the sounds back in real time over a loudspeaker, using a synthetic voice modeled on the patient's own. This allows the patient to speak almost fluently, with only short delays between words.
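The step from per-window phoneme probabilities to "the most probable utterance" can be illustrated with a toy sketch. The tiny lexicon, the one-phoneme-per-window alignment, and the plain joint-probability scoring are simplifying assumptions made here for illustration; the actual system scores candidates over a vocabulary of up to 125,000 words.

```python
import numpy as np

# Hypothetical toy lexicon mapping words to phoneme-index sequences.
LEXICON = {
    "hi":  [0, 1],
    "hey": [0, 2],
    "ok":  [3, 4],
}

def most_probable_word(phoneme_probs: np.ndarray) -> str:
    """Pick the lexicon word whose phoneme sequence has the highest
    joint log-probability under the per-window distributions.

    phoneme_probs: shape (n_windows, n_phonemes), one softmax
    distribution per 80 ms window.
    """
    best_word, best_score = None, -np.inf
    for word, phones in LEXICON.items():
        if len(phones) != len(phoneme_probs):
            continue  # toy alignment: exactly one phoneme per window
        score = sum(np.log(phoneme_probs[t, p]) for t, p in enumerate(phones))
        if score > best_score:
            best_word, best_score = word, score
    return best_word
```

Summing log-probabilities instead of multiplying raw probabilities is the standard trick to avoid numerical underflow over long phoneme sequences.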

The neuroprosthesis uses brain activity to recognize when the participant is trying to speak. An utterance is ended after six seconds of inactivity or via eye tracking, after which the patient can select further menu items, for example to indicate whether the sentence is correct.

(Image: Brandman et al.)

The patient then has the opportunity to correct the statement using eye-tracking.


According to the researchers, the patient's learning curve was surprisingly steep. On the very first day of training – 25 days after the operation – the patient achieved 99.6 percent accuracy with a vocabulary of 50 words using the neuroprosthesis, and 90.2 percent accuracy with a vocabulary of 125,000 words. With further training data collected over 8.4 months after the operation, the neuroprosthesis reached an accuracy of 97.5 percent. In self-directed conversations, the patient achieved a rate of around 32 words per minute; for comparison, English speakers average about 160 words per minute, according to the researchers.

The transcript shows the first eight sentences produced by the patient (T15) when using the neuroprosthesis for personal communication.

(Image: Brandman et al.)

According to the researchers, the neuroprosthesis can significantly improve communication for people with paralysis by decoding the cognitive processes that occur when trying to speak and converting them into text. This happened not only at the word level, but also at the phoneme level.

For decades, scientists have been working on reading and translating brain activity in various regions in as much detail as possible. As early as 2021, researchers at the Cognitive Systems Lab (CSL) at the University of Bremen developed a neuro-speech prosthesis that converts imagined words into audible speech. In the study "Real-time synthesis of imagined speech processes from minimally invasive recordings of neural activity", published in Communications Biology, the researchers showed that the device can convert the brainwave signals of a person who is only imagining speaking into speech without any perceptible delay.

To demonstrate how the device works, the scientists implanted electrodes in the head of an epilepsy patient. She read texts aloud, from which the system learned the relationship between speech and neuronal activity using machine learning. The same process was repeated with whispered and imagined speech, which led to the same result. This leads the researchers to conclude that the brain processes audible, whispered and imagined speech in a similar way. The neuroprosthesis was developed as part of a collaboration in the research program "Multilateral Cooperation in Computational Neuroscience", funded by the German Federal Ministry of Education and Research (BMBF) and the US National Science Foundation.

(mack)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.