Stroke: Brain-computer interface to enable speech in real time
A brain-computer interface can translate a stroke patient's brain activity into speech with minimal delay. However, the error rate is high.

Brain-computer interfaces (BCIs) are intended to help people with speech disorders, for example after a stroke, regain their "voice". Researchers at the University of California, Berkeley say they have made significant progress in developing such technologies: their system can translate the brain activity in a stroke patient's speech-motor cortex into both speech and text in near real time.
Certain BCIs can already read the electrical signals for speech movements from the cerebral cortex and convert them into text or computer-generated speech. However, there are significant delays between mentally formulating a sentence and the device's actual output, because patients first have to formulate a complete sentence in their mind before the conversion can begin.
According to a report published on Monday in Nature Neuroscience, the Berkeley team has largely overcome this hurdle. In contrast to older BCIs, the new device processes brain activity continuously, i.e. while the sentence is still being formulated, and the transformation into text and speech takes place immediately. This represents a further step towards a clinically applicable technology, which the scientists are currently testing on three people. For now, however, they are presenting results from only one study participant, who lost her ability to speak following a stroke. She was implanted with a device with 253 electrode channels placed over the motor speech center, which records its activity as an electrocorticogram.
To train the computer model, the researchers first collected data via these recordings while the patient silently formulated predefined sentences drawn from a vocabulary of 1024 words. To evaluate the BCI, the participant then mentally formulated further predetermined sentences with the same vocabulary, which the trained model was to output as speech. For the known sentences, the device produced an output with a delay of just over one second, compared with the roughly 23 seconds that was previously the rule.
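The latency difference comes from when decoding starts: a sentence-level system waits for the entire signal before producing anything, while a streaming system decodes overlapping windows of the signal as they arrive. The toy sketch below illustrates only this scheduling idea with hypothetical helper names; it is not the authors' model, and the "decoder" here is a stand-in for the trained network.

```python
def sentence_decode(frames, decoder):
    # Sentence-level approach: wait until the whole sequence of
    # neural feature frames has arrived, then decode once.
    return decoder(frames)

def streaming_decode(frames, decoder, window=4, step=2):
    # Streaming approach: decode each overlapping window as soon as
    # it is complete, so the first output appears after `window`
    # frames instead of after the full sentence.
    outputs = []
    for start in range(0, len(frames) - window + 1, step):
        outputs.append(decoder(frames[start:start + window]))
    return outputs

# Toy usage with 8 dummy frames and a trivial "decoder" (sum):
frames = list(range(8))
print(sentence_decode(frames, sum))        # one result, at the end
print(streaming_decode(frames, sum))       # partial results as data arrives
```

The point of the sketch is the loop structure, not the decoder: partial outputs become available while the rest of the signal is still being recorded, which is what cuts the perceived delay.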
Implantation remains the greatest risk
Despite these improvements in latency, the authors report high error rates: 23.9 percent for text output and 45.3 percent for speech output. Further progress is therefore required before actual clinical application.
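Error rates like these are conventionally computed as a word error rate: the word-level edit distance (insertions, deletions, substitutions) between the decoded text and the intended sentence, divided by the length of the intended sentence. The report does not spell out its metric, so the following is a generic sketch of that standard calculation, not the study's evaluation code.

```python
def word_error_rate(reference, hypothesis):
    # Levenshtein distance over words, normalized by reference length.
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j          # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[-1][-1] / len(ref)

# One substituted word out of four gives a WER of 0.25:
print(word_error_rate("a b c d", "a x c d"))
```

A rate of 23.9 percent thus means that, on average, roughly one word in four of the decoded text had to be changed to match the intended sentence.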
The study documents "a very fast and fluent decoding of speech from brain activity with an average speed of 47 words per minute", Surjo Soekadar, a neurotechnologist at Charité Berlin, told the Science Media Center (SMC). The continuous online implementation is crucial, he said, but above all it is a "technical feasibility demonstration". The greatest obstacle remains the implantation itself, which entails risks such as bleeding or infection. It also remains unclear whether the clinical benefit justifies the procedure compared to non-invasive methods.
The team has tested the "streaming" of speech, explains Simon Jacob, a colleague of Soekadar's at TU Munich. The researchers used machine learning algorithms ("AI") trained on electrical signals for speech movements from the cerebral cortex. What is new is that the decoding algorithms "work eight times faster than before". However, many stroke patients have not only a speech disorder (impaired articulation) but also a language disorder (impaired word and sentence formation), so neuroprostheses of the type presented are likely to be an option for only a small number of patients. According to legal experts, brain implants also pose major ethical and legal problems in general.
(wpl)