Meta FAIR: Recognizing language in the brain thanks to AI
Meta recognizes entire sentences from brain activity. Instead of AGI, the FAIR team is striving for AMI – Advanced Machine Intelligence.
Yann LeCun at Meta in Paris.
(Image: heise online / emw)
Using non-invasive recordings, Meta aims to recognize what people intend to say. This is one of the achievements Meta reported on at the anniversary celebration of its FAIR team. FAIR stands for Fundamental AI Research and is headed by Yann LeCun, an award-winning AI expert. FAIR has been based in Paris for ten years now. This is where the first Llama model was created, as well as PyTorch, the machine learning library that has since been handed over entirely to the open source community.
Meta carried out the studies together with scientists from the Basque Center on Cognition, Brain and Language in Spain. The company presented two results. The first is the ability to read characters from brain recordings; Meta reports an accuracy of 80 percent. That still means only 4 out of 5 letters are correct, explains scientist Jean-Rémi King. In many cases, entire sentences could be reconstructed from the brain signals alone. The second study builds on this and aims to explain how AI can help to understand brain signals and convert them into a sequence of words and sentences.
Initial tests with healthy volunteers
Meta points to invasive procedures such as Elon Musk's Neuralink: these carry risks precisely because they are invasive, and they are also difficult to scale, Meta writes in a blog post. Non-invasive methods, on the other hand, have so far been too imprecise because the recorded signals are noisy.
For the study, the brain activity of healthy test subjects was recorded while they typed sentences. An AI model was then trained on this data. Both EEG and MEG signals, i.e. electrical and magnetic recordings, were used.
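Meta has not released the training code for this study. Purely as an illustration of the general setup described above, here is a minimal, hypothetical PyTorch sketch (PyTorch being Meta's own library, mentioned earlier) of a character decoder trained on multi-channel sensor windows. The channel count, window length, alphabet size, architecture and synthetic data are all assumptions, not Meta's actual pipeline.

```python
# Hypothetical sketch: decoding typed characters from multi-channel
# MEG/EEG windows. All shapes and data are illustrative assumptions.
import torch
import torch.nn as nn

N_CHANNELS = 208   # assumption: number of MEG/EEG sensors
N_SAMPLES = 250    # assumption: time samples per recording window
N_CLASSES = 29     # assumption: 26 letters plus space, comma, period

class BrainToCharDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutions pool information across sensors and time.
        self.features = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 128, kernel_size=7, stride=2, padding=3),
            nn.GELU(),
            nn.Conv1d(128, 128, kernel_size=7, stride=2, padding=3),
            nn.GELU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(128, N_CLASSES)

    def forward(self, x):  # x: (batch, channels, time)
        h = self.features(x).squeeze(-1)  # (batch, 128)
        return self.classifier(h)         # per-window character logits

model = BrainToCharDecoder()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic stand-in for recorded windows and the characters typed.
x = torch.randn(32, N_CHANNELS, N_SAMPLES)
y = torch.randint(0, N_CLASSES, (32,))

for step in range(3):  # deliberately short demo loop
    logits = model(x)
    loss = loss_fn(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.3f}")
```

A real system would plausibly add per-subject signal normalization and combine such per-character predictions with a language model to arrive at the sentence-level reconstructions the studies describe.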
But don't get too excited too soon: Meta itself says it will be a while before this method can be used in clinical practice. On the one hand, decoding is still imperfect; on the other, subjects have to sit in a magnetically shielded room and must not move while the signals are recorded.
Understanding how the brain's neural networks work
The second study presented concerns the neuronal mechanisms in the brain. "Our study shows that the brain generates a sequence of representations, starting from the most abstract level – the meaning of a sentence – and gradually transforming it into a variety of actions, such as the actual finger movement on the keyboard." The brain uses a "dynamic neural code": there is no single fixed network; the active network keeps changing.
Despite all the achievements of recent years and the hype surrounding large language models, Meta says that cracking the neural code of language is one of the biggest challenges facing AI and neuroscience. "Understanding the neural architecture and its computational principles is therefore an important way to develop AMI," says Meta.
AMI, Advanced Machine Intelligence, is a term that has so far been heard exclusively from Meta. It was coined by Meta's chief AI scientist Yann LeCun. He recently told the World Economic Forum in Davos that he does not believe that large language models (LLMs) as we currently know them will prevail in the long term; he expects a new paradigm instead. Apparently, he calls it AMI.
LeCun says that LLMs run into too many limits when it comes to further development, let alone AGI. For example, there is not enough data, and the world cannot be captured in text alone, so video and other modalities would have to be incorporated in order to develop a really smart AI. What is needed, he argues, is a representation of the physical world and ultimately a world model in which machines can learn the processes of the real world – including memory, intuition and logical reasoning.
Meta as an open source advocate
LeCun is also an advocate of the open source idea: working together makes it easier to develop something better. He also sees what DeepSeek has achieved as a victory for open systems over closed ones. The Chinese company made use of Meta's freely available AI model Llama and built something new on top of it, he writes on LinkedIn, for example. However, parts of the open source community argue that Meta's own AI models are not open enough to be called open source.
Although Meta's CEO Mark Zuckerberg has said several times that he believes it is right to make AI available as freely as possible, he also says that Meta can only do this because it earns money with other services. He also expects an ecosystem to develop around Llama, from which Meta will in turn benefit. OpenAI, by comparison, is completely closed. Google offers both open models (Gemma) and closed models (Gemini) for further use.
Meta's openness also means that DINOv2, for example, can be used and further developed in areas such as medicine. DINO stands for Self-Distillation with no Labels. The model can classify and segment images, which makes it particularly suitable for detecting irregularities. BrightHeart, for example, is a French company that uses it to detect defects in fetal hearts. The latest results from this area were also presented.
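Since DINOv2 is openly released, reuse of the kind described here typically means taking the pretrained model as a frozen feature extractor and training a small task-specific head on top. Below is a minimal sketch using the publicly documented torch.hub entry point from Meta's repository; the two-class "irregularity" head and the dummy images are placeholders, not BrightHeart's actual system.

```python
# Sketch: the openly released DINOv2 as a frozen feature extractor
# for a downstream classifier. Task and data below are placeholders.
import torch
import torch.nn as nn

# Load the small DINOv2 ViT from Meta's public repository
# (downloads the weights on first use).
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False  # keep the self-supervised features frozen

# Linear head on top of the 384-dim ViT-S/14 embedding; the two
# classes are a stand-in for e.g. "normal" vs. "irregular" findings.
head = nn.Linear(384, 2)

# Dummy batch: image sides must be multiples of the 14-pixel patch size.
images = torch.randn(4, 3, 224, 224)

with torch.no_grad():
    features = backbone(images)  # (4, 384) CLS-token embeddings

logits = head(features)
print(logits.shape)  # torch.Size([4, 2])
```

Freezing the backbone and training only the head is the usual low-cost way to adapt such a self-supervised model to a specialized domain where labeled data is scarce.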
Transparency note: The author was invited by Meta to the Meta FAIR anniversary celebration in Paris. Meta covered the travel costs. Meta made no stipulations regarding the nature and scope of the reporting.
(emw)