Meta FAIR: AI twin for human neurons

The TRImodal Brain Encoder has been released in a second version. With it, Meta FAIR can now predict how the human brain reacts.


(Image: cono0430/Shutterstock.com)


How does the brain react to external stimuli? Scientists in Meta's Paris FAIR team are working on this question. They have now released TRIBE v2, a model that can predict how the human brain reacts to images, videos, podcasts, and texts. This enables follow-up research to build on the model without requiring human test subjects, at least as a first step.

TRIBE stands for TRImodal Brain Encoder. Meta also describes the model as a "digital twin of human neural activity." It was trained on data from more than 700 individuals, totaling over 1,115 hours of material. TRIBE v2 learns in much the same way as common AI foundation models, but what it models are probabilities of how the human brain responds. That activity is captured using imaging techniques that visualize blood flow and oxygen levels in different brain areas, specifically functional magnetic resonance imaging (fMRI).

TRIBE is not an entirely new model; instead, Meta's existing models V-Jepa2, W2vec-Bert, and Llama 3.2 are combined and further trained. V-Jepa2 is responsible for processing video, W2vec-Bert for audio, and Llama 3.2 handles text. The "TRI" in the name refers to these three models and modalities.
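The division of labor between the three encoders can be illustrated with a minimal sketch: embeddings from a video, an audio, and a text model are fused and mapped to per-voxel fMRI responses by a linear readout. All dimensions, the random stand-in features, and the readout itself are illustrative assumptions, not the actual TRIBE architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for embeddings produced by the three pretrained encoders
# (in TRIBE: V-Jepa2 for video, W2vec-Bert for audio, Llama 3.2 for text).
# The dimensions here are arbitrary illustrative choices.
video_feat = rng.standard_normal(256)
audio_feat = rng.standard_normal(128)
text_feat = rng.standard_normal(512)

# Fuse the three modalities by simple concatenation -> shape (896,)
fused = np.concatenate([video_feat, audio_feat, text_feat])

# Hypothetical linear readout predicting activity for 1000 brain voxels;
# in practice the mapping would be learned from fMRI recordings.
n_voxels = 1000
W = rng.standard_normal((n_voxels, fused.shape[0])) * 0.01
b = np.zeros(n_voxels)

predicted_bold = W @ fused + b  # one predicted fMRI value per voxel
print(predicted_bold.shape)  # (1000,)
```

The point of the sketch is only the shape of the problem: three modality-specific feature streams go in, and a whole-brain response vector comes out of a single shared head.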

According to the paper, the model's approach is particularly exciting because it looks at the entire brain simultaneously, rather than examining individual areas. The Meta-FAIR team has succeeded in providing a single architecture for a wide range of fMRI responses.

Furthermore, the model generalizes better. Meta states that TRIBE v2 is "particularly fast, particularly accurate, and has a 70-fold higher resolution in simulating brain activity" compared to its predecessor, TRIBE v1. The first version already took first place in Algonauts 2025, a competition run by the Algonauts Project. The simulation is intended to enable further research that initially does not require test subjects, since the AI model can be used as a first step.


As is customary for Meta's FAIR team, the model, the codebase, the research paper, and a demo are being released under a CC BY-NC license. Researchers can therefore access them already.

(emw)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.