Meta might've done something useful, pioneering an AI model that can translate brain activity into sentences with 80% accuracy


Depending on what areas of the internet you frequent, perhaps you were under the illusion that thoughts-to-text technology already existed; we all have that one mutual or online friend that we gently hope will perhaps one day post slightly less. Well, recently Meta has announced that a number of their research projects are coming together to form something that might even improve real people's lives—one day. Maybe!

Way back in 2017, Meta (at that time just called 'Facebook') talked a big game about “typing by brain.” Fast forward to now and Meta has shared news of two breakthroughs that make those earlier claims seem more substantial than a big sci-fi thought bubble (via MIT Technology Review). Firstly, Meta announced research that has created an AI model which "successfully decodes the production of sentences from non-invasive brain recordings, accurately decoding up to 80% of characters, and thus often reconstructing full sentences solely from brain signals."

The second study Meta shared then examines how AI can facilitate a better understanding of how our brains slot the Lego bricks of language into place. For people who have lost the ability to speak after traumatic brain injuries, or who otherwise have complex communication needs, all of this scientific research could be genuinely life-changing. Unfortunately, this is where I burst the bubble: the 'non-invasive' device Meta used to record brain signals so that they could be decoded into text is huge, costs $2 million, and makes you look a bit like Megamind.

Dated reference to an animated superhero flick for children aside, Meta has been all about brain-computer interfaces for years. More recently they've even demonstrated a welcome amount of caution when it comes to the intersection of hard and 'wet' ware.

This time, the Meta Fundamental Artificial Intelligence Research (FAIR) lab collaborated with the Basque Center on Cognition, Brain and Language, to record the brain signals of 35 healthy volunteers as they typed. Those brain signals were recorded using the aforementioned, hefty headgear—specifically a MEG scanner—and then interpreted by a purposefully trained deep neural network.

Meta wrote, "On new sentences, our AI model decodes up to 80% of the characters typed by the participants recorded with MEG, at least twice better than what can be obtained with the classic EEG system."

This essentially means that recording the magnetic fields produced by the electrical currents within the participants' brains resulted in data the AI could more accurately interpret, compared to just recording the electrical activity itself via an EEG. However, by Meta's own admission, this does not leave the research in the most practical of places.
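For a rough sense of what "decodes up to 80% of the characters" means in practice, here's an illustrative sketch of a simple position-wise character-accuracy metric. This is not Meta's actual evaluation code (real benchmarks typically use an edit-distance-based character error rate); it just shows how a few wrong characters in an otherwise-correct sentence still yield a high score:

```python
def character_accuracy(decoded: str, target: str) -> float:
    """Fraction of positions where the decoded character matches the target.

    A simplistic position-wise metric for illustration; published work
    usually reports character error rate based on edit distance instead.
    """
    if not target:
        return 0.0
    # Count matching characters at each aligned position
    matches = sum(d == t for d, t in zip(decoded, target))
    # Divide by the longer string so extra or missing characters count against us
    return matches / max(len(decoded), len(target))

# One wrong character out of nineteen still scores ~95%
print(round(character_accuracy("the quick brown fix", "the quick brown fox"), 2))
```

So a model can hit 80% character accuracy while still garbling the occasional word, which is why "often reconstructing full sentences" is doing some careful work in Meta's phrasing.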

For one, MEG scanners are far from helmets you can just pop on and off: they're specialised pieces of equipment that require patients to sit still in a shielded room. Besides that, this study used a comparatively tiny sample of participants, none of whom had a known traumatic brain injury or speech difficulties. This means it's yet to be seen just how well Meta's AI model can interpret for those who really need it.

Still, as a linguistics dropout myself, I'm intrigued by Meta's findings when it comes to how we string sentences together in the first place. Meta begins by explaining, "Studying the brain during speech has always proved extremely challenging for neuroscience, in part because of a simple technical problem: moving the mouth and tongue heavily corrupts neuroimaging signals." In light of this practical reality, typing instead of speaking is kind of genius.

So, what did Meta find? It's exactly like I said before: Linguistic Lego bricks, baby. Okay, that's an oversimplification, so I'll quote Meta directly once more: "Our study shows that the brain generates a sequence of representations that start from the most abstract level of representations—the meaning of a sentence—and progressively transform them into a myriad of actions, such as the actual finger movement on the keyboard [...] Our results show that the brain uses a ‘dynamic neural code’—a special neural mechanism that chains successive representations while maintaining each of them over long time periods."

To put it another way, your brain starts with vibes, unearths meaning, daisy-chains those Lego bricks together, then transforms the thought into the action of typing… yeah, I would love to see the AI try to interpret the magnetic fields that led to that sentence too.

Jess Kinghorn
Hardware Writer

Jess has been writing about games for over ten years, spending the last seven working on print publications PLAY and Official PlayStation Magazine. When she’s not writing about all things hardware here, she’s getting cosy with a horror classic, ranting about a cult hit to a captive audience, or tinkering with some tabletop nonsense.
