This page was created programmatically; to read the article in its original location you can visit the link below:
https://www.scientificamerican.com/article/ai-decodes-visual-brain-activity-and-writes-captions-for-it/
If you wish to remove this article from our site, please contact us.
November 6, 2025
3 min read
AI Decodes Visual Brain Activity—and Writes Captions for It
A non-invasive imaging technique can translate the scenes in your head into sentences. It could help to reveal how the brain interprets the world.
Functional magnetic resonance imaging is a non-invasive way to explore brain activity.
PBH Images/Alamy Stock Photo
Reading a person’s mind from a recording of their brain activity sounds futuristic, but it’s now one step closer to reality. A new technique called ‘mind captioning’ generates descriptive sentences of what a person is seeing or picturing in their mind from a read-out of their brain activity, with impressive accuracy.
The technique, described in a paper published today in Science Advances, also offers clues about how the brain represents the world before thoughts are put into words. And it might be able to help people with language difficulties, such as those caused by strokes, to communicate better.
The model predicts what a person is seeing “with a lot of detail”, says Alex Huth, a computational neuroscientist at the University of California, Berkeley. “This is hard to do. It’s surprising you can get that much detail.”
Researchers have been able to accurately predict what a person is seeing or hearing from their brain activity for more than a decade. But decoding the brain’s interpretation of complex content, such as short videos or abstract shapes, has proved harder.
Previous attempts have identified only keywords that describe what a person saw, rather than the whole context, which might include the subject of a video and the actions that occur in it, says Tomoyasu Horikawa, a computational neuroscientist at NTT Communication Science Laboratories in Kanagawa, Japan. Other attempts have used artificial-intelligence (AI) models that can create sentence structure themselves, making it difficult to know whether the description was actually represented in the brain, he adds.
Horikawa’s method first used a deep-language AI model to analyse the text captions of more than 2,000 videos, turning each one into a unique numerical ‘meaning signature’. A separate AI tool was then trained on six participants’ brain scans, learning to find the brain-activity patterns that matched each meaning signature while the participants watched the videos.
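The two-stage setup can be sketched in a few lines. The sketch below uses purely synthetic data: random vectors stand in for the language model’s caption embeddings, and a noisy linear mixture stands in for fMRI activity. The ridge-regression decoder is an illustrative choice, not necessarily the exact model used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_videos, n_voxels, emb_dim = 200, 500, 64

# Stage 1: each video caption becomes a numerical "meaning signature".
# Random vectors here stand in for deep-language-model embeddings.
signatures = rng.standard_normal((n_videos, emb_dim))

# Simulated brain activity: a noisy linear image of the signatures.
true_map = rng.standard_normal((emb_dim, n_voxels))
brain = signatures @ true_map + 0.1 * rng.standard_normal((n_videos, n_voxels))

# Stage 2: ridge regression mapping brain activity back to meaning signatures.
lam = 1.0
W = np.linalg.solve(brain.T @ brain + lam * np.eye(n_voxels), brain.T @ signatures)
decoded = brain @ W

# On this toy data the decoded signatures track the originals closely.
corr = np.mean([np.corrcoef(decoded[i], signatures[i])[0, 1]
                for i in range(n_videos)])
```

On real fMRI data the mapping is far noisier and needs careful cross-validated regularization, but the structure, embed the captions, then regress brain activity onto the embeddings, is the same.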
Once trained, this brain decoder could take a new brain scan from a person watching a video and predict the meaning signature. A separate AI text generator would then search for the sentence that comes closest to the meaning signature decoded from the person’s brain.
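The final search step amounts to picking the candidate sentence whose embedding lies nearest the decoded signature. A minimal sketch, assuming cosine similarity as the distance measure; the `toy_embed` function is a deterministic stand-in for a real sentence-embedding model, not the model used in the study:

```python
import zlib
import numpy as np

def toy_embed(sentence: str, dim: int = 64) -> np.ndarray:
    """Deterministic stand-in for a language-model sentence embedding."""
    rng = np.random.default_rng(zlib.crc32(sentence.encode("utf-8")))
    return rng.standard_normal(dim)

def closest_sentence(signature: np.ndarray, candidates: list[str]) -> str:
    """Return the candidate whose embedding is most cosine-similar to the signature."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(candidates, key=lambda s: cos(toy_embed(s), signature))

candidates = [
    "a person jumps over a deep waterfall on a mountain ridge",
    "a dog runs across a grassy field",
    "two people talk at a kitchen table",
]

# Pretend the decoder recovered the first sentence's signature, plus noise.
target = toy_embed(candidates[0]) + 0.1 * np.random.default_rng(1).standard_normal(64)
best = closest_sentence(target, candidates)
```

In the actual method the generator proposes and refines candidate word strings iteratively rather than ranking a fixed list, which is what produces the progressively better guesses described below.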
For example, one participant watched a short video of a person jumping from the top of a waterfall. From their brain activity, the AI model guessed strings of words, starting with “spring flow”, progressing to “above rapid falling water fall” by the tenth guess and arriving at “a person jumps over a deep water fall on a mountain ridge” by the one-hundredth guess.
The researchers also asked participants to recall video clips they had seen. The AI models successfully generated descriptions of these memories, suggesting that the brain uses a similar representation for both viewing and remembering.
This technique, which uses non-invasive functional magnetic resonance imaging, could help to improve the process by which implanted brain–computer interfaces translate people’s non-verbal mental representations directly into text. “If we can do that using these artificial systems, maybe we can help out these people with communication difficulties,” says Huth, who with his colleagues developed a similar model in 2023 that decodes language from non-invasive brain recordings.
These findings raise concerns about mental privacy, Huth says, as researchers come closer to revealing intimate thoughts, emotions and health conditions that could, in principle, be used for surveillance, manipulation or discrimination. Neither Huth’s model nor Horikawa’s crosses that line, they both say, because these methods require participants’ consent and the models cannot discern private thoughts. “Nobody has shown you can do that, yet,” says Huth.
This article is reproduced with permission and was first published on November 5, 2025.