AI Model Reads Minds
- Researchers at The University of Texas at Austin have developed a noninvasive AI model, a semantic decoder, that translates brain activity into text, potentially aiding those unable to speak due to conditions like stroke.
- The AI system captures the general idea of individuals’ thoughts as they listen to a story or imagine telling one; about half the time, the machine-generated text closely or precisely mirrors the intended meaning.
Scientists have developed an artificial intelligence system that can interpret brain activity scans to decode people’s thoughts, a breakthrough that could help individuals left unable to speak by conditions such as stroke.
The new AI model, developed by researchers at The University of Texas at Austin, represents a “real leap forward” in enhancing communication for those who are mentally alert but physically unable to speak. The findings of this pioneering study were published in the journal Nature Neuroscience on Monday.
The AI system, known as a semantic decoder, can translate a person’s brain activity into text while they listen to a story or imagine telling one. The tool builds on language models similar to those behind well-known AI chatbots like OpenAI’s ChatGPT and Google’s Bard, and it conveys the gist of a person’s thoughts by analyzing their brain activity.
Unlike prior attempts to read minds with AI, this system requires no surgical implants, making it noninvasive. The decoder is trained on fMRI scans of a person’s brain activity recorded while they listen to hours of podcasts. Once trained, it can translate brain activity into text as the participant listens to a new story or imagines telling one.
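The article doesn’t give implementation details, but its description suggests a two-stage design: an encoding model learned from the podcast-listening scans, paired with a chatbot-style language model that proposes candidate wordings, which are then scored against the recorded brain activity. The sketch below illustrates that general idea in Python; `propose` (a language model’s next-word suggestions) and `embed` (a mapping from word sequences to semantic features) are assumed placeholders, not the study’s actual code.

```python
import numpy as np

def fit_encoding_model(semantic_features, bold_responses, alpha=1.0):
    """Ridge regression mapping semantic features of heard words (F)
    to recorded fMRI responses (Y): learns weights W with F @ W ~ Y.
    This is one plausible way to model the 'training on podcasts' step."""
    F, Y = semantic_features, bold_responses
    W = np.linalg.solve(F.T @ F + alpha * np.eye(F.shape[1]), F.T @ Y)
    return W

def decode(observed_bold, W, propose, embed, beam_width=10, n_steps=20):
    """Beam search over word sequences: a language model proposes
    continuations, and each candidate is kept or dropped according to
    how well the encoding model's predicted response matches the fMRI
    data actually recorded from the participant.
    `propose` and `embed` are hypothetical callables supplied by the user."""
    beams = [[]]  # start from an empty word sequence
    for _ in range(n_steps):
        # Language model expands each beam with plausible next words.
        candidates = [words + [w] for words in beams for w in propose(words)]
        # Score each candidate by encoding-model fit to the recorded data
        # (negative squared error: higher is better).
        scored = sorted(
            candidates,
            key=lambda seq: -np.sum((observed_bold - embed(seq) @ W) ** 2),
            reverse=True,
        )
        beams = scored[:beam_width]
    return beams[0]  # best-scoring paraphrase, not a verbatim transcript
```

Returning a paraphrase rather than a transcript is consistent with how the researchers describe the output: the decoder recovers the gist of what was heard or imagined, not the exact words.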
Dr. Alex Huth, a co-author of the study, highlighted the significance of the breakthrough: “For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences.” He added that the model can decode continuous language for extended periods, capturing complex ideas.

The model doesn’t provide a verbatim transcript of thoughts, but rather captures the general idea of what is being said or thought. About 50% of the time, the machine-generated text closely or precisely mirrors the original intended meaning.
Despite these promising developments, the research team acknowledges potential concerns about misuse of the technology, particularly by authoritarian regimes for surveillance. However, the scientists emphasize that the decoder functions effectively only with cooperative participants who willingly engage in extensive training. For untrained individuals, the results are “unintelligible.”
The authors also stressed their commitment to ensuring ethical use of the technology. “We want to make sure people only use these types of technologies when they want to and that it helps them,” said Jerry Tang, another author of the study.
The model is not yet ready for practical use outside the lab, as it currently requires an fMRI machine. However, the team believes that as AI technology progresses, it will be crucial to proactively establish policies that safeguard individuals and their privacy. “Regulating what these devices can be used for is also very important,” Dr. Tang concluded.