#A.I.

Image credit: Pablo Ávalos Prado

Pablo Ávalos Prado

Neuroscientist & Medical Writer

May 3, 2023

Reading your mind with artificial intelligence

A group of researchers has developed an artificial intelligence (A.I.) language decoder that translates subjects' brain activity into accurate transcripts of their thoughts.


There is no doubt that 2023 has become the year of artificial intelligence (A.I.). With this technology now accessible to everyone, amateur users like you and me can achieve incredible things simply by typing what we wish to create: from realistic (fake) pictures of Donald Trump being arrested by the police to the script of a whole new movie, generated in seconds.



In this context, current A.I. language models like OpenAI's GPT-4 are trained on vast amounts of text to predict the next word given a question or a statement. In the process, these models build maps of how words relate to one another, which is why they can even serve as interactive tools to "speak with" (like ChatGPT). Recently, a group of researchers from the University of Texas at Austin went a step further by developing a "language decoder" that reads subjects' brain activity and predicts their thoughts with high accuracy.
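To make next-word prediction concrete, here is a minimal sketch using the small, openly available GPT-2 model via the Hugging Face transformers library. This is my choice for illustration only: the study itself used the original GPT-1, and GPT-4 is not openly available. The prompt text is an arbitrary example.

```python
# Minimal sketch of next-word prediction with an openly available
# language model (GPT-2 via Hugging Face transformers).
# Illustrative only; the decoder study used the original GPT-1.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The researchers scanned the participants' brains while they"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# The logits at the last position score every token in the vocabulary
# as a possible continuation; the top entries are the model's guesses.
next_token_logits = logits[0, -1]
top5 = torch.topk(next_token_logits, 5).indices
print([tokenizer.decode(t) for t in top5])
```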



Brain activity can be recorded with functional magnetic resonance imaging (fMRI), a technique that measures blood-oxygen-level signals and makes it possible to quantify the activity of specific parts of the brain with great precision. The participants of the study listened to 16 hours of narrative podcasts over several days while their brain activity was scanned by fMRI. The signals extracted from brain areas involved in speech were then linked to a generative language model (the original Generative Pre-trained Transformer, or GPT-1, trained on over 200 million words) that proposed candidate word sequences and assigned each one a predicted pattern of brain activity. In other words, the language decoder was trained to match brain responses to the podcast stories with candidate sentences, so that it could later predict imagined speech from new fMRI recordings.
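Conceptually, the decoding step works like a guided guessing game: a language model proposes continuations, an encoding model predicts the fMRI response each continuation should evoke, and the decoder keeps the candidates whose predictions best match the recorded signal. The sketch below is a simplified, hypothetical rendering of that loop; the names `propose_continuations` and `encoding_model`, and the data shapes, are illustrative assumptions, not the authors' code.

```python
import numpy as np

def decode_scan(recorded_fmri, propose_continuations, encoding_model,
                beam_width=5, n_steps=20):
    """Toy beam-search decoder: keep the word sequences whose *predicted*
    brain responses best match the recorded fMRI signal.

    recorded_fmri         -- array of shape (timepoints, voxels)
    propose_continuations -- fn(sequence) -> candidate next words
                             (e.g. drawn from a language model like GPT)
    encoding_model        -- fn(sequence) -> predicted (timepoints, voxels),
                             assumed here to match recorded_fmri's shape
    All three inputs are stand-ins for the study's actual components.
    """
    beam = [[]]  # start from an empty word sequence
    for _ in range(n_steps):
        candidates = []
        for seq in beam:
            for word in propose_continuations(seq):
                extended = seq + [word]
                predicted = encoding_model(extended)
                # Score: correlation between predicted and recorded signal
                score = np.corrcoef(predicted.ravel(),
                                    recorded_fmri.ravel())[0, 1]
                candidates.append((score, extended))
        # Keep only the best-matching sequences for the next round
        candidates.sort(key=lambda c: c[0], reverse=True)
        beam = [seq for _, seq in candidates[:beam_width]]
    return " ".join(beam[0])
```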



The results showed that the decoded word sequences captured not only the meaning of the words but often even exact words and phrases, demonstrating that highly accurate semantic information could be recovered from the fMRI signals. In fact, once the language decoder was trained, the participants were asked to imagine telling a short story during an fMRI scan and afterwards to repeat the same story aloud. Across the stories, the analysis revealed that the decoder could recover the meaning of the imagined stimuli. Here is an example of the original transcript of one of the subjects and what the decoder predicted after analyzing the brain signals:



Unspoken story: "coming down a hill at me on a skateboard and he was going really fast and he stopped just in time".


Predicted story: “he couldn't get to me fast enough he drove straight up into my lane and tried to ram me”.
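The two transcripts share almost no exact words, yet they convey a related gist of someone rushing toward the narrator. One simple way to quantify that kind of meaning-level overlap is to compare sentence embeddings; the snippet below, using the sentence-transformers library, is an illustrative stand-in rather than the paper's actual evaluation pipeline.

```python
# Illustrative gist-similarity check between a reference transcript and
# a decoded one, using sentence embeddings. A stand-in for the study's
# own evaluation metrics, not a reproduction of them.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

reference = ("coming down a hill at me on a skateboard and he was going "
             "really fast and he stopped just in time")
decoded = ("he couldn't get to me fast enough he drove straight up into "
           "my lane and tried to ram me")

emb_ref, emb_dec = model.encode([reference, decoded], convert_to_tensor=True)
similarity = util.cos_sim(emb_ref, emb_dec).item()
print(f"embedding cosine similarity: {similarity:.2f}")
```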



In addition, the language decoder succeeded in reconstructing language descriptions from brain responses to non-linguistic tasks: when the participants watched short silent movies while being scanned with fMRI, the decoder turned the recorded brain responses into reliable descriptions that matched the scenes' plots.



This is a great improvement over current language decoders, which are less accurate and require the invasive implantation of devices. However, fMRI is expensive, and the researchers failed when they tried to use a decoder trained on one person to read the brain activity of another, because every brain has unique ways of representing meaning. The authors of this work hope that advances of this kind will one day help people who have lost the ability to speak to communicate through similar brain-interface decoders. They also raise awareness of the risk that brain-decoding technology could be used for malicious purposes. Quoting the late physicist Stephen Hawking, himself a user of assistive communication technology: "The rise of powerful A.I. will be either the best or the worst thing ever to happen to humanity".


Original article:

Tang J, LeBel A, Jain S, Huth AG. Semantic reconstruction of continuous language from non-invasive brain recordings. Nat Neurosci. 2023 May 1. doi:10.1038/s41593-023-01304-9
