This AI Can Translate Your Thoughts Into Text

April 20, 2020 - 7-minute read

Reading minds sounds like a concept from a sci-fi novel, but it’s closer than ever before to becoming a reality. Researchers have developed artificial intelligence (AI) that can turn a person’s thoughts into text by analyzing their brain activity.

This could be a game-changer in more than one way. Refining and releasing a technology like this for public use would greatly benefit the thousands of people who have lost their ability to speak. It could also bring us closer to typing out words just by thinking.

Decoding Speech From Brain Signals

Deciphering components of speech from brain activity isn’t exactly new; researchers have been able to do it for roughly a decade. But so far, those efforts haven’t produced reliable, practical ways of translating brain activity into intelligible sentences. Last year, a group of scientists used brain signals to animate a simulated vocal tract, but only 70% of its words were comprehensible.

At the University of California, San Francisco, researchers recently demonstrated that their AI system could translate brain signals into complete sentences with accuracy rates as high as 97% — above the professional speech transcription threshold.

The key to this endeavor’s markedly improved performance? Strong parallels between translating brain signals to text and machine translation between languages, a task that neural networks now handle with high accuracy for numerous language pairs. Drawing inspiration from the latter, the research team took a novel approach to the problem and achieved unprecedented results.

Per a recent paper published in Nature Neuroscience detailing the research, “Taking a cue from recent advances in machine translation, we trained a recurrent neural network to encode each sentence-length sequence of neural activity into an abstract representation, and then to decode this representation, word by word, into an English sentence.”

Tapping Into the Power of Context

The vast majority of efforts in decoding neural activity have concentrated on identifying brain signals corresponding to specific phonemes, the distinct units of sound that make up words. Instead, the UC San Francisco team decided to mimic machine translation and decode the entire sentence at once. This strategy has proven profoundly powerful: because certain words are more likely to appear together, the AI system can leverage context to fill in any gaps.
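
To make that idea concrete, here’s a toy sketch of how word co-occurrence can narrow down an uncertain word: a tiny bigram model built from a made-up mini-corpus (two of the example sentences mentioned below plus one invented one). It has nothing to do with the study’s actual decoder; it only illustrates why context helps.

```python
# Toy illustration of "context fills in the gaps": count which word follows
# which, then use those counts to guess a likely next word. Purely illustrative.
from collections import Counter

corpus = [
    "tina turner is a pop singer",
    "those thieves stole thirty jewels",
    "the woman is holding a broom",   # invented sentence, just to pad the toy corpus
]

# Count bigrams: how often each word is followed by each other word.
bigrams = Counter()
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[(prev, nxt)] += 1

def most_likely_next(prev_word):
    """Return the word that most often follows prev_word in the corpus."""
    candidates = {nxt: count for (prev, nxt), count in bigrams.items() if prev == prev_word}
    return max(candidates, key=candidates.get) if candidates else None

print(most_likely_next("pop"))  # -> "singer"
```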

Machine translation usually relies on an encoder-decoder approach. Essentially, one neural network analyzes an input sequence (typically text in the source language) to build a representation of the data; a second neural network then translates that representation into another language. The researchers employed the same procedure, but with brain signals as the input instead of text.
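
As a rough picture of what such an encoder-decoder looks like in code, here is a minimal PyTorch-style sketch. The layer choices, sizes, and names are illustrative assumptions, not the authors’ actual architecture: one GRU compresses a sentence’s worth of neural features into a summary vector, and a second GRU unrolls that summary into a word sequence.

```python
# Minimal encoder-decoder sketch: brain-signal features in, word sequence out.
# All sizes and layer choices here are illustrative assumptions.
import torch
import torch.nn as nn

class BrainToTextDecoder(nn.Module):
    def __init__(self, n_features=256, hidden=400, vocab_size=250):
        super().__init__()
        # Encoder: reads the time series of neural features for one sentence
        # and compresses it into a single hidden-state "summary".
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        # Decoder: conditioned on that summary, emits one word at a time.
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, neural_seq, target_words):
        # neural_seq: (batch, time_steps, n_features) brain-signal features
        # target_words: (batch, sentence_len) word indices (teacher forcing)
        _, summary = self.encoder(neural_seq)      # summary: (1, batch, hidden)
        emb = self.embed(target_words)             # (batch, sentence_len, hidden)
        dec_out, _ = self.decoder(emb, summary)    # decoder starts from the summary
        return self.out(dec_out)                   # (batch, sentence_len, vocab_size)

# Toy forward pass on random data, just to show the shapes involved.
model = BrainToTextDecoder()
signals = torch.randn(2, 100, 256)       # 2 sentences, 100 time steps, 256 features
words = torch.randint(0, 250, (2, 8))    # 2 target sentences of 8 words each
print(model(signals, words).shape)       # torch.Size([2, 8, 250])
```

In the study, decoders of this general kind were trained per participant on their recorded sentence readings; the sketch only shows the overall shape of the mapping from a signal sequence to a word sequence.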

To train their system, the research team recruited four women who had electrodes implanted in their brains to monitor epileptic seizures. These participants read aloud a set of 50 sentences multiple times. Examples include “Those thieves stole 30 jewels,” and “Tina Turner is a pop singer.” Altogether, the set contained 250 unique words.

While the participants read these sentences, the researchers tracked their neural activity. Afterward, they fed this data to their machine learning algorithm, which converted the information for each spoken sentence into a string of numbers.

In testing, the system relied solely on the participants’ neural signals, not their spoken words. For two of the four subjects, it achieved word error rates below 8%, which matches the accuracy generally expected of professional transcription services.
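
The metric behind figures like that is word error rate: the number of word substitutions, insertions, and deletions needed to turn the predicted sentence into the spoken one, divided by the length of the spoken sentence. A small self-contained sketch, with made-up sentences:

```python
# Word error rate (WER): word-level edit distance divided by the length of
# the reference sentence. The sentences below are made up for illustration.
def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("those thieves stole thirty jewels",
                      "those thieves stole thirty jewel"))  # 0.2, i.e. 20% WER
```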

Some Caveats to Be Aware Of

Like any lab experiment with potential real-world applications, this one comes with conditions worth noting. The system could only decode 30 to 50 specific sentences, and its 250-word vocabulary is quite limited. It also only works for people who have electrodes implanted in their brains, something that’s permitted for only a few medical reasons.

One main concern during testing was that the AI was only evaluated on sentences included in its training data. It might therefore simply be learning to match particular neural signatures to specific sentences, rather than learning the constituent elements of speech, which would limit its ability to handle unfamiliar sentences.

To find out, the team added another set of recordings, covering sentences that were not used in testing, to the training data. This significantly reduced error rates, suggesting that the system was indeed learning sub-sentence information rather than simply memorizing whole sentences.

The researchers also uncovered another interesting insight: pre-training the system on data from the participant with the highest accuracy, before training it on data from one of the worst performers, also reduced error rates substantially. For practical applications, this means most of the training could be completed before the system ever reaches the end user; after that, only fine-tuning to the user’s particular brain signals would be needed.
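
Here’s a minimal sketch of that pre-train-then-fine-tune pattern, with a toy model and random stand-in data rather than anything from the study: train at length on one participant’s data first, then adapt briefly to a new user.

```python
# Toy pre-train / fine-tune loop. Model, data, and hyperparameters are
# stand-ins chosen for illustration only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(256, 400), nn.ReLU(), nn.Linear(400, 250))
loss_fn = nn.CrossEntropyLoss()

def train(model, features, labels, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(features), labels)
        loss.backward()
        opt.step()

# Stage 1: "pre-train" on plenty of (fake) data from the best-performing participant.
train(model, torch.randn(500, 256), torch.randint(0, 250, (500,)), epochs=20, lr=1e-3)

# Stage 2: fine-tune on much less data from a new user, with a smaller learning
# rate, so the model adapts to their brain signals without starting from scratch.
train(model, torch.randn(50, 256), torch.randint(0, 250, (50,)), epochs=5, lr=1e-4)
```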

It’s not yet clear how well the system will scale. Because the decoder relies on learning and exploiting sentence structure to improve its predictions, each new word increases the number of possible sentences, which could reduce accuracy. The average English speaker’s lexicon is about 20,000 words, so the system is still a long way from handling everyday speech.

Still, creative solutions could clear this potential roadblock. And even a small palette of 250 words can be incredibly useful in a variety of medical applications for people who have lost the ability to speak. It could also be tailored to a specific set of commands, allowing users to take “telepathic control” of devices.

It’s clear that this research holds considerable promise. But what do you think about this mind-reading technology? Would you use it in your day-to-day? Do you think it will eventually reach mainstream use? Let us know your thoughts in the comments below!
