How AI-powered brain implants are helping an ALS patient communicate

Nearly a century after German neurologist Hans Berger pioneered the mapping of human brain activity in 1924, researchers at Stanford University have designed two tiny brain-implantable sensors that, connected to a computer algorithm, translate thoughts into words, helping paralyzed people express themselves. On August 23, a study demonstrating the use of such a device in human patients was published in Nature. (A similar study was also published in Nature the same day.)

What the researchers created is a brain-computer interface (BCI)—a system that translates neural activity into intended speech—that helps paralyzed individuals, such as those with brainstem strokes or amyotrophic lateral sclerosis (ALS), express their thoughts through a computer screen. Once implanted, the pill-sized sensors send electrical signals from the cerebral cortex, a part of the brain associated with memory, language, problem-solving, and thought, to a custom-made AI algorithm that uses them to predict intended speech.

This BCI learns to identify the distinct patterns of neural activity associated with each of 39 phonemes, the smallest units of sound in spoken English—sounds such as “qu” in quill, “ear” in near, or “m” in mat. As a patient attempts speech, the decoded phonemes are fed into a complex autocorrect-like program that assembles them into words and sentences reflecting the intended speech. Through ongoing practice sessions, the AI software progressively improves at interpreting the user’s brain signals and accurately translating their speech intentions.

“The system has two components. The first is a neural network that decodes phonemes, or units of sound, from neural signals in real-time as the participant is attempting to speak,” says the study’s co-author Erin Michelle Kunz, an electrical engineering PhD student at Stanford University, via email. “The output sequence of phonemes from this network is then passed into a language model which turns it into text of words based on statistics in the English language.” 
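The two-stage pipeline Kunz describes—a decoder that emits phonemes, followed by a language model that turns them into words—can be sketched in miniature. The toy example below stands in hard-coded phoneme sequences for the neural network’s output and uses fuzzy matching against a tiny pronunciation dictionary as a stand-in for the language model; the dictionary entries and scoring are invented for illustration and are not the study’s actual model.

```python
# Toy sketch of the second stage described above: mapping a (possibly
# noisy) decoded phoneme sequence to the most plausible English word,
# autocorrect-style. The phoneme spellings (ARPAbet-like) and the tiny
# dictionary are illustrative assumptions, not the study's vocabulary.
from difflib import SequenceMatcher

# A tiny pronunciation dictionary: word -> phoneme sequence.
PRONUNCIATIONS = {
    "hello":   ["HH", "AH", "L", "OW"],
    "hungry":  ["HH", "AH", "NG", "G", "R", "IY"],
    "family":  ["F", "AE", "M", "AH", "L", "IY"],
    "thirsty": ["TH", "ER", "S", "T", "IY"],
}

def decode_word(phonemes):
    """Return the dictionary word whose pronunciation best matches
    the decoded phoneme sequence, even if some phonemes are wrong."""
    def score(word):
        return SequenceMatcher(None, PRONUNCIATIONS[word], phonemes).ratio()
    return max(PRONUNCIATIONS, key=score)

# Even with one phoneme misdecoded ("L" heard as "R"), the closest
# dictionary entry still wins—the "good guess" Willett describes.
print(decode_word(["HH", "AH", "R", "OW"]))            # hello
print(decode_word(["F", "AE", "M", "AH", "L", "IY"]))  # family
```

The real system replaces this fuzzy lookup with a large-vocabulary language model that also weighs which words are likely to follow one another, which is how it recovers whole sentences rather than isolated words.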

Over 25 four-hour training sessions, Pat Bennett, who has ALS—a disease that attacks the nervous system, impairing physical movement and function—practiced random sentences drawn from a database. For example, she would try to say: “It’s only been that way in the last five years” or “I left right in the middle of it.” When Bennett, now 68, attempted to read a given sentence, the implanted sensors registered her brain activity and relayed it through attached wires to the AI software, which decoded the attempted speech into phonemes and strung them into words displayed on the computer screen. The algorithm, in essence, acts like a phone’s autocorrect kicking in during texting.

“This system is trained to know what words should come before other ones, and which phonemes make what words,” says study co-author Frank Willett. “If some phonemes were wrongly interpreted, it can still take a good guess.”

By participating in twice-weekly software training sessions for almost half a year, Bennett was able to have her attempted speech translated at a rate of 62 words a minute—faster than previously recorded machine-based speech technology, Kunz and her team say. Initially, the model’s vocabulary was restricted to 50 words—for straightforward sentences built from words such as “hello,” “I,” “am,” “hungry,” “family,” and “thirsty”—with an error rate below 10 percent; it was then expanded to 125,000 words, with an error rate just under 24 percent.

Willett explains that this is not yet “an actual device people can use in everyday life,” but it is a step toward faster communication, so that speech-disabled people can take part more fully in everyday life.

“For individuals that suffer an injury or have ALS and lose their ability to speak, it can be devastating. This can affect their ability to work and maintain relationships with friends and family in addition to communicating basic care needs,” Kunz says. “Our goal with this work was aimed at improving quality of life for these individuals by giving them a more naturalistic way to communicate, at a rate comparable to typical conversation.” 

