It has been eight years since a patient with amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig’s disease, lost her ability to speak. Because of the paralysis caused by ALS, she can only produce sounds; her words have become incomprehensible. To communicate, she has had to rely on a whiteboard or an iPad.

But that has now changed. The woman, who lives in the USA, volunteered for a new type of brain-computer interface (BCI). Now she can communicate phrases like “My house doesn’t belong to me” or “It’s just difficult” at a pace approaching that of normal conversation.

This emerges from a paper that a team from Stanford University has published on a preprint server. The study has yet to be peer reviewed, but the scientists report that their subject, referred to simply as “T12,” broke previous records with the implanted interface. The implant allows communication at a speed of 62 words per minute, three times faster than the previous record. Philip Sabes, a researcher at the University of California at San Francisco who was not involved in the project, called the study a “major breakthrough” and says such experimental methods will soon leave the laboratory. Commercialization is conceivable.

According to Sabes, the performance is at a level that many people who have lost the ability to speak would want. “People will want that.” For comparison, people without speech impairments typically speak about 160 words per minute. And even in the age of keyboards, quick typing on phones, emojis, and abbreviations, spoken language remains the fastest form of human-to-human communication.

The study generated a lot of interest on Twitter and other social media, also because one of the main authors, Krishna Shenoy, died of pancreatic cancer on the day the preprint was published. Shenoy had devoted parts of his career to improving the speed of communication across brain interfaces, and he himself kept track of the current records on the web. In 2019, another subject Shenoy worked with managed to “type” up to 18 words per minute via a brain interface, a record at the time.

The brain-computer interfaces Shenoy’s team works with consist of a small pad of pointed electrodes embedded in a person’s motor cortex, a brain region primarily involved in movement. With the system, the researchers can record the activity of a few dozen neurons simultaneously and find patterns that reflect what movements someone is thinking about, even when the person is paralyzed.

In previous work, paralyzed volunteers were first asked to imagine hand movements. By decoding their neural signals in real time, implants allowed them to control a cursor on a screen, select letters on a virtual keyboard, play video games, and even control a robotic arm. In the new study, however, the Stanford team wanted to know whether activity in the motor cortex can also provide information about movements related to language. That is, could they detect how “T12” tries to move her mouth, tongue, and vocal cords when she attempts to speak?

These are small, subtle movements. And according to Sabes, a key discovery by the Stanford team is that just a few neurons provide enough information for a computer program to predict with good accuracy what words a patient is trying to say. Shenoy’s team then displays the predicted words on a computer screen, where a computer voice reads them out.
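The preprint’s actual decoder and data are not reproduced here, but the basic idea, classifying short windows of neural firing rates into attempted speech sounds, can be sketched roughly as follows. Everything in this example (channel count, phoneme labels, the synthetic spike data) is invented for illustration and is not the Stanford team’s method.

```python
# Minimal sketch, NOT the Stanford decoder: classify attempted speech
# sounds from neural firing rates with an off-the-shelf classifier.
# All data below is synthetic; labels and channel counts are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n_electrodes = 96                    # roughly a Utah Array's channel count
phonemes = ["AA", "F", "M", "S"]     # hypothetical target classes

# Synthetic training data: for each phoneme, draw firing-rate vectors
# (spike counts per short time bin) around a class-specific mean.
class_means = [rng.uniform(0.0, 5.0, n_electrodes) for _ in phonemes]
X_train, y_train = [], []
for label, mean_rates in enumerate(class_means):
    for _ in range(200):
        X_train.append(rng.poisson(mean_rates))
        y_train.append(label)

clf = LogisticRegression(max_iter=1000)
clf.fit(np.array(X_train), np.array(y_train))

# At "run time", each new bin of activity is mapped to the most likely
# attempted sound; a language model can then assemble sounds into words.
new_bin = rng.poisson(class_means[1])        # simulate an attempted "F"
probabilities = clf.predict_proba([new_bin])[0]
print({p: round(float(pr), 3) for p, pr in zip(phonemes, probabilities)})
```

The real system is of course far more sophisticated; the toy example only illustrates that a pattern classifier over a few dozen channels can, in principle, separate attempted sounds.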

Shenoy and colleagues draw on earlier work by Edward Chang at the University of California at San Francisco (UCSF), who considers speech to be the most complex movement that humans perform. Words are formed by the mouth, lips and tongue, shaping air and vibration. For example, to make the sound “F,” you place your upper teeth on your lower lip and exhale, just one of dozens of mouth movements required to speak.

Chang had previously used only electrodes placed on the surface of the brain to allow a subject to speak through a computer, but in their preprint, the Stanford researchers say their system is more accurate and three to four times faster. “Our results suggest a viable way to enable people with paralysis to communicate at normal speeds,” write the researchers, who include the late Shenoy and neurosurgeon Jaimie Henderson.

David Moses, who works with Chang’s team at UCSF, says the work is reaching “amazing new standards of performance.” But even as records continue to be broken, Moses says, “it will become increasingly important to demonstrate stable and reliable performance over a number of years.” The quality of the data deteriorates over time because scarring can occur in the brain. Whether regulatory authorities will approve such an implant for commercial use remains an open question.

The way forward will likely include both more sophisticated implants and tighter integration with artificial intelligence. The current system already uses some machine learning. To improve accuracy, for example, the Stanford team used software that predicts which word in a sentence will typically come next. In English, for instance, “I” is more often followed by “am” than by “ham,” even though these words sound similar and could produce similar patterns of activity in a person’s brain. With the word prediction system added, the subject was able to speak faster and with fewer mistakes.
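The article does not spell out how such a word predictor is wired into the decoder, but the effect behind the “I am” versus “I ham” example can be sketched with a toy bigram model. All probabilities and decoder scores below are invented for illustration.

```python
# Toy sketch of the "I am" vs. "I ham" example: combine a (hypothetical)
# neural decoder's confidence for each candidate word with a tiny bigram
# language model. All numbers here are made up.
import math

# P(next_word | previous_word) from a toy bigram model
bigram = {
    ("i", "am"): 0.20,
    ("i", "ham"): 0.0001,
}

# Hypothetical decoder output: similar-sounding words produce similar
# neural patterns, so the decoder alone cannot tell them apart.
decoder_scores = {"am": 0.51, "ham": 0.49}

previous_word = "i"

def combined_score(word: str) -> float:
    """Log decoder probability plus log bigram probability."""
    lm_prob = bigram.get((previous_word, word), 1e-6)
    return math.log(decoder_scores[word]) + math.log(lm_prob)

best = max(decoder_scores, key=combined_score)
print(best)  # -> "am": the language model breaks the near-tie
```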

Newer large language models such as GPT-3 are now able to write entire essays and answer questions. By connecting these models to brain interfaces, people could speak even faster because the system can better guess what they’re trying to say based on partial information. “The success of large language models in recent years makes me believe that such a speech prosthesis is within our grasp, because you might no longer need such impressively good input to read speech,” says Sabes.
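As a rough illustration of that idea (and explicitly not part of the Stanford system), an off-the-shelf model such as GPT-2, available through the public Hugging Face transformers library, can propose completions for a fragment the decoder has only partially recovered; the fragment below reuses one of the phrases quoted earlier.

```python
# Illustration only, not part of the Stanford system: let a public
# language model (GPT-2 via Hugging Face transformers) guess how a
# partially decoded sentence might continue.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Suppose the neural decoder has confidently recovered only a fragment.
partial_decoding = "It's just"

completions = generator(
    partial_decoding,
    max_new_tokens=8,        # propose a few more words
    num_return_sequences=3,  # several candidate continuations
    do_sample=True,
)
for candidate in completions:
    print(candidate["generated_text"])
```

In practice such guesses would have to be weighed against the ongoing neural signal, since the goal is to restore what the person is actually trying to say, not what the model finds most plausible.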

Shenoy’s group is part of a consortium called BrainGate, which has placed electrodes in the brains of more than a dozen volunteers. It currently uses an implant called the Utah Array, a rigid square of metal with about 100 needle-like electrodes. Some companies, including Elon Musk’s brain interface company Neuralink and a start-up called Paradromics, say they have developed more advanced interfaces that can record from thousands or even tens of thousands of neurons at once.

While skeptics still question whether measuring (significantly) more neurons at once makes a difference, the new study suggests it might, especially when it comes to reading out something as complex as language from the brain. The Stanford scientists found that the more neurons they read at once, the fewer errors they made in understanding subject “T12.” “This is an important finding because it suggests that efforts by companies like Neuralink to get 1,000 electrodes into the brain will make a difference when the task is massive,” says Sabes, who previously worked as a senior scientist at Neuralink.




(jle)
