How can artificial intelligence help ALS patients regain their voices?

Technology is making great strides toward integrating the human mind with the machine. After the success of brain-computer interfaces (BCIs) in controlling devices and robots, companies specializing in this field, such as Neuralink and Synchron, have begun reaching milestones that could change the lives of millions of people around the world.

Recently, Neuralink received the FDA's breakthrough device designation for its new device, Blindsight, which aims to restore sight to the blind by directly stimulating the brain.

This revolutionary device aims to stimulate the visual areas of the brain using precise electrical signals. Elon Musk, the company's founder and CEO, explained that the device could help even those who have completely lost their eyes and optic nerves to see again, provided that the area of the brain responsible for vision (the visual cortex) is intact.

Meanwhile, Synchron announced that its implant allowed a patient with amyotrophic lateral sclerosis (ALS) to control Amazon's Alexa voice assistant with his thoughts.

Using the implant, the patient could mentally tap icons on an Amazon Fire tablet, giving him access to a wide range of Alexa features, including viewing security cameras, making and answering video calls, and controlling a Fire TV by pointing a cursor with his brain. Synchron’s technology was life-changing for a person who had no use of his voice or limbs.

But what if AI, especially large language models, were used with brain implants to convert the signals recorded by those implants into voice commands in real time? Could this help people with brain injuries or neurological diseases like amyotrophic lateral sclerosis regain their voices?

Brain implants and artificial intelligence restore hope to ALS patients:

A team of researchers at UC Davis Health has developed a new brain-computer interface (BCI) that can translate the thoughts of people with speech difficulties into understandable speech.

Thanks to advanced artificial intelligence models, this interface has achieved a remarkable accuracy of up to 97.5%. It represents a qualitative leap in human-machine communication and opens new horizons for treating neurological diseases that affect the ability to speak.

How do brain-computer interfaces work?

Brain-computer interfaces are one of the most promising medical technologies of our time, as they seek to enable people who have lost vital functions due to injuries or neurological diseases to regain them.

This technology relies on networks of tiny electrodes that are implanted on the surface of the brain or inside its tissues to record the brain’s electrical activity, which is a series of electrical signals that transmit information between nerve cells. The computer then analyzes and interprets these signals to determine the activity that the patient intends to perform.
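The recording-and-decoding loop described above can be sketched in a few lines. Everything below is illustrative: apart from the 256-electrode count mentioned later in the article, none of the numbers or the nearest-template classifier come from any real implant's software.

```python
# Hypothetical sketch of the core BCI loop: multi-electrode voltage
# samples in, an inferred intention out.
import numpy as np

rng = np.random.default_rng(0)

N_ELECTRODES = 256   # the implant described in the article has 256 electrodes
WINDOW = 20          # samples per analysis window (illustrative)

# Pretend we have learned one "template" activity pattern per intention.
templates = {
    "rest":  rng.normal(0.0, 1.0, N_ELECTRODES),
    "speak": rng.normal(0.5, 1.0, N_ELECTRODES),
}

def decode(window: np.ndarray) -> str:
    """Average the window per electrode and return the closest template."""
    features = window.mean(axis=0)  # shape (N_ELECTRODES,)
    return min(templates,
               key=lambda k: float(np.linalg.norm(features - templates[k])))

# Simulate a window of activity that resembles the "speak" pattern.
window = templates["speak"] + rng.normal(0.0, 0.1, (WINDOW, N_ELECTRODES))
print(decode(window))  # speak
```

Real decoders use trained neural networks rather than template matching, but the data flow is the same: a short window of electrical activity is reduced to features, and the features are mapped to the patient's intended action.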

Initial research in brain-computer interfaces focused on restoring movement, particularly of the arm and hand. However, loss of the ability to speak poses a greater challenge, especially for patients with neurological diseases such as amyotrophic lateral sclerosis (ALS).

But in recent years, researchers have made significant progress in the field of speech brain-computer interfaces, which can record the brain signals that form when a person tries to speak, convert them into written text using complex algorithms, and then convert this text into audible sound using text-to-speech technologies.
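That three-stage pipeline, from neural signals to text to audio, can be outlined as follows. The decoder and synthesizer here are crude stand-ins for the learned models a real speech BCI uses; every name and threshold is invented.

```python
# Illustrative speech-BCI pipeline: neural signals -> text -> audio.
from dataclasses import dataclass

@dataclass
class NeuralWindow:
    electrode_rms: list[float]   # one activity value per electrode channel

def decode_to_text(window: NeuralWindow) -> str:
    # A real system runs trained decoding models here; we fake a threshold.
    return "hello" if max(window.electrode_rms) > 0.5 else ""

def synthesize(text: str) -> bytes:
    # A real system would call a TTS engine trained on recordings of the
    # patient's pre-illness voice; we return placeholder "audio" bytes.
    return text.encode("utf-8")

window = NeuralWindow(electrode_rms=[0.1, 0.9, 0.3])
text = decode_to_text(window)
audio = synthesize(text)
print(text)  # hello
```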

However, developing these systems faced many challenges. Most notably, the AI programs responsible for decoding brain signals needed a huge amount of data and training to learn to translate the signals accurately. These programs also had difficulty distinguishing words precisely, which led to errors in rendering the patient's speech and hindered effective communication.

This technology can be likened to translating a complex foreign language. Just as a translator needs a large dictionary and extensive experience to accurately translate text, artificial intelligence programs need a huge amount of data and training to understand the language of the brain and convert it into understandable speech.

Researchers at the University of California, Davis Health, in a study published in the New England Journal of Medicine, have shown that they succeeded in overcoming previous challenges in the field of brain-computer interfaces: they developed a new system, based on a set of artificial intelligence models, capable of decoding the language of the brain with high accuracy and converting it into understandable speech.

The idea of the system is to translate the patient's thoughts directly into audible words. Special electrode arrays implanted in the patient's brain capture speech signals and convert them into text that appears on a computer screen. The computer then reads the text aloud in a voice similar to the person's voice before the disease took hold.

To test the system, the team enrolled Casey Harrell, a 45-year-old man with ALS, in the BrainGate clinical trial. At the time of his enrollment, Harrell was quadriplegic and his speech had become very difficult to understand, so he needed others to help interpret it.

In July 2023, the patient had four microelectrode arrays implanted in the brain, specifically in the left precentral gyrus, the area responsible for coordinating speech. The arrays were designed to record brain activity and together comprise 256 electrodes implanted in the cerebral cortex to capture brain signals with high accuracy.

“We capture subtle neural signals that reflect the patient’s attempts to move his muscles and produce speech,” explains Dr. Sergey Stavisky, assistant professor in the Department of Neurosurgery and co-principal investigator of the study. “These signals are recorded from the area of the brain responsible for moving the speech muscles, and then we translate these complex neural patterns into symbols and then into understandable words.”
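The second half of that chain, turning symbols into words, can be illustrated with a toy greedy lexicon match over a phoneme-like stream. Both the stream and the two-word lexicon below are invented for illustration; real systems use far larger vocabularies and probabilistic search.

```python
# Toy symbol-to-word assembly: match the longest known phoneme
# sequence at each position against a small lexicon.
LEXICON = {
    ("HH", "AH", "L", "OW"): "hello",
    ("W", "ER", "L", "D"): "world",
}

def phonemes_to_words(phonemes):
    """Greedily match the longest known phoneme prefix to a word."""
    words, i = [], 0
    while i < len(phonemes):
        for end in range(len(phonemes), i, -1):
            chunk = tuple(phonemes[i:end])
            if chunk in LEXICON:
                words.append(LEXICON[chunk])
                i = end
                break
        else:
            i += 1   # skip an unrecognized symbol and keep going
    return words

stream = ["HH", "AH", "L", "OW", "W", "ER", "L", "D"]
print(phonemes_to_words(stream))  # ['hello', 'world']
```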

Previous brain-computer interface systems that translate thoughts into speech had a major problem: frequent errors in identifying words, which made communication difficult and unreliable.

"Our goal with this project was to develop a more accurate system that would allow the user to express themselves clearly at any time they wished," explained David Brandman, a neurosurgeon and co-investigator on the study.

Harrell tested the new system in several scenarios, including casual conversations and specific requests. In each case, the system, with the help of several machine learning models and large language models, decoded brain signals and converted them into written words on the spot.
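The role of a language model in that kind of setup can be sketched as a rescoring step: the neural decoder proposes candidate word sequences, and the language model picks the most linguistically plausible one. The candidates and the tiny bigram "LM" below are made up for illustration and bear no relation to the study's actual models.

```python
# Toy language-model rescoring: choose the candidate sentence with
# the highest total bigram log-probability.
BIGRAM_LOGPROB = {
    ("i", "need"): -0.5, ("need", "water"): -0.7,
    ("i", "knead"): -6.0, ("knead", "water"): -8.0,
}

def score(sentence: list[str]) -> float:
    """Sum of bigram log-probabilities; unseen bigrams get a penalty."""
    return sum(BIGRAM_LOGPROB.get(pair, -10.0)
               for pair in zip(sentence, sentence[1:]))

# Two acoustically similar decodings; the LM prefers the plausible one.
candidates = [["i", "need", "water"], ["i", "knead", "water"]]
best = max(candidates, key=score)
print(" ".join(best))  # i need water
```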

Furthermore, the system read these words aloud in a voice resembling Harrell's own, achieved by software trained on recordings of his voice made before he became ill.

Impressive results:

In record time, the system achieved high speech recognition accuracy, outperforming many commercially available systems. During the first training session on speech data, the system achieved an astonishing 99.6% accuracy in recognizing 50 different words in just 30 minutes. As the vocabulary size increased to 125,000 words in the second session, the system needed an additional 1.4 hours of training to achieve 90.2% accuracy.

With continued data collection and training, the system maintained a high accuracy of 97.5%, which represents a qualitative leap in the field of speech recognition systems.
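Accuracy figures like these are conventionally derived from the word error rate (WER): the minimum number of word substitutions, insertions, and deletions needed to turn the decoded sentence into the intended one, divided by the intended sentence's length, with accuracy being roughly 100% minus WER. A standard dynamic-programming WER computation, for illustration (the example sentences are invented):

```python
# Word error rate via edit distance between word sequences.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("i want to see my daughter", "i want to see my daughter"))  # 0.0
# One deleted word ("to") and one substitution ("daughter" -> "door"):
print(wer("i want to see my daughter", "i want see my door"))  # 0.333...
```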

Promising future:

This achievement represents a turning point in the field of brain-computer interfaces, and opens new horizons for the treatment of many other neurological diseases, such as stroke and multiple sclerosis. As technology continues to advance, we can expect new generations of brain-computer interfaces to emerge that are smaller and more efficient, making this technology available to a greater number of patients.

In conclusion, brain-computer interfaces are one of the most significant developments in biomedicine in recent decades, and this technology has proven its ability to improve the quality of life for patients with severe disabilities, giving them hope for a more independent and connected future.

