By analyzing neural signals, a brain-computer interface (BCI) can now synthesize speech almost instantaneously for a man who lost his voice to a neurodegenerative disease, a new study finds.
The researchers caution that it will likely be a long time before such a device, which could restore speech for paralyzed patients, is used in everyday communication. Nevertheless, they hope the work "will lead to a route to improve these systems further, for example through technology transfer to industry," says Maitreya Wairagkar, a project scientist at the Neuroprosthetics Lab at the University of California, Davis.
One major potential application for brain-computer interfaces is restoring the ability to communicate to people who can no longer speak due to disease or injury. For example, scientists have developed many BCIs that can help translate neural signals into text.
However, text alone fails to capture many key aspects of human speech, such as intonation, which helps convey meaning. In addition, text-based communication is slow, Wairagkar says.
Now, researchers have developed what they call a brain-to-voice neuroprosthesis that can decode neural activity into sounds in real time. They detailed their findings June 11 in the journal Nature.
"Losing the ability to speak due to neurological disease is devastating," Wairagkar says. "Developing a technology that can bypass the damaged pathways of the nervous system to restore speech can have a major impact on the lives of people with speech loss."
Neural mapping for speech restoration
The new BCI mapped neural activity using four microelectrode arrays. In total, the scientists placed 256 microelectrodes in the ventral precentral gyrus, a brain region that plays a key role in controlling the muscles underlying speech.
"This technology does not 'read minds' or 'read inner thoughts,'" Wairagkar says. "We record from the region of the brain that controls the speech muscles. Therefore, the system only produces voice when the participant voluntarily tries to speak."
The researchers implanted the BCI in a 45-year-old volunteer with amyotrophic lateral sclerosis (ALS), a neurodegenerative disorder also known as Lou Gehrig's disease. Although the volunteer could still produce vocal sounds, he had been unable to produce intelligible speech on his own for years before receiving the BCI.
The neuroprosthesis recorded neural activity as the patient attempted to read aloud sentences displayed on a screen. The scientists then trained a deep-learning AI model on this data to produce his intended speech.
The researchers also trained a voice-cloning AI model on recordings of the patient made before the onset of his condition, so that the BCI could synthesize his pre-ALS voice. The patient reported that hearing the synthesized voice "made me feel happy, and it felt like my real voice," the study notes.
https://www.youtube.com/watch?v=fdfl5p4n6vc
Neuroprosthesis reproduces a man's speech. UC Davis
In experiments, the scientists found that the BCI could detect key aspects of intended vocal intonation. They had the patient speak a set of sentences either as statements, with no change in pitch, or as questions, which involve a rising pitch at the end of the sentence. They also had the patient emphasize one of the seven words in the sentence "I never said she stole my money" by changing its pitch. (The sentence has seven different meanings, depending on which word is emphasized.) These tests showed that the patient could control the pitch of his BCI voice to ask questions, emphasize specific words in a sentence, or sing simple three-pitch melodies.
"Not only what we say, but also how we say it, is equally important," Wairagkar says. "Our speech intonation helps us communicate effectively."
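The difference between the statement and question conditions above comes down to the pitch contour. A minimal, purely illustrative sketch (not code from the study; sample rate and frequencies are assumptions) showing how a flat contour versus a rising contour can be rendered as audio:

```python
import numpy as np

SR = 16_000                      # audio sample rate in Hz (assumed)
t = np.arange(SR) / SR           # one second of time points

# Statement: constant 120 Hz pitch. Question: pitch rising 120 -> 180 Hz.
flat_f0 = np.full_like(t, 120.0)
rising_f0 = 120.0 + 60.0 * t

def tone(f0: np.ndarray) -> np.ndarray:
    """Sine tone whose instantaneous frequency follows the contour f0 (Hz)."""
    phase = 2 * np.pi * np.cumsum(f0) / SR
    return np.sin(phase)

statement = tone(flat_f0)        # flat intonation
question = tone(rising_f0)       # rising intonation at a steady rate
print(statement.shape, question.shape)  # (16000,) (16000,)
```

Played back, the second tone sweeps upward the way a spoken question does at its end; the BCI's decoder had to recover this kind of pitch intent directly from neural activity.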
In all, the new BCI can acquire neural signals and produce sounds with a delay of just 25 milliseconds, enabling near-instantaneous speech synthesis, Wairagkar says. The BCI also proved flexible enough to speak made-up pseudo-words, as well as interjections such as "ahh," "eww," "ohh," and "hmm."
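The 25-millisecond figure reflects a streaming design: audio is produced chunk by chunk as neural data arrives, rather than after a whole sentence. The following is a hypothetical sketch of that idea only; the decoder, feature shapes, and sample rate are all stand-ins, not the study's actual model.

```python
import numpy as np

SAMPLE_RATE_HZ = 16_000                              # audio rate (assumed)
CHUNK_MS = 25                                        # per-chunk latency from the study
CHUNK_SAMPLES = SAMPLE_RATE_HZ * CHUNK_MS // 1000    # 400 audio samples per chunk

def decode_chunk(neural_features: np.ndarray) -> np.ndarray:
    """Stand-in for the trained deep-learning decoder: maps one chunk of
    neural features to one chunk of audio samples (here, just noise)."""
    rng = np.random.default_rng(0)
    return rng.standard_normal(CHUNK_SAMPLES) * neural_features.mean()

def stream_decode(feature_chunks):
    """Yield audio chunk by chunk, so each chunk is available ~25 ms after
    its neural data instead of after the full utterance."""
    for features in feature_chunks:
        yield decode_chunk(features)

# Simulate one second of streamed decoding: 40 chunks of 25 ms each,
# with 256 per-electrode features per chunk (matching the electrode count).
chunks = [np.ones(256) for _ in range(40)]
audio = np.concatenate(list(stream_decode(chunks)))
print(audio.shape)  # (16000,) -> one second of audio
```

The design choice is that latency is bounded by the chunk length, not the utterance length, which is what makes conversational back-and-forth plausible.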
The resulting voice was often intelligible, but not consistently so. In tests where human listeners had to transcribe the BCI's words, they understood what the patient said about 56 percent of the time, up from about 3 percent when he did not use the BCI.
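Intelligibility figures like the 56 percent above come from comparing listener transcripts against the sentences the patient attempted. A naive illustrative sketch of one way to score a transcript (the study's actual metric may differ, e.g. an edit-distance-based word error rate):

```python
def word_accuracy(reference: str, transcript: str) -> float:
    """Position-wise word accuracy: fraction of reference words the
    listener transcribed correctly. Simplistic: ignores insertions
    and deletions, which a real WER computation would handle."""
    ref = reference.lower().split()
    hyp = transcript.lower().split()
    correct = sum(r == h for r, h in zip(ref, hyp))
    return correct / len(ref)

# A listener transcribes a BCI-synthesized sentence with one mistake:
score = word_accuracy("i never said she stole my money",
                      "i never said he stole my money")
print(round(score, 2))  # 6 of 7 words match -> 0.86
```

Averaging such per-sentence scores over many listeners and sentences yields an overall intelligibility percentage of the kind reported in the study.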
Neural recordings of the BCI participant shown on a screen. UC Davis
"We do not claim that this system is ready to be used for speaking and interacting by someone who has lost the ability to speak," Wairagkar says. "Rather, we have shown a proof of concept of what is possible with current BCI technology."
In the future, the scientists plan to improve the accuracy of the device, for example with more electrodes and better AI models. They also hope that BCI companies will begin clinical trials incorporating this technology. "It is still unknown whether this BCI will work for those who are completely locked in," that is, completely paralyzed save for eye movement and blinking, Wairagkar says.
Another interesting research direction is to study whether such speech BCIs could be useful for people with language disorders, such as aphasia. "Our current target patient population cannot speak due to muscle paralysis," Wairagkar says. "However, their ability to produce language and their cognition remain intact." In contrast, she notes, future work could examine restoring speech to people with damage to the brain regions that produce speech, or to those who have been unable to speak since childhood due to disability.