Researchers are converting brain signals into speech

Can you speak again with the help of artificial intelligence?

Some particularly serious illnesses ultimately lead to the loss of the ability to speak. Researchers have now succeeded in restoring speech to test subjects by reading out their brain activity and generating entire spoken sentences from it.

In a recent study at the University of California San Francisco, subjects' speech was successfully restored: a new technology read whole sentences directly from the brain and then converted them into audible speech. The results of the study were published in the English-language journal "Nature".

The process is complex and invasive

“For the first time, a study shows that we can generate whole spoken sentences from a person's brain activity. This is encouraging proof that, with technology already within reach, we should be able to build a device that is clinically viable for patients with speech loss,” said study author Dr. Edward Chang of UC San Francisco in a press release. The method is complex and invasive, and it does not decode exactly what the subject thinks, but what the subject actually said.

Where did the spoken words come from?

The subjects in the experiment, which was led by the linguist Gopala Anumanchipalli, already had large electrode arrays implanted in the brain for another medical procedure. The researchers had these people read out several hundred sentences while precisely recording the signals captured by the electrodes. The study's authors were able to identify a specific pattern of brain activity that occurs when the subjects think of words, and to assign it to specific cortical areas, before the final signals are sent from the so-called motor cortex to the tongue and mouth muscles. This kind of intermediate signal can be used to reconstruct speech, the researchers say.

A virtual model of the subject's vocal tract was created

Through direct analysis of the audio material, the team was able to determine which muscles and movements are involved, and from this built a kind of virtual model of the person's vocal tract. Using a machine learning system, the researchers then mapped the brain activity recorded during the sessions onto this virtual model. In essence, this allowed the recorded brain activity to be used to control the model's mouth movements and pronunciation.
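The two-stage idea described above can be sketched in code. This is a minimal illustration, not the study's actual implementation: the study used recurrent neural networks trained on real electrocorticography (ECoG) recordings, whereas here random linear maps and invented dimensions stand in for both stages purely to show the data flow from brain signals to an intermediate articulatory representation to acoustic features.

```python
import numpy as np

# Illustrative placeholder dimensions -- not the paper's values.
N_ELECTRODES = 64      # cortical recording channels
N_ARTICULATORS = 12    # kinematic features (tongue, lips, jaw, larynx)
N_ACOUSTIC = 32        # spectral features handed to a speech synthesizer
N_TIMESTEPS = 100      # time samples in one recording

rng = np.random.default_rng(0)

# Stage 1: decode articulatory movements from brain activity.
# A random linear map stands in for the trained neural network.
W_neural_to_artic = rng.standard_normal((N_ELECTRODES, N_ARTICULATORS))

# Stage 2: drive the virtual vocal tract model, mapping decoded
# movements to acoustic features that a synthesizer turns into audio.
W_artic_to_acoustic = rng.standard_normal((N_ARTICULATORS, N_ACOUSTIC))

def decode_speech(ecog: np.ndarray) -> np.ndarray:
    """Two-stage decode: brain signals -> articulation -> acoustics."""
    articulation = ecog @ W_neural_to_artic          # intermediate signal
    acoustics = articulation @ W_artic_to_acoustic   # synthesizer input
    return acoustics

# Simulated recording: one time series of neural activity.
ecog = rng.standard_normal((N_TIMESTEPS, N_ELECTRODES))
acoustics = decode_speech(ecog)
print(acoustics.shape)  # one acoustic feature vector per time step
```

The key design point the study made is exactly this intermediate step: decoding movements first, rather than sounds directly, is what made the reconstruction tractable.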

System determines which words are formed by facial muscles

It is important to understand that this process does not turn abstract thoughts into words. Rather, it interprets the specific instructions the brain sends to the facial muscles and determines, from a vocabulary of known words, which words those movements would form. The resulting synthetic speech is not crystal clear, but it is understandable. Correctly calibrated, the system may be able to produce 150 words per minute for someone who is otherwise unable to speak.
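The matching step described above, picking the known word whose muscle movements best fit the decoded instructions, can be sketched as a nearest-neighbor lookup. The vocabulary, its three-dimensional "articulatory signatures", and all values below are invented for illustration; the real system works on far richer kinematic features.

```python
import numpy as np

# Toy vocabulary: each known word is represented by a vector of the
# (hypothetical) muscle movements that would produce it.
vocab = {
    "hello": np.array([0.9, 0.1, 0.4]),
    "world": np.array([0.2, 0.8, 0.5]),
    "speech": np.array([0.6, 0.6, 0.9]),
}

def closest_word(decoded_movement: np.ndarray) -> str:
    """Return the known word whose movement pattern best matches."""
    return min(vocab, key=lambda w: np.linalg.norm(vocab[w] - decoded_movement))

# A decoded movement close to the signature of "hello":
print(closest_word(np.array([0.85, 0.15, 0.35])))  # -> hello
```

Constraining the output to known words is what keeps the synthetic speech understandable even when the decoded movements are noisy.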

Accuracy achieved is an amazing improvement

In the future, work should focus on imitating spoken language perfectly. Nevertheless, the accuracy achieved in the study is an astonishing improvement in real-time communication compared with the options currently available. For comparison: a person affected by a degenerative muscle disease often communicates by spelling out words letter by letter with eye movements. This yields about five to ten words per minute, and other methods for disabled people are even slower. It is something of a miracle that such people can communicate at all, but this time-consuming and less natural method is far from the speed and expressiveness of real speech.

More research is needed

If a person were able to use the newly developed method, it would come much closer to ordinary speech, albeit at the expense of perfect accuracy, the researchers say. The problem with the method is that it requires very carefully collected data from a healthy speech system, from the brain down to the tongue. For many people this data can no longer be collected, and for others the invasive collection method is unlikely to be recommended by doctors. The good news is that the results are at least a start, and there are many conditions in which the new system would theoretically work. Collecting this critical brain and voice recording data could also be done preventively if a stroke or degenerative disease is feared. (as)
