Facebook stops funding brain-reading computer interfaces

Now the answer is in, and it's not close at all. Four years after announcing a "crazy and amazing" project to build a "silent speech" interface that would use optical technology to read thoughts, Facebook has shelved the effort, saying that consumer brain-reading is still a very long way off.

In a blog post, Facebook said it is discontinuing the project and will instead focus on an experimental wrist controller for virtual reality that reads muscle signals in the arm. "Although we still believe in the long-term potential of head-mounted optical [brain-computer interface] technology, we have decided to focus our current efforts on a different neural interface approach that has a nearer-term path to market," the company said.

Facebook's brain-typing project took it into uncharted territory, including funding brain surgeries at a California hospital and building prototype helmets that could shoot light through the skull, and into a heated debate over whether technology companies should have access to private brain information. Ultimately, though, the company appears to have decided that the research will not lead to a product anytime soon.

"We have gained a lot of hands-on experience with these technologies," said Mark Chevillet, the physicist and neuroscientist who led the silent speech project until last year but recently moved to a role studying how Facebook handles elections. "That's why we can say with confidence that, as a consumer interface, head-mounted optical silent speech devices are still a long way out. Probably longer than we expected."

Mind reading

The reason for the excitement around brain-computer interfaces is that companies see mind-controlled software as a potential breakthrough as important as the computer mouse, the graphical user interface, or the swipe screen. What's more, researchers have already demonstrated that the results can be remarkable when electrodes are placed directly in the brain to tap individual neurons: paralyzed patients with such implants have been able to deftly move robotic arms, play video games, and type, all through mind control.

Facebook's goal was to turn those findings into a consumer technology anyone could use, which meant a helmet or headset you could put on and take off. "We never intended to make a brain surgery product," Chevillet said. Given the social giant's many regulatory problems, CEO Mark Zuckerberg had once said that the last thing the company should do is open up people's skulls. "I don't want to see the congressional hearings," he joked.

In fact, as brain-computer interfaces have advanced, serious new concerns have emerged: what would happen if large technology companies could know what people are thinking? In Chile, legislators are even considering a human-rights bill to protect brain data, free will, and mental privacy from technology companies. Given Facebook's poor record on privacy, the decision to halt the research may have a collateral benefit: it puts some distance between the company and growing worries about "neurorights."

Facebook's project was aimed specifically at a brain controller that could mesh with its ambitions in virtual reality; it acquired Oculus VR for $2 billion in 2014. To get there, Chevillet said, the company took a two-pronged approach. First, it needed to determine whether a thought-to-speech interface was even possible. To that end, it sponsored research at the University of California, San Francisco, where a researcher named Edward Chang placed electrode pads on the surface of people's brains.

While implanted electrodes read data from single neurons, this technique, called electrocorticography or ECoG, measures from fairly large groups of neurons at once. Chevillet said Facebook hoped it would also prove possible to detect equivalent signals from outside the head.

The UCSF team has made some surprising progress, and today it reported in the New England Journal of Medicine that it used those electrode pads to decode speech in real time. The subject was a 36-year-old man the researchers call "Bravo-1," who, after a severe stroke, lost the ability to form intelligible words and can only grunt or moan. In their report, Chang's team says that with the electrodes on the surface of his brain, Bravo-1 has been able to form sentences on a computer at a rate of about 15 words per minute. The technique involves measuring neural signals in the part of the motor cortex associated with Bravo-1's efforts to move his tongue and vocal tract as he imagines speaking.

To reach that result, Chang's team asked Bravo-1 to imagine saying one of 50 common words nearly 10,000 times, feeding the patient's neural signals into a deep-learning model. After training the model to match words to neural signals, the team could correctly determine which word Bravo-1 was trying to say 40% of the time (chance would be about 2%). Even so, his sentences were full of errors: "How are you?" might come out as "I'm hungry, how are you."
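To make that classification step concrete, here is a minimal, illustrative sketch in Python. It is not the UCSF team's actual model: the feature sizes, the random placeholder data, and the simple logistic-regression classifier standing in for their deep-learning decoder are all assumptions for demonstration only.

# Minimal sketch of a 50-word decoder; placeholder data, NOT the UCSF model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_features, n_words = 9800, 256, 50   # ~10,000 imagined word attempts

# Placeholder arrays standing in for preprocessed ECoG features
# (e.g., band power per electrode per time window) and word labels.
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, n_words, size=n_trials)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = LogisticRegression(max_iter=1000)   # stand-in for the deep-learning decoder
clf.fit(X_train, y_train)

# With real recordings this is where the 40%-versus-2%-chance comparison
# would appear; random placeholders stay near chance.
print(f"word accuracy: {clf.score(X_test, y_test):.1%} (chance = 1/50 = 2%)")

With real recordings in place of the random placeholders, the printed accuracy is the number the article compares against the 2% chance level.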

But the scientists improved the performance by adding a language model, a program that judges which word sequences are most likely to occur in English. That boosted the accuracy to 75%. With this cyborg approach, the system could work out that Bravo-1's decoded sentence "I am correct to my nurse" actually meant "I like my nurse."
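A rough sketch of how such a language-model rescoring step can work is below. The words, probabilities, and weighting are invented for illustration and are not taken from the study; the idea is simply that the decoder's per-word guesses get combined with how plausible each word is given the sentence so far.

# Illustrative rescoring of decoder guesses with a language-model prior.
import math

def rescore(decoder_probs, lm_probs, lm_weight=1.0):
    """Combine decoder and language-model scores in log space, pick the best word."""
    combined = {
        word: math.log(decoder_probs.get(word, 1e-9))
              + lm_weight * math.log(lm_probs.get(word, 1e-9))
        for word in decoder_probs
    }
    return max(combined, key=combined.get)

# Hypothetical next-word candidates after "I ..."
decoder_probs = {"am": 0.35, "like": 0.30, "right": 0.20, "nurse": 0.15}
lm_probs = {"am": 0.25, "like": 0.40, "right": 0.05, "nurse": 0.01}

print(rescore(decoder_probs, lm_probs))  # -> "like", even though the raw decoder preferred "am"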

As striking as the result is, there are more than 170,000 words in English, so performance would plummet outside Bravo-1's restricted vocabulary. That means the technique, while potentially useful as a medical aid, is nowhere near what Facebook had in mind. "In the foreseeable future we see applications in clinical assistive technology, but that is not where our business is," Chevillet said. "We are focused on consumer applications, and there is still a very long way to go."

A device developed by Facebook for diffuse optical tomography that uses light to measure changes in blood oxygen in the brain.

FACEBOOK

Optical failure

Facebook's decision to quit brain reading comes as no shock to researchers who study these technologies. "I can't say I was surprised, because they had hinted that they were thinking in the short term and were going to reassess things," said Mark Slutsky, a professor at Northwestern University whose former student Emily Munger is a key Facebook hire. "Just speaking from experience, the goal of decoding speech is a big challenge. We're still a long way from a practical, all-encompassing solution."

Nonetheless, Slutsky said the UCSF project is an "impressive next step" that demonstrates both the remarkable possibilities and the limits of brain-reading science. "Whether you can decode free-form speech remains to be seen," he said. "A patient saying 'I want a drink of water' versus 'I want my medicine': those are different things." He said that if the AI models could be trained for longer, and on more than one person's brain, they could improve quickly.

While the UCSF research was under way, Facebook was also paying other centers, such as the Johns Hopkins University Applied Physics Laboratory, to figure out how to beam light through the skull in order to read neurons noninvasively. Much like MRI, these techniques rely on sensing reflected light to measure blood flow to regions of the brain.

It is these optical techniques that remain the bigger stumbling block. Even with recent improvements, including some made by Facebook, they cannot pick up neural signals with sufficient resolution. Another problem, Chevillet said, is that the blood flow these methods detect peaks about five seconds after a group of neurons fires, which makes it too slow to control a computer.
