Honda Announces Brain-controlled Asimo Robot
Three Japanese firms have jointly developed a basic technology for controlling a robot with brain activity: sensors detect signals from the brain while a person imagines a movement.
The firms, Honda Research Institute Japan Co Ltd (Honda Motor Co Ltd's R&D subsidiary), the Advanced Telecommunications Research Institute International (ATR) and Shimadzu Corp, made the announcement at a press conference on March 31, 2009.
For example, when the person imagines moving his or her left hand, that thought produces measurable brain signals. The new technology uses these signals to make "Asimo," Honda's two-legged robot, raise its left hand, the companies said.
Likewise, if the operator thinks about moving the right hand, both legs or the tongue, Asimo responds accordingly (for the tongue, Asimo acts as if it were eating something), though these motions were not demonstrated this time.
The system correctly recognizes the brain activity patterns of the four imagined motions 90% of the time, which the companies said is the highest rate in the world. According to Honda, the previous best was 66.0% (for recognizing four different kinds of signals).
The system uses an electroencephalograph (EEG) and near-infrared spectroscopy (NIRS) to measure brain activity. The EEG measures electrical signals generated by brain activity, while the NIRS uses near-infrared light to measure changes in cerebral blood flow. Both the EEG and NIRS sensors are attached to the head and require no surgery.
A computer analyzes the changes in brain activity reported separately by the EEG and NIRS, then compares them with data patterns characteristic of the four motions to determine which motion is being imagined.
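The article does not disclose the actual algorithm, but the pattern-matching step it describes can be illustrated with a toy sketch: fuse feature vectors from the two sensors and pick the motion whose stored reference pattern is closest. All names, templates and numbers here are invented for illustration only.

```python
# Hypothetical sketch of the pattern-matching step described above.
# Not Honda/ATR's actual method: a simple nearest-template classifier
# over fused (averaged) EEG and NIRS feature vectors.
import math

MOTIONS = ["left_hand", "right_hand", "both_legs", "tongue"]

# Made-up reference patterns, one per motion. Real features would come
# from EEG voltage measurements and NIRS blood-flow changes.
TEMPLATES = {
    "left_hand":  [0.9, 0.1, 0.2, 0.1],
    "right_hand": [0.1, 0.9, 0.2, 0.1],
    "both_legs":  [0.2, 0.2, 0.9, 0.1],
    "tongue":     [0.1, 0.1, 0.2, 0.9],
}

def fuse(eeg_features, nirs_features):
    """Combine the two sensor streams into one feature vector
    (here, a simple element-wise average)."""
    return [(e + n) / 2 for e, n in zip(eeg_features, nirs_features)]

def classify(eeg_features, nirs_features):
    """Return the motion whose template is nearest (Euclidean distance)
    to the fused measurement."""
    fused = fuse(eeg_features, nirs_features)

    def distance(motion):
        return math.sqrt(sum((f - t) ** 2
                             for f, t in zip(fused, TEMPLATES[motion])))

    return min(MOTIONS, key=distance)

# A measurement resembling the "left_hand" pattern:
print(classify([0.8, 0.2, 0.1, 0.0], [1.0, 0.0, 0.3, 0.2]))  # left_hand
```

In practice such a system would use far richer features and a trained statistical classifier; the sketch only shows why combining two independent measurements of the same brain activity can sharpen the decision.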
The three companies co-developed the method of measuring changes in brain activity using data acquired from the EEG and NIRS in parallel, as well as the technology to extract, from these two different kinds of data, the information needed to determine which motion is being imagined. This is the first technique of its kind in the world, the companies said.
The companies plan to increase the number of recognizable imagined motions, currently only four. Their goal is to develop, for example, a system that opens a car trunk when the driver merely thinks "I want to open the trunk," or a system that makes a robot water plants simply by wishing it to do so.
"We are still in the phase of studying the basic technology at the moment," Honda Research Institute said. The companies said there are still a number of hurdles to overcome.