This headset lets you talk to your devices with your thoughts

MIT researchers have created a wearable computer interface that can process words the user verbalises internally but does not actually speak aloud, and can respond silently through the bones of the face.

Electrodes in the device, called ‘AlterEgo’, pick up neuromuscular signals in the jaw and face that are triggered by internal verbalisations — saying words ‘in your head’ — but are undetectable to the human eye. The signals are fed to a machine-learning system that has been trained to correlate particular signals with particular words.
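The article doesn't detail the model, but the pipeline it describes — multichannel electrode signals in, word labels out — maps naturally onto a small sequence classifier. Here is a minimal sketch in PyTorch; the channel count, window length, vocabulary and architecture are all illustrative assumptions, not details from the AlterEgo work:

```python
# Hypothetical sketch of the signal-to-word pipeline described above.
# Channel count, window length, vocabulary, and architecture are assumptions,
# not details from the AlterEgo system.
import torch
import torch.nn as nn

N_CHANNELS = 4      # one per electrode site (assumed)
WINDOW = 500        # signal samples per subvocalised word (assumed)
VOCAB = ["add", "multiply", "one", "two", "three"]  # illustrative subset

class SubvocalClassifier(nn.Module):
    """1-D convolutional classifier over multichannel electrode windows."""
    def __init__(self, n_channels: int, n_words: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.head = nn.Linear(64, n_words)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) -> (batch, n_words) logits
        return self.head(self.features(x).squeeze(-1))

model = SubvocalClassifier(N_CHANNELS, len(VOCAB))
window = torch.randn(1, N_CHANNELS, WINDOW)   # stand-in for real signals
word = VOCAB[model(window).argmax(dim=-1).item()]
print(word)
```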

The device also includes bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear. Because they don’t obstruct the ear canal, the headphones enable the system to convey information to the user without interrupting conversation or otherwise interfering with the user’s auditory experience.

The device is thus part of a complete silent-computing system that lets the user undetectably pose and receive answers to difficult computational problems.

“The motivation for this was to build an IA device — an intelligence-augmentation device,” said Arnav Kapur, a graduate student at the MIT Media Lab who led the development of the new system. “Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”

Once they had selected the four locations along the jaw at which to attach the electrodes, the researchers began collecting data on a few computational tasks with limited vocabularies of about 20 words each. One was arithmetic, in which the user would subvocalise large addition or multiplication problems; another was a chess application, in which the user would report moves using the standard chess numbering system. The electrode data feed a neural network that learns correlations between particular neuromuscular signals and particular words.
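The article doesn't say how the network was trained, but for a closed vocabulary this size it amounts to ordinary supervised classification. A minimal training sketch, with synthetic tensors standing in for recorded electrode windows and their word labels, and a deliberately tiny stand-in model:

```python
# Hypothetical training sketch for a ~20-word task vocabulary.
# Real training would use recorded electrode windows and word labels;
# random tensors stand in for both here.
import torch
import torch.nn as nn

N_CHANNELS, WINDOW, N_WORDS = 4, 500, 20   # assumed dimensions

model = nn.Sequential(                      # tiny stand-in classifier
    nn.Flatten(),
    nn.Linear(N_CHANNELS * WINDOW, 64),
    nn.ReLU(),
    nn.Linear(64, N_WORDS),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

signals = torch.randn(256, N_CHANNELS, WINDOW)   # stand-in recordings
labels = torch.randint(0, N_WORDS, (256,))       # stand-in word indices

for epoch in range(5):
    optimiser.zero_grad()
    loss = loss_fn(model(signals), labels)
    loss.backward()
    optimiser.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```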

Using the prototype wearable interface, 10 subjects spent about 15 minutes each customising the arithmetic application to their own neurophysiology, then spent another 90 minutes using it to execute computations. In that study, the system had an average transcription accuracy of about 92%.
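Transcription accuracy in a study like this is presumably just the share of subvocalised words the system labels correctly. A toy check of that metric; the word lists are invented for illustration, not data from the study:

```python
# Word-level transcription accuracy: correct predictions / total words.
def transcription_accuracy(predicted: list[str], spoken: list[str]) -> float:
    correct = sum(p == s for p, s in zip(predicted, spoken))
    return correct / len(spoken)

# Illustrative values only: 11 of 12 words correct, roughly 92%.
print(transcription_accuracy(
    ["seven", "plus", "three", "equals", "ten", "nine",
     "times", "two", "equals", "eighteen", "four", "plus"],
    ["seven", "plus", "three", "equals", "ten", "nine",
     "times", "two", "equals", "eighteen", "four", "five"],
))  # -> 0.9166...
```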

In fact, Kapur estimates that the better-trained system he uses for demonstrations has an accuracy rate higher than that reported in the usability study.

The researchers are collecting a wealth of data on more elaborate conversations, in the hope of building applications with much more expansive vocabularies. “We’re in the middle of collecting data, and the results look nice,” Kapur said. “I think we’ll achieve full conversation someday.”