By adding electrodes and a sliver of electronics to a pair of earbuds, Wisear can make your music experience a lot more hands-free. Clench your teeth twice to pause a track, or three times to skip to the next tune: without making a noise, a hand gesture or much visible movement at all, the technology lets you interact with music players or AR/VR headsets without touching a single switch. The founders envision this being particularly helpful when your hands are full, or when it's too loud for typical voice commands.
The company today revealed that it has raised a total of €2 million (around $2.5 million), with the goal of licensing its technology to existing headset and headphone manufacturers. The round was led by Paris Business Angels and Kima Ventures, with the support of BPI France.
Wisear showed me its neural interface: using the aforementioned electrodes to record brain and facial activity, its patent-pending AI technology transforms those signals into controls that let the user take action. The company is pretty skeptical of its competitors, and suggests that other “control by thought” startups are trying to pull the proverbial wool over our eyes.
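For readers curious what “transforming signals into controls” typically involves, here is a minimal sketch of such a pipeline, assuming an EMG-style electrode feed. Wisear hasn’t published its approach, so the sampling rate, filter band, function names and thresholding below are illustrative assumptions, not the company’s actual method.

```python
# Illustrative only: Wisear's real pipeline is unpublished. This sketches
# the common shape of an electrode-to-control chain: band-pass filter the
# raw signal, compute a smoothed "effort" envelope, then threshold it.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 250  # assumed electrode sampling rate in Hz

def bandpass(raw: np.ndarray, lo: float = 20.0, hi: float = 100.0) -> np.ndarray:
    """Keep the band where jaw-muscle (EMG) activity typically dominates."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
    return sosfiltfilt(sos, raw)

def envelope(filtered: np.ndarray, win: int = 25) -> np.ndarray:
    """Rectify and smooth the signal into a rough measure of muscle effort."""
    kernel = np.ones(win) / win
    return np.convolve(np.abs(filtered), kernel, mode="same")

def activity(raw: np.ndarray, threshold: float) -> np.ndarray:
    """Boolean per-sample stream: is the muscle active right now?"""
    return envelope(bandpass(raw)) > threshold
```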
“Anyone that tells you today that they’re doing thought control or mind control, or whatsoever, is basically twisting the truth for you,” explains Yacine Achiakh, co-founder at Wisear. “If they really have something, then honestly take all your money and give it to them, because it will revolutionize everything. It was quite frustrating for us: we realized that people claiming mind control had demos that would only work in a very specific setup, where there is no noise around, people are not moving, it’s sunny outside and it’s the right temperature.”
To overcome the “it works in the lab” syndrome, the company went back to the drawing board and built a new stack from off-the-shelf components. The idea is to build a prototype that works well enough to show off, then license the tech to headphone and AR/VR headset manufacturers.
“We realized that the hardest part in trying to do anything brain-based was actually generalizing it across users and making it work in any environment. We took a step back and decided that the neural interface would first be based on muscle and eye activity. The main controls we have are based on jaw activity,” says Achiakh. “We have sensors on the earpiece that can capture your jaw muscle movement and transform it into controls. You don’t need to make any noise whatsoever. Our goal for 2022 is to have two controls: a double and a triple clench of the jaw. The goal is to scale this to 12 controls in the next three years.”
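Continuing the hypothetical sketch above, telling a double clench from a triple one is then a matter of counting activity bursts that land close together in time. The gap length and the command mapping below are invented for illustration; only the two controls themselves, double and triple clench, come from Achiakh.

```python
# Hypothetical continuation of the sketch above: group nearby bursts of
# muscle activity into one gesture and count the clenches it contains.
import numpy as np

def count_clenches(active: np.ndarray, fs: int = 250, max_gap_s: float = 0.4) -> int:
    """Count clench onsets that follow each other within max_gap_s seconds."""
    onsets = np.flatnonzero(np.diff(active.astype(int)) == 1)  # rising edges
    if onsets.size == 0:
        return 0
    count = 1
    for prev, cur in zip(onsets, onsets[1:]):
        if (cur - prev) / fs <= max_gap_s:
            count += 1
        else:
            break  # too long a pause: later bursts start a new gesture
    return count

def to_command(n: int) -> str | None:
    """Map clench counts to the two controls planned for 2022."""
    return {2: "pause/play", 3: "next track"}.get(n)
```

On a stream from the `activity()` helper above, `to_command(count_clenches(active))` would yield `"pause/play"` for a double clench and `"next track"` for a triple.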
Achiakh showed off the company’s tech on a video call last week, and it was, in a word, impressive. The headphones were not confused by noise, movement or anything else he did while speaking with me. When he bit down on his teeth (jaw clenching, you might call it), the audio player paused and resumed the demo music.
The tech isn’t quite ready for prime time yet, but the success rate is rather high.
“We are building the first technology that really works for everyone. At our booth at CES, we managed to have the demo work for around 80% of people who tried it — and we are working to improve that further,” says Achiakh. “We are building the only neural interface that can work today. Muscular activity is the real new interface that you can build in 2022.”