NextMind is building a real-time brain-computer interface

NextMind is developing a brain-computer interface that translates signals from the visual cortex into digital commands. We tried NextMind’s device, which lets you input commands into computers and AR/VR headsets with your visual attention. Eventually, the Paris-based startup wants to let you do the same with your visual imagination. We spoke with NextMind CEO Sid Kouider ahead of CES 2020, where his company is unveiling a dev kit shipping to select developers and partners this month for $399. After the early access period, a second limited run (waitlist) of dev kits will begin shipping in Q2 2020.

NextMind is part of a growing number of startups building noninvasive neural interfaces that rely on machine learning algorithms. In September 2019, Facebook acquired Ctrl-labs, which was developing an electromyography wristband that translates musculoneural signals into machine-interpretable commands. NextMind is also developing a noninvasive device, but it’s an electroencephalogram (EEG) worn on the back of your head, where your brain’s visual cortex is located.

“We are really focalizing directly on the cortex,” Kouider told VentureBeat. “We decode neural activity in the cortex noninvasively. And the goal is to reach real-time interaction by using just the brain directly.”

Two tracks

NextMind’s device requires that you be actively looking at something, because only then does it register in your visual cortex. Any object you perceive induces a specific response in your visual cortex, and that neural response shows up as a distinctive fluctuation in the EEG. Your visual cortex does not just receive input from your eyes; it also amplifies the firing of neurons for the features you are intentionally attending to. NextMind itself is currently made up of 15 people spread across software, machine learning, hardware, and game development.

“We have a pretty good understanding of how the brain works and especially how visual consciousness, perception, and attention work. This is what allowed us to invent this new approach that we call digital neurosynchrony. We use your top-down attention as a controller. So when you focalize differentially towards something, you then generate an intention of doing so. We don’t decode the intention per se, but we decode the output of the intention.”

As you look around and focus more on an object, “the output of your decision is going to be to amplify this information,” Kouider continued. “And then we know that you want to move that specific content or activate that specific visual content. There’s a neurosynchrony between the object and your brain. There’s a resonance between them. The more you focalize your attention, the more the resonance is going to increase, and the more the machine learning decoding is going to increase. Then we have pretty good evidence that this is the one you want to move.”

This is why Kouider doesn’t want you to call his device an eye tracker. Eye trackers don’t measure intent. Kouider held up a glass. As I look at him, he can’t tell whether I’m looking at his face or at the glass. But if I’m wearing a NextMind device, it can.

“We are decoding what you see around you, but it’s not like an eye tracker. It’s really like your visual focus. We are bringing our first product now to market, which is the first real-time brain computer interface.”

NextMind’s device doesn’t work when your eyes are closed, but a future version will. NextMind is working on two tracks in parallel. Visual intent is merely the first track.

“The second track is to decode visual imagination,” said Kouider. “It turns out that the visual cortex is the region which is both the input of what you receive from the external world, but it’s also the output of your memories, of your imagination, of your dreams. The same neurons in the visual cortex that are giving you the visual consciousness are the neurons that are used for processing information in the outside world.”

This won’t require a different device, Kouider claims. The two tracks will be available in tandem on the same hardware. Different software and algorithms will simply handle the different tasks.

“And in between those two extremes, the reception of information and imagination, there’s visual attention,” he continued. “The fact that you can, in a top-down manner, control whether you’re looking at this, or me, or my face. You can enhance some neurons that are specific to this from other neurons that are specific to my face. Visual attention is really very top-down. So it’s controlled by other brain regions like the prefrontal cortex. It’s used to enhance some pretty selective neurons and that’s what we are decoding here.”

The device

Before founding NextMind in September 2017, Kouider was a professor of cognitive neuroscience. He published papers in Science and Nature about visual consciousness and the brain. His work primarily focused on populations that couldn’t speak or express what they were thinking. His lab showed that babies were conscious of things around them and that people could learn from auditory content while they were asleep. After building up expertise in decoding neural signals noninvasively, Kouider wanted to build a product that could do it all in real time.

“When we do science, we average across a lot of trials. We do the analysis offline,” explained Kouider. “The goal of NextMind was really to do the same thing — decode visual consciousness from the visual cortex, but do it in real time. By doing it in real time, we could get to brain-computer interactions. And our goal really is to democratize it — in the sense that we want to be able to use information from the cortex, to decode it, and to have you use it in interaction with any display.”

NextMind’s device, which weighs about 60 grams, features eight electrodes that measure your brain activity. The startup settled on eight because that was the smallest number that didn’t risk losing any potential data. Wearing electrodes isn’t exactly comfortable, but a smaller design, potentially with fewer electrodes, is already in the works. The real breakthrough here is the material of the electrodes themselves, which captures more data for the machine learning algorithms.

“Here we really faced a machine learning problem,” said Kouider. “It pushed us to first innovate on this new material that has a much better sensitivity. It’s basically like an EEG, but this allows us to improve the SNR by about four times compared to clinical EEG, in addition to avoid having any gel. If you have an EEG at the hospital you need gel. On each electrode we have a microchip. We process the analog information directly.”

The electrodes are shaped like a comb so they can go through your hair and reach your scalp to get a good signal. (Kouider, who is bald, shaved his head soon after he started NextMind.)

“It’s supposed to work immediately, but it’s like using a mouse for the first time,” said Kouider. “So you need to learn slowly to sense your brain in action.”

Before I could use the device, which was hooked up to a laptop, I had to fasten it in such a way that NextMind’s software determined it was getting a high-quality signal. Then I had to go through a calibration and training session to build a model of my neural profile. This mainly consisted of focusing on three green lines that moved closer together to form a triangle. The triangle would appear on different objects on the screen. After a few minutes, NextMind had generated somewhere between 1 and 10 megabytes of data representing my neural profile. During the calibration process, I was asked to stay still and not talk so the system could build a good model; otherwise, the demos would not have been much fun.
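For a rough sense of what a calibration step like this could involve, here is a minimal sketch, entirely my own illustration rather than NextMind’s pipeline: labeled EEG epochs are collected while the user fixates each training target, and a simple per-user classifier (the “neural profile”) is fit to them. The electrode count matches the dev kit, but the sampling rate, epoch length, trial counts, and choice of classifier are all assumptions.

```python
# Hypothetical calibration sketch -- not NextMind's pipeline. Shapes, sampling
# rate, and the classifier are assumptions chosen for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

N_ELECTRODES = 8        # the dev kit has eight electrodes
FS = 250                # assumed EEG sampling rate (Hz)
EPOCH_SECONDS = 1       # assumed length of one calibration epoch
N_TARGETS = 3           # e.g. three on-screen calibration triangles
N_TRIALS = 120          # labeled calibration trials

rng = np.random.default_rng(0)

# Stand-in for real recordings: one flattened (electrodes x samples) epoch per trial.
X = rng.normal(size=(N_TRIALS, N_ELECTRODES * FS * EPOCH_SECONDS))
y = rng.integers(0, N_TARGETS, size=N_TRIALS)   # which target was fixated

# Fit the per-user decoder ("neural profile") on the labeled epochs.
decoder = LogisticRegression(max_iter=1000).fit(X, y)

# Cross-validated accuracy is one way to judge whether the profile is good enough.
print(cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean())
```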

Demos

The first use case I tried was a TV demo that NextMind has shown before (see the above video). Using NextMind’s device, I could control a TV by focusing on the green triangles in various areas of the custom TV user interface. I could change channels, play or pause the content, and mute or unmute the sound. I did this all just by focusing my attention on the corresponding green triangles.

The second demo was a platform game where I controlled a small cube. (I don’t have video of this, but picture a simple platform game.) I had a joystick to move and jump, but I could shoot enemies and push objects by focusing my attention. Any time there was a green triangle on the screen, I focused my attention on it to perform an action with my mind.

The third demo let me input a four-digit number with my attention. It was a slow process to focus on each number one by one. (Kouider later showed that he could do it faster, since he had more practice.)

The fourth demo was a modified version of the classic game Duck Hunt. Using my visual attention was much easier than using the gun controller. In fact, it was so easy that I tried to make it harder by focusing on the ducks out of the corner of my eye. “As long as it ends up in your visual cortex, in principle, it should be decodable,” said Kouider. It was slower, but it still worked.

I took off my glasses. I couldn’t see any of the green triangles on any of the ducks. “It’s not only about the triangle, of course,” said Kouider. “The triangle is just the focus.” Without my glasses on, sometimes the wrong duck got killed. This is because the system works using an evidence accumulator. Sometimes it’s too liberal and can conclude you’re focusing on the wrong object.
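To make the idea concrete, here is a toy evidence accumulator, my own sketch and not NextMind’s code: each frame, a decoder emits a score per candidate target, the scores accumulate over time, and the first target to cross a threshold is selected. A lower threshold means faster decisions but a more “liberal” accumulator, which is exactly how the wrong duck ends up getting shot.

```python
# Toy evidence accumulator -- my illustration, not NextMind's implementation.
import numpy as np

def accumulate(frame_scores, threshold=5.0, decay=0.9):
    """frame_scores: iterable of per-frame score arrays, one score per target."""
    evidence = None
    for frame, scores in enumerate(frame_scores):
        scores = np.asarray(scores, dtype=float)
        evidence = scores if evidence is None else decay * evidence + scores
        if evidence.max() >= threshold:           # lower threshold = faster, more "liberal"
            return int(evidence.argmax()), frame  # chosen target, frames it took
    return None, None                             # never confident enough

# Toy stream of decoder scores: target 2 is (weakly) the attended one.
rng = np.random.default_rng(1)
stream = (rng.normal(0.0, 1.0, size=4) + np.array([0, 0, 0.6, 0]) for _ in range(200))
print(accumulate(stream))
```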

The fifth demo, where I could change the colors of a lamp, was by far the most interesting. But first, this required some recalibration. I had to focus on one of the color blocks during the training session, and then I could switch the colors of the lamp in real time by focusing on each corresponding block.

It’s easy to envision how this could work in the second track. A developer could take a picture of a lamp, throw a shader on it, and attach an action to the image (like changing the light color of the physical lamp). One day, owners of a NextMind device might be able to do this themselves and then simply imagine the picture they took. Software would take care of the rest.

How it all works

The trick here is that NextMind’s device isn’t really detecting the difference in color; it’s detecting the little blocks representing each color. What you can’t see clearly in the video is that there’s a different pattern on each of the blocks. That’s what the device is decoding from my visual cortex. Thus, a colorblind person could theoretically do this demo too. “The visual cortex contains information about shape, orientations, borders, colors, and movement,” explained Kouider.

The same goes for the TV application. There are small grains in the background of each of the user interface buttons where the green triangles show up. The same goes for the pin pad demo. These grains appear and disappear slowly at different frequencies. Each number has a different temporal pattern. “We know that your brain is in synchrony with five and not six or four, because basically it’s responding preferentially to that specific pattern,” said Kouider.
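As described, this is close in spirit to the frequency-tagging (SSVEP-style) approach used in other visual brain-computer interfaces. A minimal sketch of that principle, with invented flicker frequencies and simulated EEG rather than anything NextMind has disclosed, is to correlate the recorded signal against each candidate temporal pattern and pick the best match:

```python
# Minimal illustration of decoding which temporal pattern the brain is "in
# synchrony" with, in the spirit of frequency tagging (SSVEP); the frequencies,
# sampling rate, and simulated EEG are invented, not NextMind's parameters.
import numpy as np

FS = 250                               # assumed EEG sampling rate, Hz
T = np.arange(FS * 4) / FS             # a 4-second analysis window
candidate_freqs = [1.0, 1.5, 2.0]      # hypothetical per-target flicker rates, Hz

# One reference waveform per on-screen target (e.g. per pin-pad digit).
references = [np.sin(2 * np.pi * f * T) for f in candidate_freqs]

# Fake EEG: noise plus a weak component locked to the attended (1.5 Hz) target.
rng = np.random.default_rng(2)
eeg = 0.3 * np.sin(2 * np.pi * 1.5 * T + 0.4) + rng.normal(0.0, 1.0, size=T.size)

# Score each target by its correlation with the signal and pick the best match.
scores = [abs(np.corrcoef(eeg, ref)[0, 1]) for ref in references]
print("decoded target:", int(np.argmax(scores)), "scores:", np.round(scores, 3))
```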

“There’s going to come a point pretty soon, I believe, where we’re going to be able to even decode the shape by itself. You won’t need color and you will barely need the pattern.”

Why barely? “The pattern is helping us. I mean, your brain does it. It can differentiate between five and four. [The number] doesn’t have to slightly blink. So if your brain can do it, it means that the information is in there.”

In short, NextMind is keeping the pattern around so the evidence accumulator can build up more quickly. Eventually, though, you’ll be able to lean in and out as you please, Kouider hopes.

“Maybe not with the SDK version one, but where you can focalize and you can also decide not to do anything. You can just decide how much you want to press it. It’s a little bit like putting your finger on the keyboard but not pressing.”

The demos were by no means perfect, but there was no doubt in my mind that the technology worked. That said, NextMind is still smoothing out the kinks and working on improving its hardware, software, and machine learning.

“We improved the hardware. We improved the machine learning, like the deep neural networks that we’re using. And it was a lot about improving the cognitive aspect like how to focalize your attention better, which kind of information needs your attention, if you want to make a flickering, what is the preferential frequency for your brain? These kinds of things.”

Kouider says his team spent a lot of time and effort developing cognitive tricks to tap into. Building faster and more efficient machine learning was part of the equation, but so was figuring out inventive ways and shortcuts to get the intended result. Neither of those, however, was the biggest obstacle.

“Hardware right now is the main limitation. But it’s not just our problem, it’s the problem of any imaging. You don’t want to put an MRI around people, which has a much better resolution but a crappy temporal resolution.”

As any machine learning startup will tell you, it’s ultimately your data that limits how far you can get.

VR demo

NextMind designed its device so you can simply clip it onto the back of an Oculus VR headset. This works so well that the company won two awards at CES 2020 this week: “Best of Innovation in VR/AR” and “Honoree in wearable technologies.” Based on the demo I tried, these wins do not surprise me.

NextMind device on an Oculus VR headset

I had to go through another training session for a VR demo that involved focusing on the center of alien brains to explode them. Again, no talking during the calibration process. Calibration should eventually take less time because the system will estimate the confidence of the model in real time and stop when it’s satisfied.
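One plausible way to implement that kind of early stopping, sketched purely as my own guess (the accuracy target, batch size, and classifier are invented), is to keep collecting labeled trials and stop as soon as a cross-validated confidence estimate clears a threshold:

```python
# Hypothetical confidence-based early stopping for calibration -- my own sketch,
# not NextMind's implementation. Thresholds and the classifier are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def calibrate_until_confident(trials, target_acc=0.9, check_every=10, max_trials=200):
    """trials: iterable of (feature_vector, target_label) calibration epochs."""
    X, y = [], []
    for i, (features, label) in enumerate(trials, start=1):
        X.append(features)
        y.append(label)
        if i >= 30 and i % check_every == 0:      # need a minimum amount of data first
            acc = cross_val_score(LogisticRegression(max_iter=1000),
                                  np.array(X), np.array(y), cv=3).mean()
            if acc >= target_acc or i >= max_trials:
                model = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))
                return model, acc, i              # stop as soon as we're confident
    raise ValueError("trial stream ended before calibration finished")

# Toy usage with separable synthetic epochs: two targets, 20-dimensional features.
rng = np.random.default_rng(3)
def fake_trials():
    while True:
        label = int(rng.integers(0, 2))
        yield rng.normal(loc=1.5 * label, scale=1.0, size=20), label

model, acc, n_trials = calibrate_until_confident(fake_trials())
print(f"stopped after {n_trials} trials with cross-validated accuracy {acc:.2f}")
```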

Kouider noted that in VR, the resolution isn’t as high, so the objects that you could take action on by focusing your visual attention have to be bigger. Just like in the Duck Hunt game, sometimes I would look at one enemy and another would explode. The evidence accumulator has a tricky job when the resolution is worse and there are multiple enemies on the screen. There were no green triangles this time, but there was a lot of blinking.

“The blinking by itself is not important,” said Kouider. “There has to be a change in the display. It could be a change in color, for instance. Your brain has to process new information. We need to generate a neural response.”

Another object I could explode was a barrel. It didn’t have the same shape as the alien brains, but it was blinking. Kouider hinted that there’s some form of transfer learning happening as you progress in the game.

“Don’t think about this as just a standalone device,” said Kouider. “It’s a modality. It’s an extra modality for people who want to do AR or VR.”

Killer use case

Again, all of NextMind’s demos work. You feel like you’re in control. “It’s not full control, yet,” said Kouider. “That’s what we’re working towards. This is going to come very quickly.”

So far, though, there’s nothing here that you can’t already do yourself faster using your hands or another device. The hunt is on for a killer use case. So the company is shipping dev kits for two reasons. The first is to obtain more data to improve its machine learning algorithms. The second is that NextMind wants developers to try new use cases.

Kouider listed a few that have come out of discussions with developers, investors, and potential partners. The biggest surprise was that NextMind has had interest from autonomous car makers, who envision putting electrodes directly in the car seat. If there’s no steering wheel, passengers could control car features simply by leaning back in their seat, with no need to put their hands anywhere or wear anything.

Then there’s augmented reality. VR is further along, so it makes sense to start there. But what if this EEG technology could be built into a band behind a pair of glasses? “AR glasses are coming,” Kouider said. “So with a smaller version of that, it is going to be a potential use case. And again, it’s an extra modality. You can use it with gesture, you can use it with eye tracking.”

Another use case Kouider mentioned is neuromonitoring. Say you’re a pilot and an alarm goes off to indicate something is wrong. You don’t see it, don’t pay attention to it, and don’t process it. If the device detects that you haven’t acknowledged the alarm in, say, five seconds, it can adapt the alarm by making it more visible, for example. Neuromonitoring could also be used to verify that someone is paying attention to a set of instructions.
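Sketched very loosely, with a hypothetical alarm_attended hook standing in for whatever signal the decoder would actually expose, that adaptive alarm logic might look like this:

```python
# Toy sketch of the adaptive-alarm idea above -- my own illustration. The
# alarm_attended() hook is hypothetical; a real system would get this signal
# from the neural decoder.
import time

def wait_for_acknowledgement(alarm_attended, timeout_s=5.0, poll_s=0.1):
    """Return True if the wearer's brain registers the alarm before the timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if alarm_attended():
            return True
        time.sleep(poll_s)
    return False

def escalate(alarm):
    """Make the alarm more salient; the exact knobs depend on the cockpit UI."""
    alarm["brightness"] = min(1.0, alarm.get("brightness", 0.5) + 0.25)
    alarm["audible"] = True

alarm = {"brightness": 0.5, "audible": False}
if not wait_for_acknowledgement(lambda: False, timeout_s=0.5):   # never acknowledged
    escalate(alarm)
print(alarm)
```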

“But for us it was important to do the control, not the monitoring, because we want to push the algorithms to be as fast as possible.” Plus, neuromonitoring doesn’t measure intent, which is arguably much more interesting.

“The main achievement really here is, for the first time, real-time brain interaction,” said Kouider. “But I really think that in three, four years from now, I’m going to be able to close my eyes and think about my wife or my kids [to show a photo of them on screen]. It’s just going to come. Because in theory it works, it’s just a question of signal. Because we know how this works in the brain, but doing it in real time, that’s really tough.”


