The journey
I’ve been obsessed with the human brain.
As a software engineer looking to learn more about this incredible organ, I’ve found myself using the tools I know (code) to try to understand what’s going on inside our minds.
My early experiments involved visualizing brainwaves (EEG) in the browser. My thinking was that if I could visualize it, I could better understand it. From there, I could start exploring behavioral experiments. And that is how my journey of connecting the brain to the browser began.
This journey has allowed me to try many different brain-computer interfaces, including OpenBCI, NeuroSky, and Muse, to name a few. I explored many data acquisition and transmission approaches such as Node and serial port, Node and Bluetooth, Web Sockets, WiFi, MQTT and Web Bluetooth. And I built many prototypes using UI technologies such as Angular, React, RxJS, vanilla JavaScript, dozens of data visualization libraries for SVG, canvas, WebGL, and even pure CSS. It has been a roller coaster, but every new combination I tried steered me closer to better results and, most importantly, made me a better engineer.
The experiment
Of all the crazy ideas about the potential uses of brainwaves, I kept going back to the thought of “mind-controlling” stuff. By this, I mean attempting to steer the brain’s frequencies and use those changes to detect intent. For example, it is known that the alpha waves produced by your brain increase during meditation, while beta waves are associated with active thinking and concentration.
During the first experiment, I was able to sharpen a blurred image based on concentration levels. The more you focus, the clearer the image gets.
But what if we could use meditation levels to control a sequence of images? Given a video of a flower blooming (starting with a bud), could we make the flower bloom by reaching a deep state of mindfulness while meditating? That got me thinking: if we could map certain mind states to UI controls on the web, like a video player, that would be a fun experiment.
Let’s go through how we can capture brainwaves, get meditation and attention levels, send the data to the browser, and map it to the playback of a video element. In other words, let’s build a mind-controlled HTML5 video!
- Brain-Computer Interface: NeuroSky MindWave Mobile
- Data Acquisition & Transmission: Bluetooth / Node / Web Sockets
- User Interface: Angular / RxJS / HTML5 Video
The brain-computer interface

NeuroSky MindWave Mobile
The MindWave is a single-channel Bluetooth EEG headset. One of the reasons I like experimenting with the MindWave is that, besides being very affordable, it features something many other headsets lack: hardware-embedded algorithms called eSense. These algorithms rate attention and meditation levels from 0 to 100 by analyzing the signal in the time and frequency domains, including alpha and beta waves. These are exactly the metrics we’ll use to control UI elements such as the HTML5 video element.
The data acquisition & transmission
I always like to start projects that interact with brain-computer interfaces by getting the hardware set up and ready to stream. To do that, let’s download the ThinkGear Connector. The ThinkGear Connector runs continuously in the background. It keeps an open socket on the local user’s computer, allowing applications to connect to it and receive information from the MindWave headset.
You can find the connector on the NeuroSky website. Make sure to download v4.1.8. It only works with that version (don’t ask me why).
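If you’re curious about what “an open socket” means here: the connector exposes a plain TCP socket on localhost that streams newline-delimited JSON. The sketch below talks to it directly with Node’s net module. The port (13854) and the configuration payload are assumptions based on the commonly documented ThinkGear Socket Protocol defaults and may vary, which is one more reason to use the ready-made library we’ll pull in next.

const net = require('net');

// Assumption: 13854 is the ThinkGear Connector's default local port.
const socket = net.connect(13854, '127.0.0.1', () => {
  // Ask the connector for JSON output (keys assumed from the ThinkGear Socket Protocol).
  socket.write(JSON.stringify({ enableRawOutput: false, format: 'Json' }));
});

// Each chunk contains one or more newline-delimited JSON samples.
socket.on('data', chunk => {
  chunk.toString().split('\n').filter(Boolean).forEach(line => {
    try {
      console.log(JSON.parse(line)); // e.g. { eSense: {...}, eegPower: {...}, ... }
    } catch (err) {
      // Ignore lines split across chunks in this rough sketch
    }
  });
});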
Next, we’ll use Node. Luckily, there’s already a Node library for accessing the connector’s open socket. Other than that, we’ll be using RxJS and Socket.io.
const { createClient } = require('node-thinkgear-sockets');
const { fromEvent } = require('rxjs/observable/fromEvent');

const client = createClient();
client.connect();

fromEvent(client, 'data')
  .subscribe(eeg => console.log(eeg));
Before running the code above, make sure to turn on the headset! You should see a blue light indicating the headset is ready to be paired. Now, when you run the code, the connector should automatically pair with the headset, and Node will be able to connect to it. The output in your terminal should display a stream of samples like this one:
{
  eSense: { attention: 25, meditation: 5 },
  eegPower: {
    delta: 207587,
    theta: 6366,
    lowAlpha: 2436,
    highAlpha: 2184,
    lowBeta: 2151,
    highBeta: 5612,
    lowGamma: 458,
    highGamma: 139
  },
  poorSignalLevel: 50
}
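As far as I know, the library doesn’t ship type definitions, so if you prefer typed code you can describe the sample yourself. Here’s a rough interface based on the payload above (the name EegSample is mine); it can come in handy later on the Angular side, where the stream is otherwise untyped.

interface EegSample {
  eSense: {
    attention: number;   // 0–100, computed by the headset
    meditation: number;  // 0–100, computed by the headset
  };
  eegPower: {
    delta: number;
    theta: number;
    lowAlpha: number;
    highAlpha: number;
    lowBeta: number;
    highBeta: number;
    lowGamma: number;
    highGamma: number;
  };
  poorSignalLevel: number; // roughly: 0 is a clean signal, higher means noisier contact
}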
It is important to note that new data arrives every second. Now we just need to add Socket.io and start emitting the data to the browser as we receive it from the headset. Two important things to notice are the socket port (4501) and the socket event name (metric/eeg). We’ll need these later.
const { createClient } = require('node-thinkgear-sockets');
const { fromEvent } = require('rxjs/observable/fromEvent');
const io = require('socket.io')(4501);

const client = createClient();
client.connect();

fromEvent(client, 'data')
  .subscribe(eeg => io.emit('metric/eeg', eeg));
That’s all the code we’ll need in order to get the data and send it to the browser.
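Before jumping into the UI, it can be useful to sanity-check the stream from the browser side. Here’s a minimal sketch, assuming the Node script above is running and socket.io-client is available on the page (via a bundler or a script tag):

// Connect to the Socket.io server we started on port 4501
const socket = io('http://localhost:4501');

// Log the eSense metrics of every sample as it arrives (roughly one per second)
socket.on('metric/eeg', eeg => console.log(eeg.eSense));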
The user interface
Let’s start by creating a new project. If you are using the Angular CLI, which I highly recommend, just enter the following commands in the terminal. These commands will create a new Angular project, serve the app locally, and add a component shell called MindVideoPlayerComponent.
> ng new mind-controlled
> cd mind-controlled && ng serve
> ng generate component mind-video-player
The new component should look something like this.
import { Component } from '@angular/core';

@Component({
  selector: 'mind-video-player',
  templateUrl: './mind-video-player.component.html',
  styleUrls: ['./mind-video-player.component.css']
})
export class MindVideoPlayerComponent { /* ... */ }
Let’s start by adding some properties to the MindVideoPlayerComponent class. We’ll need the name of the metric we’ll be using to control the video player (in this case meditation). We’ll also need video metadata including url, type, length in seconds and fps (frames per second). We’ll use these values later in our template and for some of the business logic.
metricName = 'meditation';

video: any = {
  url: './assets/videos/flower.mp4',
  type: 'video/mp4',
  length: 5,
  fps: 60
};
Next, let’s bring some RxJS observable types and the Socket.io client to our component.
import { fromEvent } from 'rxjs/observable/fromEvent';
import * as io from 'socket.io-client';
The socket client uses the event pattern. Let’s create an observable from its events by passing the Socket.io client as the event target and metric/eeg as the event name. We’ll call the observable stream$. The dollar sign suffix is just for semantics and indicates that we’re dealing with an observable type.
stream$ = fromEvent(io('http://localhost:4501'), 'metric/eeg');
Then we import the map operator and pipe it into the stream observable in order to pick the metric we want to work with (in this case, meditation).
import { map } from 'rxjs/operators';

metricValue$ = this.stream$.pipe(
  map(({ eSense }) => eSense[this.metricName])
);
Now we have a stream of meditation values ranging from 0 to 100, arriving every second. The next step is to animate the video playback manually. There’s no built-in way to animate the playback between two points in time, because this is not how people traditionally interact with video on the web. So we’ll need to get a little creative in order to tackle this next challenge.
The idea is to create a range of values between the previous metric value and the latest one, and then step the video playback through every value in that range over one second, the same interval at which new values arrive from the server. By the time the animation completes, new data has arrived and we do it all over again.
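To make that concrete, here’s roughly what a single step looks like in isolation, using the linspace library we’ll lean on in a moment (output values rounded for readability):

const linspace = require('linspace');

// 60 evenly spaced values between the previous and the latest meditation level
const steps = linspace(20, 80, 60);
// → [20, 21.02, 22.03, ..., 78.98, 80]

// Emitting one of these values every 1000 / 60 ≈ 16.7 ms walks through the whole
// range in about one second, right when the next sample is due to arrive.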
For this, we’ll need some observable types and operators, as well as the linspace library.
import { fromEvent } from 'rxjs/observable/fromEvent';
import { interval } from 'rxjs/observable/interval';
import { from } from 'rxjs/observable/from';
import { switchMap, scan, zip } from 'rxjs/operators';
import linspace from 'linspace';

currentTime$ = this.metricValue$.pipe(
  scan(([, prev], next: number) => [prev, next], [0, 0]),
  switchMap(([prev, next]) =>
    from(linspace(prev, next, this.video.fps)).pipe(
      zip(interval(1000 / this.video.fps), metricValue =>
        timeMapper(metricValue, this.video)
      )
    )
  )
);
Let's break this down line by line.
1) We create a new class property and assign it the metricValue$ observable with some transformations applied via pipeable (lettable) operators. Let’s say we get the values 0, 20, 80 and 50, with one second between each value.

2) We pipe the scan operator in order to access the previous metricValue as we get a new value. Then we return an array with the previous value at index 0 and the next value at index 1. Starting from the [0, 0] seed, the result would be [0, 0], [0, 20], [20, 80] and finally [80, 50].

3) We then pipe switchMap so that whenever a new pair arrives, the in-flight inner observable is dropped and we switch to the latest values. By destructuring the array mentioned previously, we can name its values based on their index positions.
4) We return an observable range of 60 values going from the previous metric value to the latest metric value. For example, if the meditation level goes from 20 to 80, the range will roughly be: [20, 21, 22, 23, … , 77, 78, 79, 80]. The length of the range is 60, so the transition runs at 60 frames per second (fps). For the range operation, we’ll use a library called linspace that does exactly what we are looking for.

5) We zip the range observable with an interval so its values are emitted one at a time, spread over a period of one second.
6) We transform each metricValue into its relative value in seconds, since we plan to bind this observable to the currentTime property of the video (see the helper sketch just below).
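For reference, the mapping in step 6 is done by two small helpers, clamp and timeMapper, which you’ll see again in the full class below. A clamped 0–100 metric value is scaled to the video’s length, so for our 5-second clip a meditation level of 80 lands at the 4-second mark:

// Keep the metric within the 0–100 range the headset is supposed to report
const clamp = metricValue => Math.min(Math.max(0, metricValue), 100);

// Map a 0–100 metric value to a playback position (in seconds) within the video
const timeMapper = (metricValue, { length }) => length * clamp(metricValue) / 100;

timeMapper(80, { length: 5 }); // → 4 seconds into the video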

Here's the full component class.
import { Component } from '@angular/core';
import { fromEvent } from 'rxjs/observable/fromEvent';
import { interval } from 'rxjs/observable/interval';
import { from } from 'rxjs/observable/from';
import { switchMap, scan, map, zip } from 'rxjs/operators';
import * as io from 'socket.io-client';
import linspace from 'linspace';

const clamp = metricValue => Math.min(Math.max(0, metricValue), 100);
const timeMapper = (metricValue, { length }) => length * clamp(metricValue) / 100;

@Component({
  selector: 'mind-video-player',
  templateUrl: './mind-video-player.component.html',
  styleUrls: ['./mind-video-player.component.css']
})
export class MindVideoPlayerComponent {
  metricName = 'meditation';

  video: any = {
    url: './assets/videos/flower.mp4',
    type: 'video/mp4',
    length: 5,
    fps: 60
  };

  stream$ = fromEvent(io('http://localhost:4501'), 'metric/eeg');

  metricValue$ = this.stream$.pipe(
    map(({ eSense }) => eSense[this.metricName])
  );

  currentTime$ = this.metricValue$.pipe(
    scan(([, prev], next: number) => [prev, next], [0, 0]),
    switchMap(([prev, next]) =>
      from(linspace(prev, next, this.video.fps)).pipe(
        zip(interval(1000 / this.video.fps), metricValue =>
          timeMapper(metricValue, this.video)
        )
      )
    )
  );
}
Lastly, we’ll switch to our component template file. As far as markup goes, we’ll only need a video element with its source element.
<video muted [currentTime]="currentTime$ | async">
  <source [src]="video.url" [type]="video.type" />
</video>
At this point we can start binding the component’s class properties to the DOM properties of the elements in our template. This is where Angular really shines.
Once again, let’s go through it line by line.
1) We bind the currentTime DOM property of the native HTML5 video player to the currentTime$ observable and pipe it through async so Angular handles the observable subscription for us. It works like magic.
2) We bind the source element’s src and type DOM properties to the video object in our component class.
That’s all the UI code we’ll need for the experiment. You can find the complete project on GitHub.
The outcome
Now that we’ve put all the pieces together, let’s see how this works in practice. The following video shows real meditation levels, recorded with eyes closed in a quiet environment, a setting that helps achieve better meditation results.
I’ve demoed this experiment at meetups and conferences around the world. I’m always impressed by how easy it is for some people, and how difficult for others, to reach high meditation and concentration levels. The human mind never ceases to amaze me.
I’ve been on a journey to connect the brain to the web. And as this journey continues, I’m excited about what’s to come. I look forward to seeing the amazing things we can do together with our minds and a little bit of JavaScript.