The current state of Brain-computer interfaces and their practical applications. Can we control all of the electronic devices with our minds?

A brain-computer interface (BCI) is a bridge for direct communication between the human brain and external electronic devices, letting us control them with our thoughts alone, without any additional controllers or inputs. The technology already has several applications in medicine, but the most interesting ones will come with smart homes and smart cities, once those become more widespread. We would then be able not only to control every device in our homes but also to interact with the numerous devices we come across in daily life. On top of that, coupled with augmented reality glasses or a VR headset, this opens even greater potential for practical applications. It could go as far as replacing smartphones once the technology becomes portable and affordable enough for the larger population.

A team of researchers from Carnegie Mellon University, in collaboration with the University of Minnesota, has achieved a breakthrough in noninvasive robotic device control. Using a noninvasive brain-computer interface (BCI), they have developed the first successful mind-controlled robotic arm able to continuously track and follow a computer cursor.

There is a large field of potential applications for the ability to noninvasively control robotic devices with our thoughts. The most direct ones are in medicine, particularly in helping patients with movement disorders and paralysis.

BCIs have been shown to achieve good performance in controlling robotic devices using only signals from brain implants. “These implants require a substantial amount of medical and surgical expertise to correctly install and operate, not to mention cost and potential risks to subjects, and as such, their use has been limited to just a few clinical cases.”[1] The grand challenge for BCI, therefore, is to develop less invasive, and preferably noninvasive, technology. This would first of all allow paralyzed patients to control their environment or robotic limbs using their own thoughts, and it would bring the technology not only to numerous patients but also to the general population.

Bin He, Trustee Professor and Department Head of Biomedical Engineering at Carnegie Mellon University, achieved a significant breakthrough with his team: "There have been major advances in mind controlled robotic devices using brain implants. It's excellent science," says He. "But noninvasive is the ultimate goal. Advances in neural decoding and the practical utility of noninvasive robotic arm control will have major implications on the eventual development of noninvasive neurorobotics."[2] Using novel sensing and machine learning techniques, He and his team managed to access signals deep within the brain, achieving high-resolution control over a robotic arm through noninvasive neuroimaging and a novel continuous pursuit paradigm. He overcame the noise of EEG signals, leading to a significant improvement in EEG-based neural decoding that facilitated real-time continuous 2D robotic device control. The team published its results in the paper “Noninvasive neuroimaging enhances continuous neural tracking for robotic device control,” which shows that their unique approach enhanced BCI learning speed by nearly 60% for typical tasks and improved continuous tracking by over 500%.
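To make the decoding step more concrete, here is a minimal Python sketch of one common noninvasive-BCI pattern: a linear decoder maps EEG-derived features to a 2D cursor velocity, smoothed over time so the cursor tracks a target continuously. The feature values, weights, and smoothing factor below are placeholders for illustration; the team's actual pipeline (source imaging plus adaptive decoding) is far more sophisticated.

```python
import numpy as np

# Illustrative sketch only: a linear decoder mapping EEG features to a
# 2D cursor velocity, with exponential smoothing to mimic continuous
# pursuit. Weights would normally be fit on calibration data.

rng = np.random.default_rng(0)

N_FEATURES = 16                              # e.g., band-power features from several channels
W = rng.normal(size=(2, N_FEATURES)) * 0.1   # decoder weights (placeholder values)
ALPHA = 0.2                                  # smoothing factor: higher = more responsive, noisier

velocity = np.zeros(2)
position = np.zeros(2)

for step in range(100):                      # one iteration per EEG window
    features = rng.normal(size=N_FEATURES)   # stand-in for real EEG features
    raw_velocity = W @ features              # linear decode: features -> (vx, vy)
    velocity = (1 - ALPHA) * velocity + ALPHA * raw_velocity  # smooth over time
    position += velocity * 0.05              # integrate velocity into position

print("final cursor position:", position)
```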

Apart from medical applications, there are already some early commercial uses for this technology, including the ability to fly a drone using only thoughts and improved identity authentication using ‘passthoughts’. This can be done with BCIs that fit inside an earbud and can pick up thoughts reliably enough.
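As a rough illustration of how a 'passthought' check might work, the sketch below enrolls an EEG feature vector as a template and accepts a new attempt only if its cosine similarity to the template crosses a threshold. The feature size and the 0.9 threshold are assumptions for illustration, not any vendor's actual scheme.

```python
import numpy as np

# Toy "passthought" check: compare a fresh EEG feature vector against an
# enrolled template using cosine similarity. Real systems use far richer
# features and classifiers; all values here are illustrative.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(42)
enrolled_template = rng.normal(size=64)                        # stored at enrollment
attempt = enrolled_template + rng.normal(scale=0.1, size=64)   # noisy re-recording

THRESHOLD = 0.9                                                # assumed cutoff
score = cosine_similarity(enrolled_template, attempt)
print("match" if score > THRESHOLD else "reject", f"(similarity={score:.3f})")
```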

BCIs can have an impact on other devices as well. Facebook, for example, recently acquired CTRL-labs, a brain-computer interface startup, reportedly for between 500 million and 1 billion USD. The company specializes in wrist-worn hardware that picks up nerve signals from the skin using electromyography (EMG) and translates them into movement signals that can be used to control digital devices.

Gaming companies have a significant interest in BCIs as well, coupled with the rise of VR's popularity. The technology could go as far as measuring players' moods and attention levels, in addition to potentially replacing traditional game controllers.

Transcranial direct current stimulation (tDCS) is another technology that goes hand-in-hand with BCIs. It is largely used for improving cognition in both medical and military spheres and is designed to deliver weak electrical currents to the brain. “They're touted as both ways of dealing with medical conditions like fibromyalgia and migraine, as well as boosting users' memories and mental sharpness if coupled with 'brain-training' protocols.”[3] Such devices could enhance both physical and mental performance.

Another interesting application of BCIs is the ability to connect the brain to the internet in real time, turning brainwaves into an Internet of Things (IoT) node on the World Wide Web. This was illustrated in the Brainternet experiment by Adam Pantanowitz, a lecturer in the Wits School of Electrical and Information Engineering. “Brainternet is a new frontier in brain-computer interface systems. There is a lack of easily understood data about how a human brain works and processes information. Brainternet seeks to simplify a person’s understanding of their own brain and the brains of others. It does this through continuous monitoring of brain activity as well as enabling some interactivity,”[4] explains Pantanowitz.

It works by converting electroencephalogram (EEG) signals (brain waves) into an open-source brain live stream. The user wears a powered, mobile, internet-connected Emotiv EEG device for a prolonged period. The Emotiv transmits the EEG signals to a Raspberry Pi, a credit-card-sized computer, which live streams the signals to an application programming interface (code that allows software programs to communicate) and displays the data on a website that acts as a portal. “Ultimately, we’re aiming to enable interactivity between the user and their brain so that the user can provide a stimulus and see the response. Brainternet can be further improved to classify recordings through a smartphone app that will provide data for a machine-learning algorithm. In future, there could be information transferred in both directions – inputs and outputs to the brain,”[5] says Pantanowitz.
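A minimal sketch of that pipeline might look like the following in Python: each EEG sample is read on the Raspberry Pi and pushed to a web API as JSON for the portal site to display. The endpoint URL and the read_eeg_sample() stub are hypothetical placeholders, not the project's actual code.

```python
import json
import time
import urllib.request

# Brainternet-style forwarding loop: a small computer (such as a Raspberry
# Pi) reads EEG samples and pushes them to a web API that a portal site
# can display. URL and sample reader are hypothetical stand-ins.

API_URL = "https://example.com/api/eeg"  # hypothetical portal endpoint

def read_eeg_sample():
    """Stub standing in for one sample from the headset's SDK."""
    return {"timestamp": time.time(), "channels": [0.0] * 14}

while True:
    body = json.dumps(read_eeg_sample()).encode("utf-8")
    req = urllib.request.Request(API_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=5)  # forward the sample to the portal
    time.sleep(0.1)                         # ~10 samples per second, for illustration
```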

Curiously, BCIs can also be used for hands-free, thought-controlled music creation. "I am a musician and neurologist, and I've seen many patients who played music prior to their stroke or other motor impairment, who can no longer play an instrument or sing," says Thomas Deuel, a neurologist at Swedish Medical Center and a neuroscientist at the University of Washington. "I thought it would be great to use a brain-computer instrument to enable patients to play music again without requiring movement."[6] The resulting Encephalophone collects brain signals through a cap and transforms specific signals into musical notes. It works together with a synthesizer, allowing the user to create music using a wide variety of instrumental sounds. It is fairly easy to control as well: "We first sought to prove that novices -- subjects who had no training on the Encephalophone whatsoever -- could control the device with an accuracy that was better than random," says Deuel. "These first subjects did quite well, way above chance probability on their very first try."[7] It is a brain-computer interface built on a long-established method, electroencephalography, which measures electrical signals in the brain.
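Reduced to a sketch, the rough idea is to estimate the power of a chosen EEG rhythm in each time window and map it onto a musical scale. The band choice, the C major scale, and the mapping below are illustrative guesses, not the Encephalophone's actual algorithm.

```python
import numpy as np

# Toy brain-to-note mapping: compute the power of the alpha rhythm in a
# one-second EEG window and pick a note from a scale. Purely illustrative.

C_MAJOR_MIDI = [60, 62, 64, 65, 67, 69, 71, 72]  # C4..C5

def band_power(signal: np.ndarray, fs: int, lo: float, hi: float) -> float:
    """Power of the signal within [lo, hi] Hz, estimated via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return float(spectrum[(freqs >= lo) & (freqs <= hi)].sum())

fs = 256
rng = np.random.default_rng(1)
window = rng.normal(size=fs)            # one second of stand-in EEG

alpha = band_power(window, fs, 8, 12)   # alpha rhythm (8-12 Hz)
total = band_power(window, fs, 1, 40)   # broadband power for normalization
idx = min(int(alpha / total * len(C_MAJOR_MIDI)), len(C_MAJOR_MIDI) - 1)
print("play MIDI note:", C_MAJOR_MIDI[idx])  # would be sent to a synthesizer
```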

Overall, this shows significant promise for BCIs, especially if they can be built at a smaller, portable scale and, crucially, without requiring any kind of surgery. A BCI could serve as an ultimate security device, granting access to everything from PCs to cars, and it could be used to interact directly with a multitude of devices. Imagine interacting with your PC at home and pulling the information you need from it while driving your car, or accessing your workplace's cloud network while sitting at home. The whole idea is to connect us seamlessly to the internet, and through it to all of our devices, so that we can function as a node in a greater network. Coupled with advancements in VR, and hopefully in augmented reality, we would have an interface to go along with it, allowing us to interact as if we were using a PC at any time and in any place. The key challenge is developing a portable device sensitive enough to capture brainwaves reliably, something an average consumer can use without a complex setup.

The above-mentioned concept of the Internet of Things is very interesting in itself. It refers to the countless devices around the world that are connected to the internet and can communicate with each other without requiring human involvement. The role of BCIs here would be to establish a direct communication pathway between the human brain and an external device, eliminating the need for the usual input methods. “Recent trends in BCI research have witnessed the translation of human thinking capabilities into physical actions such as mind-controlled wheelchairs and IoT-enabled appliances.”[8] BCI should become a major aiding technology in human-device interaction. However, current cognitive recognition systems achieve only about 70-80% accuracy, which is not yet sufficient for practical systems.

In the paper titled “Internet of Things meets Brain-Computer Interface: a unified deep learning framework for enabling human-thing cognitive interactivity,” a team of scientists proposes a unified Deep Learning (DL) framework for enabling human-thing cognitive interactivity. “Our framework measures the user’s brain activity (such as EEG, FNIRS, and MEG) through a specific brain signal collection equipment. The raw brain signals are forwarded to the cloud server via Internet access. The cloud server uses a person-dependent pre-trained deep learning model for analyzing the raw signals.” The analysis results (interpreted signals) can then be used to actuate functions in a wide range of IoT applications, such as a smart city (e.g., transportation control, agenda scheduling), a smart hospital (e.g., emergency calls, anomaly monitoring), and a smart home (e.g., appliance control, assistive robot control). The framework aims to interpret the subject’s intent and decode it into corresponding commands that are discernible to IoT devices. In the paper, they propose a weighted average spatial Long Short-Term Memory (WAS-LSTM) network to exploit the latent correlation between signal dimensions. Their framework is capable of modeling the high-level, robust, and salient feature representations hidden in raw human brain signal streams and capturing the complex relationships within the data.
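A hedged sketch inspired by that idea might look like the following in PyTorch: learnable softmax weights average the signal's spatial dimensions, and the resulting sequence feeds an LSTM classifier whose outputs stand in for IoT commands. The layer sizes, channel count, and class count are illustrative, not the paper's actual WAS-LSTM architecture or hyperparameters.

```python
import torch
import torch.nn as nn

# Sketch of a weighted-average-then-LSTM classifier over EEG windows.
# All shapes and sizes are illustrative assumptions.

class WeightedAverageLSTM(nn.Module):
    def __init__(self, n_dims=14, hidden=64, n_classes=5):
        super().__init__()
        self.dim_logits = nn.Parameter(torch.zeros(n_dims))  # learnable per-dimension weights
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                      # x: (batch, time, n_dims)
        w = torch.softmax(self.dim_logits, 0)  # weights sum to 1 across dimensions
        x = (x * w).sum(dim=-1, keepdim=True)  # weighted average -> (batch, time, 1)
        _, (h, _) = self.lstm(x)               # h: (num_layers, batch, hidden)
        return self.head(h[-1])                # class logits per window

model = WeightedAverageLSTM()
eeg = torch.randn(8, 128, 14)                  # 8 windows, 128 timesteps, 14 channels
print(model(eeg).shape)                        # -> torch.Size([8, 5])
```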

In their experiments, they used an Emotiv EPOC+ headset, which collects EEG signals and sends the data over a Bluetooth connection. The raw EEG signals are then transported to the server through a TCP connection. The device itself is portable, is among the most ergonomic designs currently available, and can be purchased by any consumer. Even though later iterations of such devices will certainly get smaller and more portable, a well-designed working product is a good example of the current state of the technology.
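The transport step is straightforward to picture. A minimal sketch, assuming a placeholder server address and a stubbed headset reader, forwards one JSON-encoded sample per line over a plain TCP socket; this is an illustration, not Emotiv's actual protocol.

```python
import json
import socket
import time

# Sketch of the transport step described above: EEG samples arriving from
# the headset are forwarded to an analysis server over TCP, one JSON
# object per line. Host, port, and the sample stub are placeholders.

SERVER = ("192.168.1.10", 5555)   # hypothetical analysis server

def next_sample():
    """Stub for a sample delivered by the headset SDK over Bluetooth."""
    return {"t": time.time(), "eeg": [0.0] * 14}

with socket.create_connection(SERVER, timeout=5) as sock:
    for _ in range(100):
        line = json.dumps(next_sample()) + "\n"
        sock.sendall(line.encode("utf-8"))
        time.sleep(0.01)          # pacing for illustration only
```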

 

Thank you for reading the article. Also, please check out our merchandise at our home page: https://astara-store.com/

 

Have a great day!

[1] https://www.technologynetworks.com/informatics/news/first-ever-non-invasive-brain-computer-interface-developed-320941

[2] https://www.technologynetworks.com/informatics/news/first-ever-non-invasive-brain-computer-interface-developed-320941

[3] https://www.zdnet.com/article/the-weird-future-of-brain-computer-interfaces-replacing-passwords-with-thoughts-and-mind-reading-bosses-who-can-tell-when-you-are-bored/

[4] https://neurosciencenews.com/brain-internet-connection-7489/

[5] https://neurosciencenews.com/brain-internet-connection-7489/

[6] https://www.sciencedaily.com/releases/2017/07/170712110510.htm

[7] https://www.sciencedaily.com/releases/2017/07/170712110510.htm

[8] https://www.researchgate.net/publication/324860881_Internet_of_Things_Meets_Brain-Computer_Interface_A_Unified_Deep_Learning_Framework_for_Enabling_Human-Thing_Cognitive_Interactivity

