The Promise and Challenges of Brain–Computer Interfaces


Read time: 5 minutes

The evolution of brain–computer interfaces (BCIs) has brought dramatic possibilities for neurofeedback, communication and medical interventions over the last 50 years.


The lives of patients with locked-in syndrome, Parkinson’s disease and paralysis can now be improved with emerging technologies that connect neural signals directly to computational hardware and software. Currently, the market for BCI is divided between two disparate classes of technology: non-invasive and surgically invasive. Both have delivered remarkable results thus far, dramatically improving the lives of their users and laying the groundwork for BCI to become a revolutionary technology over the next decade.

 

Ultimately, though, these two methods have fundamental limitations that will inhibit the broad-scale adoption of BCI beyond purely medical need. Non-invasive BCI, while relatively easy to implement, lacks sufficient data bandwidth. Surgically invasive BCI provides exceptional results but brings with it all the cost and risk associated with surgical procedures, rendering non-medical adoption difficult. The future of BCI, then, depends on new, groundbreaking methods that fill the gap between these pathways, leveraging the best of both technologies while avoiding their inherent pitfalls.

Non-invasive BCI technology: Electroencephalogram vs functional near-infrared spectroscopy

Presently, the most promising candidates for commercial, at-home BCI technologies are those that are entirely non-invasive and read-only, including electroencephalogram (EEG), functional near-infrared spectroscopy (fNIRS) and secondary tools that measure autonomic functions.


For EEG interfaces, substantial progress has been made since the first human EEG recording in 1924 in the areas of signal processing, miniaturization of hardware and ease of use. Though the technology is still largely tied to caps studded with dozens of electrodes that require gel coupling to the user's scalp, a flourishing open-source biohacking community is emerging around EEG, with robust, high-fidelity use cases such as BCI spellers, rehabilitation and robot control.
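
As a rough illustration of what that signal processing involves, the sketch below band-pass filters multichannel EEG and cuts it into stimulus-locked epochs, the kind of preprocessing a speller-style pipeline typically starts with. The sampling rate, channel count and filter band are assumptions chosen for illustration, not the specification of any particular device.

```python
# Minimal EEG-BCI preprocessing sketch: band-pass filter, then epoch around
# stimulus events. All parameters below are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250          # assumed sampling rate (Hz)
N_CHANNELS = 8    # assumed electrode count

def bandpass(eeg, low=1.0, high=40.0, fs=FS, order=4):
    """Zero-phase band-pass filter applied to each channel (rows)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=1)

def epoch(eeg, event_samples, fs=FS, tmin=-0.2, tmax=0.8):
    """Cut fixed-length windows around event onsets (e.g., speller flashes)."""
    start, stop = int(tmin * fs), int(tmax * fs)
    return np.stack([eeg[:, s + start:s + stop] for s in event_samples])

# Synthetic data standing in for a real recording.
rng = np.random.default_rng(0)
raw = rng.standard_normal((N_CHANNELS, FS * 60))   # 60 s of noise
events = np.arange(FS * 2, FS * 58, FS)            # one "flash" per second
epochs = epoch(bandpass(raw), events)              # (n_events, channels, samples)
print(epochs.shape)
```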


Some companies position themselves as providers of at-home EEG-based sleep analysis, a more passive but still intriguing use case for EEG-BCI. Numerous established and start-up companies are looking to enter the market by miniaturizing and optimizing form factors that might lower the friction of everyday use. Additionally, a more medical-leaning but potentially game-changing line of work is deep research into using EEG to identify various brain-activity-based biomarkers.


Promising biomarkers thus far include those for epilepsy, Parkinson’s, Alzheimer’s, traumatic brain injury, psychiatric conditions such as depression, anxiety, schizophrenia and ADHD, as well as sleep disorders (insomnia and narcolepsy) and developmental disorders (autism and cerebral palsy). It’s likely that numerous robust markers will continue to emerge, ushering in a new era of brain health, especially when combined with “passive” health monitoring applications for stress management, focus/attention/fatigue monitoring, sleep analysis and meditation feedback/enhancement.
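
To give a concrete sense of what a “passive” EEG metric can look like, here is a minimal sketch that estimates relative alpha-band power from a single channel. The band edges and spectral-estimation settings are conventional but arbitrary choices; this is an illustrative feature, not a validated biomarker.

```python
# Illustrative band-power feature of the kind behind many passive EEG metrics.
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sampling rate (Hz)

def band_power(signal, fs, fmin, fmax):
    """Sum the Welch power spectral density over a frequency band."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= fmin) & (freqs <= fmax)
    return psd[mask].sum() * (freqs[1] - freqs[0])

def relative_alpha(signal, fs=FS):
    """Alpha (8-12 Hz) power as a fraction of broadband (1-40 Hz) power."""
    return band_power(signal, fs, 8, 12) / band_power(signal, fs, 1, 40)

# Synthetic single-channel example: noise plus a 10 Hz rhythm.
t = np.arange(0, 30, 1 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(1).standard_normal(t.size)
print(f"relative alpha power: {relative_alpha(eeg):.2f}")
```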


fNIRS, the optical imaging of blood oxygenation, has also emerged as a non-invasive BCI option, with applications including rehabilitation, music imagery and binary communication. fNIRS technology benefits from its ability to measure signals without liquid coupling to the scalp, as well as a slightly more robust signal in the presence of motion artifacts. If the optical and processing components of these systems could be further miniaturized into an everyday-use form factor, fNIRS could capture a more prominent role in ubiquitous BCI.
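
For readers curious how optical measurements become oxygenation estimates, the sketch below applies the modified Beer–Lambert law, the standard model behind fNIRS analysis, to convert changes in optical density at two wavelengths into changes in oxy- and deoxyhemoglobin concentration. The extinction coefficients, source–detector distance and pathlength factors are placeholder values for illustration only.

```python
# Rough sketch of the modified Beer-Lambert law used in fNIRS processing.
# All numeric constants are illustrative placeholders, not calibrated values.
import numpy as np

# Rows: two wavelengths (e.g., ~760 nm and ~850 nm); columns: [HbO, HbR]
# extinction coefficients (placeholder values).
EXTINCTION = np.array([[1.5, 3.8],
                       [2.5, 1.8]])
DISTANCE = 3.0               # assumed source-detector separation (cm)
DPF = np.array([6.0, 5.0])   # assumed differential pathlength factors

def concentration_changes(delta_od):
    """Solve for [dHbO, dHbR] from the change in optical density at two
    wavelengths: delta_OD = extinction * concentration * distance * DPF."""
    effective_path = DISTANCE * DPF               # per-wavelength path length
    system = EXTINCTION * effective_path[:, None]
    return np.linalg.solve(system, delta_od)

d_hbo, d_hbr = concentration_changes(np.array([0.012, 0.004]))
print(f"dHbO={d_hbo:.4g}, dHbR={d_hbr:.4g}")
```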

EEG and fNIRS: Promising biomonitoring, but limited bandwidth

Notwithstanding the aforementioned potential, EEG and fNIRS both suffer from two flaws that have limited their widespread adoption on tech store shelves: the signals they read are mostly limited to superficial cortical structures, and the tasks they can be used to perform are constrained by a lack of signal fidelity.


While these barriers to common use could one day be overcome by as-yet-unknown technological advances, currently it is hard to imagine an everyday EEG user going through the process of loading gel, donning a cumbersome cap studded with dozens of electrodes and logging in to a system that offers relatively little reward for the effort. The lower-friction EEG devices, which offer fewer channels in a sleeker form factor, do not yet support use cases compelling enough to make them ubiquitously desirable, and they suffer greatly from movement artifacts.


Meanwhile, fNIRS, which does not require liquid coupling to the scalp, faces a fundamental limit: blood oxygenation is simply a slow signal. Its time-series activity unfolds on the order of seconds (rather than the millisecond timescales at which neurons operate), and blood oxygenation is a secondary readout of neural activity rather than a direct electrical one like EEG.
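
That timescale mismatch can be made concrete with the canonical double-gamma hemodynamic response function used in fMRI and fNIRS analysis, sketched below. The parameter values follow commonly cited SPM-style defaults quoted from memory, so treat the exact shape as illustrative.

```python
# Why hemodynamic signals are "slow": the canonical double-gamma response
# peaks several seconds after a neural event. Parameters are the commonly
# used SPM-style defaults, included here for illustration.
import numpy as np
from scipy.stats import gamma

t = np.arange(0, 30, 0.1)                       # seconds after the event
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0  # main peak minus undershoot
hrf /= hrf.max()

print(f"response peaks ~{t[np.argmax(hrf)]:.1f} s after the event")
```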


Furthermore, a basic fNIRS system complete with a backpack full of electronics can be quite expensive – in the hundreds of thousands of dollars. At these prices, the use cases would have to be tremendously valuable, and as of today they don’t seem to be, particularly given the sluggish dynamics of reading a secondary signal. Thus, while their improvement and approval as medical devices will be important in paving the way towards at-home use, both modalities currently show a need for innovation in non-invasive brain readout.

Invasive BCI technology: Transformative results, but complicated installation

An important tool that has garnered much attention in the last half-century is invasive, implanted technology that lives adjacent to neural tissue.


Devices such as multi-electrode arrays (MEAs), deep brain stimulation (DBS), electrocorticography (ECoG) and stereo EEG (sEEG) have made incredible strides, driven by medical need in the neurotech world. While each of these approaches differs slightly, they all share a commonality: circumventing the scalp’s tendency to dilute neural signals and accessing much more powerful spatial and temporal information about neural activity in specific brain regions of interest.

 

It should be noted that, unlike non-invasive read-only devices such as EEG and fNIRS, invasive technologies can provide bidirectional opportunities to record and stimulate neural activity, although not always simultaneously. Invasive BCI devices, and the brave patients who undergo their implantation, are crucial to paving the way toward our understanding of just how far we might be able to get in connecting neural signals and computers.

 

DBS has been used as a treatment for alleviating tremors since 1987. It has also been used in closed-loop configurations for bradykinesia, as well as for refractory epilepsy monitoring and intervention. ECoG implants, which have taken on various shapes and sizes through ever-improving form and function, have been used in people to spell words, control two-dimensional cursors and even decode covert speech.
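
The closed-loop idea is simple to sketch, even though real adaptive DBS systems are far more sophisticated: a feedback biomarker (beta-band local field potential power is one signal studied in Parkinson's disease) gates stimulation on and off. The thresholds and hysteresis below are arbitrary illustrative values, not clinical settings.

```python
# Highly simplified closed-loop stimulation logic: gate stimulation on a
# biomarker with simple hysteresis. Values are illustrative only.
def closed_loop_step(beta_power, stim_on, on_threshold=1.2, off_threshold=0.8):
    """Turn stimulation on when the biomarker exceeds an upper threshold and
    off once it falls below a lower one."""
    if not stim_on and beta_power > on_threshold:
        return True
    if stim_on and beta_power < off_threshold:
        return False
    return stim_on

# Example: sweep a synthetic biomarker trace through the controller.
trace = [0.5, 0.9, 1.3, 1.5, 1.0, 0.7, 0.6]
state = False
for sample in trace:
    state = closed_loop_step(sample, state)
    print(f"beta={sample:.1f} -> stim {'ON' if state else 'off'}")
```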


And, although less common than DBS and ECoG, signals from patients with sEEG implants have also been used to decode three different hand gestures, speech perception and navigation distance.

 

MEAs, which can record spike information and band power, have provided some of the most remarkable BCI implementations to date since the invention of the Utah Array in 1997. Soon after, researchers demonstrated that cortical local field potentials could be used for cursor movement and flexion of a cyber digit. Several groups have used MEAs for neuroprosthetic speech decoding or typing, and recently MEAs have even been combined in a bi-directional manner to both control and receive tactile feedback from neuroprosthetic arms to improve their function.
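
A stripped-down view of the decoding step behind MEA cursor control is sketched below: binned spike counts are mapped to cursor velocity with a linear (ridge) decoder. Deployed systems typically rely on Kalman filters or neural networks and far more elaborate calibration; the data here are synthetic placeholders.

```python
# Minimal linear (ridge) decoder from binned spike counts to 2-D cursor
# velocity. Synthetic data stand in for a real MEA recording.
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_units = 2000, 96                        # 96 channels, Utah-style
true_weights = rng.standard_normal((n_units, 2))  # hidden "tuning" for the demo
rates = rng.poisson(5, size=(n_bins, n_units)).astype(float)
velocity = rates @ true_weights + rng.standard_normal((n_bins, 2))

# Ridge regression: W = (X^T X + lambda I)^-1 X^T Y
lam = 1.0
W = np.linalg.solve(rates.T @ rates + lam * np.eye(n_units), rates.T @ velocity)
predicted = rates @ W
corr = np.corrcoef(predicted[:, 0], velocity[:, 0])[0, 1]
print(f"decoded vs. actual x-velocity correlation: {corr:.2f}")
```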

 

The trade-off here, which precludes their adoption by everyday users, is that these devices require highly invasive brain surgeries and implantations. This may come at a relatively low “cost” for those who medically require them, but it is intractable for otherwise healthy people. However, for those who face life-altering movement, communication and sensory issues, these invasive BCI technologies offer a promising future for restoring independence. Still, given the regulatory pathways and invasive procedures that these technologies and treatments require, they are far from ubiquitous adoption in medicine and likely forever inaccessible to common consumers. Together, this motivates the need for a BCI technology that can proximally access neural tissue without demanding invasive, complex procedures reserved for neurosurgeons and their highest-priority patients.

The need for a new generation of neurotechnology

While the promise of BCI is clear, the widespread adoption of current approaches is hampered by several factors. For non-invasive approaches, low spatial and temporal resolution and a poor signal-to-noise ratio limit bandwidth and data transfer. Although invasive approaches can address some of these challenges, surgical complications, limited access to deeper brain regions and high costs prevent this technology from becoming widely accessible to the majority of individuals.
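
As a back-of-envelope way to see why signal quality caps data transfer, one can borrow the Shannon capacity formula, C = B·log2(1 + SNR). This is an analogy rather than a rigorous BCI model, and the numbers below are arbitrary round figures chosen only to contrast a noisy, narrowband scalp recording with a cleaner, wider-band intracortical one.

```python
# Back-of-envelope contrast of information upper bounds under different
# assumed bandwidths and signal-to-noise ratios (illustrative numbers only).
import math

def capacity_bits_per_s(bandwidth_hz, snr_linear):
    """Shannon capacity C = B * log2(1 + SNR)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

scenarios = {
    "noisy, low-bandwidth (scalp-like)": (40, 0.5),
    "clean, high-bandwidth (intracortical-like)": (5000, 10.0),
}
for name, (bw, snr) in scenarios.items():
    print(f"{name}: ~{capacity_bits_per_s(bw, snr):.0f} bits/s upper bound")
```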


We need novel technologies that offer non-invasive, highly sensitive, bidirectional (neural reading and modulation) capabilities and access to deeper, information-rich brain regions. Such an approach would significantly enhance the adoption of BCI and accelerate the synergy between human cognition and technology, with the potential to dramatically impact healthcare, mental health, physical capabilities and daily life.


The field of BCI offers lofty and futuristic possibilities: instantaneous communication, downloading thoughts, recording dreams and merging consciousness with artificial intelligence. But the pathway to this grand vision is still obscured.