Our Cyberpunk Future: Brain Computer Interfaces
Exploring the plausibility of neural interface tropes
A reader recently suggested I write about plausible futurology, after my critical take on mind uploads. So here I'm going to write about the sci-fi concept that has the best combination of plausibility and crazy-future-vibes: Brain Computer Interfaces (BCIs).
I'm going to be loosey-goosey with terminology here and use a vibes-based definition of Brain Computer Interface (there's a lot of gray area). If you want a rundown on terminology in the BCI space, and to read more about BCIs from someone who actually works in the space, see
’s writing.

BCI Tropes
Let's talk about a few sci-fi tropes first. In the comics, Iron Man controls some of his suits with a neural implant. Doc Ock from Spider-Man has robot arms connected directly to his nervous system that he can control like limbs. These are examples of signals going from the brain out to a system that interprets them.
Another common trope is sensory augmentation: bionic eyes that give greater visual acuity or let the user see infrared or ultraviolet. These require an implant that sends a signal to the brain, which the brain then has to interpret.
Finally, there are examples of characters connecting their brains directly with an interactive system to interact directly with some other world. Think of The Matrix where characters "jack in" or Neuromancer where connecting directly with a cyberdeck allows the user to navigate a digital world. These are the most complicated, since 1) they involve sending and receiving data from the neural implant, and 2) the data that would need to be sent would be really sophisticated.
My claim here isn't that these depictions are plausible. But variations on them are plausible enough that I think it's fun to think through where this tech could go given enough time.
BCIs in the real world: When do I get my mind-controlled car?
It must be some kind of requirement that every neurotech startup have a demo where someone controls a small drone "with their brain". I've seen or heard about half a dozen of these demos. If you believe the hype, we're on the cusp of doing away with all these clunky analog input devices like mice, keyboards, and steering wheels, and finally being able to just think what we want to happen and have it happen.
Despite all the demos, I'm still writing this using my meat hands typing on a physical keyboard. It turns out it's been nearly 50 years since the first demonstrations of humans using EEG to control a cursor well enough to navigate a maze. If we were capable of that way back then, why can't we do more now? Are BCIs all hype?
The answer is: it's complicated. There are reasons we haven't gotten better at non-invasive BCIs, which I'll talk about below. But it's important to note there actually has been an enormous amount of progress with BCIs, just in ways that aren't obvious to the average person.
You've probably seen a cochlear implant, even if you didn't realize it's a Brain Computer Interface. Cochlear implants directly stimulate the cochlear nerve, transmitting sound information to the central nervous system as a digital signal. The first successful implants were in the 1970s, and they have improved dramatically since, making them increasingly common: between 2015 and 2019, their incidence increased by almost 60%.
There's a shorter history of giving people with paralysis the ability to control robotic limbs using a neural interface. In 2005, Matt Nagle became the first person to control a robotic hand this way. Things have improved immensely since then: Nathan Copeland can not only control a robotic arm but also gets a sense of touch from his implants, thanks to electrical pulses sent back through the electrodes (he also has a pretty funny Twitter account).
Finally, in vision, the FDA has approved a device that sends visual signals to the tongue, allowing the blind to see.
This last example might be a bit of a stretch as a Brain Computer Interface. It isn't directly hooked up to nerves. This leads to some murky gray area—what actually counts as a Brain Computer Interface? Technically, I can interact with the internet using signals from my brain. My brain just has to send an impulse down my nerves, into the muscles and tendons in my fingers, depressing keys on a keyboard or moving a mouse. My computer screen then sends electromagnetic radiation in a specific pattern that activates nerves in my retina and transmits the visual information back into my brain. Obviously that isn't what we mean by BCI. Let's ignore this and return to our vibes-based understanding of a BCI: something is a BCI if it feels cyberpunk.
Regardless of what you think of the tongue-seeing device, there are really cool optogenetic approaches that let blind people see that definitely count since it involves direct stimulation of (genetically altered) nerves in the retina (read
’s excellent writeup if you're interested). Now that's cyberpunk.

One thing that makes these approaches possible is the adaptability of the brain. All of these examples, even cochlear implants, require some amount of learning. When receiving signals, the brain has to learn how to make sense of them; this takes time, and the more complex the signal, the longer it takes. Similarly, it takes time to learn to control an arm or other prosthetic with your brain. Your brain slowly maps the firing of specific neurons onto the movement of the arm until it can do it relatively smoothly.
This adaptability is my source of optimism about BCIs. The brain is good at learning (duh). The downside is pretty much any BCI is going to require learning, which is inherently limiting. But we can learn to use a wide range of signals in a wide range of ways.
The bigger limitation is the interface we use to interact with the brain.
The Interface part
The reason we haven't progressed past the crappy "control a drone with your brain" demos is that it's hard to get a good signal out of the brain.
Those demos typically use EEG, which means placing electrodes on the scalp to read the electrical activity of brain cells. Because the electrodes are placed on the scalp, the signal needs to go through the hair, skin, muscle, skull, and dura mater.
Because the electrodes sit so far from the brain cells whose activity they record, the signal is very noisy: it's dampened and distorted by all the bone and tissue in between. More electrodes give you more signals to work with, and conductive gel improves the signal somewhat, but these aren't things most people want in a consumer product. I'm not going to goop up my hair every time I want to play video games with my brain, and I don't particularly want to wear a cap studded with dozens of electrodes.
Even with the most extreme EEG setup, you're going to be very limited in what you can do with the signals. If all you want is halting, crappy, partial control of a drone, cursor, or Pong paddle, sure, EEG can get you there. But you won't be controlling robot arms as fluidly as Doc Ock.
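To build intuition for why distance from the source matters so much, here's a toy signal-to-noise calculation. The attenuation factor and noise level are made-up illustrations, not measured values; the point is only that the same ambient noise swamps a scalp-attenuated signal long before it troubles an electrode sitting next to the source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "neural" source: a 10 Hz oscillation sampled at 250 Hz for 4 seconds.
t = np.arange(0, 4, 1 / 250)
source = np.sin(2 * np.pi * 10 * t)

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels."""
    return 10 * np.log10(np.mean(signal**2) / np.mean(noise**2))

# The same background noise affects both recordings.
noise = rng.normal(scale=0.2, size=t.shape)

# Implanted electrode: close to the source, essentially no attenuation.
implant_snr = snr_db(1.0 * source, noise)

# Scalp electrode: skull and tissue attenuate the signal heavily
# (the 0.05 factor is a made-up illustration, not a measured value).
scalp_snr = snr_db(0.05 * source, noise)

print(f"implant SNR: {implant_snr:.1f} dB, scalp SNR: {scalp_snr:.1f} dB")
```

A 20x amplitude attenuation costs about 26 dB of SNR regardless of the noise level, which is why so much of the engineering effort goes into getting the electrode closer to the neurons.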
The people who are controlling robot arms use incredibly invasive implants. Electrode arrays are positioned on the brain, with electrodes penetrating millimeters into the tissue. Let me be clear: this requires literal brain surgery. Since the device is implanted directly in an organ, there is a risk of the tissue responding in ways that impede its ability to get a signal (for example, scarring can occur). This is risky stuff.
These implanted arrays are getting better and safer. They've come a long way in the last decade or so, and I expect they'll continue to get better. But I can't see a world soon where we're all undergoing brain surgery to get robot arms. The path to a commercial product isn't through brain chips.
Cool tech without brain surgery
Getting a signal out of a brain is hard. Brains are delicate, so people frown on sticking stuff into them, and they're encased in bone, making it tough to read signals out. While researching this article, I learned there's a middle ground between electrodes placed on the scalp and electrodes implanted directly in the brain.
The approach is endovascular: go through the circulatory system. By entering through the jugular, an implant can be placed in the blood vessel right next to the motor cortex. The implant itself is based on stents, which are routinely used to treat narrowed coronary arteries. Hence the name: stentrode.
These bad boys are currently in human trials with ALS patients. The FDA granted breakthrough designation status to the device in 2020, and researchers published initial results from the trials in 2023. The trials showed stentrodes can be used for simple computer tasks, like online shopping, texting, or banking. They can produce pulses of electrical signal, so they not only receive information from the brain but can send signals back.
I don't know how the signal quality compares to electrodes placed directly into the brain; I would guess it's quite a bit worse. The trouble with any device is that you're either averaging electrical activity over a large number of neurons or recording just a tiny sample of individual ones. Meanwhile, individual neurons in the brain receive high-fidelity signals from thousands of other neurons. No matter how invasive we get, we're always going to be limited.
Still, the stentrode would be my top pick for a cyberpunk neural interface with current technology. It still means undergoing surgery, and there are risks, but it feels at least a bit more comfortable than brain surgery (which isn't saying much, since it involves threading something through your jugular).
There is still progress being made on less invasive, safer neural implants. I doubt we'll ever get a high-fidelity neural implant that doesn't require any kind of surgery, but maybe it will become a fairly routine one. If it were routine enough, maybe there would be general consumer uses beyond medical cases. Let's assume we solve the problem of getting a good read on brain activity with a simple implantable device. Then what can we do?
What might actually be possible?
Other than reading the brain signal, the other major obstacle is the learning process. Implanted devices just read whatever neural population they end up next to, and the brain then has to learn the mapping between the activity of those neurons and some outside outcome. The same goes for an implant sending signals into the brain: you have to learn what those signals mean. Getting to fluent use is a long, iterative process, and the more complex the signal, the harder it is for the brain to learn. The idea that you can just stick a chip in the brain and immediately use it for a wide range of applications isn't plausible.
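The computer side of this loop has a learning problem too: it has to decode intent from whatever neurons it happens to be recording. Here's a toy sketch of that decoding step, with all numbers and the linear tuning model made up for illustration. A "calibration phase" fits a least-squares map from simulated firing rates back to intended 2-D cursor velocity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend we record firing rates from 30 neurons over 500 time steps
# while the intended 2-D cursor velocity is known (calibration phase).
n_neurons, n_steps = 30, 500
true_velocity = rng.normal(size=(n_steps, 2))  # intended (vx, vy)

# Each simulated neuron is "tuned": its rate is a noisy linear
# function of the intended velocity. This hidden mapping is what
# the decoder has to recover.
tuning = rng.normal(size=(2, n_neurons))
rates = true_velocity @ tuning + rng.normal(scale=0.5, size=(n_steps, n_neurons))

# Decoder: least-squares fit from firing rates back to velocity.
decoder, *_ = np.linalg.lstsq(rates, true_velocity, rcond=None)
predicted = rates @ decoder

# How well does decoded x-velocity track intended x-velocity?
r = np.corrcoef(predicted[:, 0], true_velocity[:, 0])[0, 1]
print(f"decoded vs intended correlation: {r:.2f}")
```

Real systems use far more sophisticated decoders, but the basic shape is the same: a supervised calibration phase for the machine, followed by a much longer adaptation phase for the brain.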
If we wanted a cyberpunk future, I think we could get to a world where we all have Doc Ock arms controlled by brain implants. I don't know why we would, though; we'd have to think of good use cases beyond looking cool and being supervillains, but let's leave that to the product managers.
We also could plausibly have various forms of super sensing—there's no reason the same tech addressing blindness couldn't be used to cure us of our "blindness" to the ultraviolet and infrared spectrum. But again, there would need to be some actual use case for this to be invested in for the mass market and then widely adopted.
I'm less certain we'll get implants that let us interact in complex ways with the internet. Being able to move a cursor or select words and letters as an ALS patient is one thing; being able to write faster with your brain than with your hands is another. Because of the dimensionality of language (think of how many words there are), the brain and the interface would need a shared representation. Word embeddings provide a semantic space that could serve this role: presumably a computer could learn that space and then train you to use it. This feels possible but speculative, and even if it worked, I would be surprised if it beat typing on a keyboard for a healthy person.
Implanted memories, another common neural mod trope, are unlikely. Memories are distributed through the brain in a complex way, and different for each individual. You could implant a chip that had information on it, and through training learn to read off that information, but it wouldn't be a true memory. It would just be information you could access. You would be better off reading.
As far as implanting new skills (Neo's "I know kung fu"), these are also distributed, different for each individual, and different for each skill. Maybe we could have chips or drugs that aid learning new skills somehow, but I don't think a plug-and-play skill chip is plausible.
We'll have to make do with bionic eyes and Doc Ock arms. We just need to come up with some reason it would be worth undergoing surgery to get them (do you think we could convince everyone the surgery is worth it for the cyberpunk aesthetics?).