The Brain Is Now an Interface. Is Product Design Ready?
Somewhere in a hospital room this year, a paralyzed man named Brad Smith typed a message using only his thoughts. Three layers of technology sat between his brain and the words that appeared on screen. And not one product designer in the world was in the room when they built those layers together.
The brain-computer interface era is here, and it is happening faster than most product teams realize. In 2026, Neuralink announced it will move to high-volume production of its brain implant devices, with a nearly fully automated surgical procedure. The global BCI market is now valued at $3.2 billion, growing at 16.7% annually, and the industry raised over $1.6 billion in venture capital in 2025-2026 alone. Brain-computer interfaces are no longer a science fiction concept or a research lab curiosity. They are a product category. And that means they are now a design problem, whether the engineers building them acknowledge it or not.
"The future of brain-computer interfaces is not just a technical challenge. It is a collective responsibility, and if we innovate wisely, ethically, and compassionately, we can ensure that progress in neuroscience strengthens what it means to be human."
— World Economic Forum, Global Future Council on Neurotechnology, January 2026
That quote from the WEF is easy to nod along to. But it papers over a more uncomfortable truth: the product design community has largely been absent from this conversation. And that absence is going to cost us.
The Watershed Moment Nobody Designed For
Elon Musk posted on X that Neuralink's 2026 production push includes a new surgical approach where device threads go through the dura without needing to remove it. He called it "a big deal." He is correct. But the bigger deal, from a product standpoint, is what happens after the surgery.
The device is now inside a human skull. The patient goes home. They have to learn to use it. What does that onboarding look like? What are the error states? What happens when the signal degrades? What does the user do when the cursor jumps to the wrong place, or the neural pattern shifts because the person is tired or emotional or sick?
These are not engineering questions. These are design questions. And right now, nobody with a UX background seems to be asking them loudly enough.
Earlier this year I spent time writing on Medium about what it means to build AI-native products, and one of the core mindset shifts I kept coming back to was this: the interface is not the screen anymore. With BCIs, we have arrived at the logical endpoint of that idea. The interface is the nervous system itself. And we have no established design patterns for that.
Brad Smith's Setup Is the Most Interesting UX Architecture I Have Seen
Brad Smith, whose case was announced in April 2026, is the third person to receive a Neuralink implant. He has ALS, is completely paralyzed, and relies on a ventilator to breathe. The implant's 1,024 electrodes capture neuron firings every 15 milliseconds. Signals from his tongue movement turned out to be the most effective for cursor control, and jaw clenching worked best for clicking.
But here is the part that stopped me when I read it. To communicate, Brad Smith is not just using the implant. He is running three systems simultaneously:
- The Neuralink implant: Reads neural signals and translates them into cursor movements on a screen. Raw neural data processed into machine input.
- Grok AI: Elon Musk's AI chatbot suggests responses and helps Brad construct messages faster. The AI fills in the gaps where the neural input is too slow or imprecise.
- ElevenLabs voice clone: Trained on recordings of Brad's voice from before he lost the ability to speak, it reads his written words aloud in his own voice, giving him back a form of verbal presence.
That is a three-layer interface architecture sitting between one human brain and the outside world. Neural input, AI mediation, and synthetic voice output. And none of those three layers were originally designed to work together. They were stitched together after the fact, by necessity.
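To make the shape of that stack concrete, here is a minimal sketch of a three-layer pipeline: neural input, AI mediation, and voice output. Every class name and interface here is invented for illustration; Neuralink, Grok, and ElevenLabs do not expose public APIs like these, and the decoder logic is a stand-in.

```python
from dataclasses import dataclass

# Hypothetical sketch of the three-layer stack described above.
# None of these classes correspond to real vendor APIs.

@dataclass
class NeuralEvent:
    """A decoded input event from the implant layer."""
    kind: str          # e.g. "cursor_move" or "click"
    confidence: float  # decoder confidence, 0.0-1.0

class ImplantDecoder:
    """Layer 1: raw neural signals -> machine input events."""
    def decode(self, raw_signal: list[float]) -> NeuralEvent:
        # Stand-in for a real decoder: treat mean signal energy as a click.
        energy = sum(abs(s) for s in raw_signal) / max(len(raw_signal), 1)
        kind = "click" if energy > 0.5 else "cursor_move"
        return NeuralEvent(kind=kind, confidence=min(energy, 1.0))

class SuggestionEngine:
    """Layer 2: AI mediation that proposes text completions."""
    def suggest(self, partial_text: str) -> list[str]:
        # A real system would call an LLM; here we return canned options.
        return [partial_text + " today", partial_text + " now"]

class VoiceOutput:
    """Layer 3: synthetic voice rendering of confirmed text."""
    def speak(self, text: str) -> str:
        return f"[voice clone] {text}"

def communicate(raw_signal: list[float], draft: str) -> str:
    """One pass through the stack: decode, mediate, render."""
    event = ImplantDecoder().decode(raw_signal)
    if event.kind == "click":
        choice = SuggestionEngine().suggest(draft)[0]
        return VoiceOutput().speak(choice)
    return ""  # no confirmed input yet
```

Even this toy version makes the design problem visible: the confirmation decision, the choice of which AI suggestion to accept, and the hand-off to voice all happen across layer boundaries that no single team owns.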
This is exactly the kind of fragmented, multi-system UX architecture that product designers are supposed to untangle, and in the BCI space, no one has formally taken on that role yet.
Think about the failure modes in that stack. What happens when Grok suggests the wrong thing and Brad accidentally confirms it before catching the error? What does "undo" look like when your input method is a thought? What happens when the voice clone mispronounces something, or the emotional tone feels wrong? These are real product problems. They are happening right now, in the real life of a real person.
Blindsight: What Happens When You Have to Design Vision Itself
Neuralink is preparing to begin human trials of its Blindsight chip in 2026. The premise is remarkable: an external camera captures visual input, wirelessly transmits it to an implant, which then directly stimulates neurons in the visual cortex. The goal is to restore some form of vision to people who are completely blind, including those born blind who have never had a visual experience at all.
The early version will produce low-resolution vision. Research from the University of Washington has already flagged that electrical stimulation of the cortex does not produce clean "pixels." It produces complex, sometimes distorted visual perceptions that vary from person to person. Every brain is different. Every perception is subjective.
So here is the design problem: you are building a feedback system where the output is literally what the user sees. There is no screen to iterate on. There is no A/B test. The experience happens inside someone's visual cortex, and every user's cortex is wired differently.
What does calibration look like for this product? What is the mental model you give a user who was born blind and has never processed visual information before? How do you write onboarding copy for someone learning to interpret electrical signals as spatial awareness for the first time? These questions are not hypothetical anymore. They are on Neuralink's product roadmap for this year.
I have worked in healthcare UX, and even in that space, where the stakes are high and the users are vulnerable, the design process is often bolted on at the end. In BCI, that approach is not just bad practice. It is potentially dangerous.
The $3.2 Billion Market With a Design Vacuum
Let us zoom out for a moment. The BCI market does not sit in isolation. It is part of a broader neurotechnology push that is attracting serious capital. According to industry reports, the market is projected to reach $13.86 billion by 2035. The current 16.7% compound annual growth rate puts it on a trajectory that mirrors where wearables were around 2014-2015, right before they exploded into mainstream consumer products.
Neuralink is not the only player. Synchron has been implanting its Stentrode device in patients since 2021. Precision Neuroscience is developing a thinner, more flexible cortical array. Blackrock Neurotech has been in the clinical space for over a decade. Chinese competition is also intensifying, with multiple state-backed BCI programs accelerating research timelines.
What none of these companies have publicly shown is a design team with the same profile as the engineering team. The job boards tell the story. Search for BCI roles in 2026 and you find neuroscientists, electrical engineers, embedded systems developers, regulatory affairs specialists. You find almost no senior product designers, no UX researchers, no interaction designers.
That vacuum will get filled eventually. The question is whether it gets filled proactively, by designers who choose to move into this space now, or reactively, after a high-profile UX failure forces the industry to pay attention.
What Product Teams Should Be Thinking About Right Now
I am not saying every product designer needs to pivot to neurotech tomorrow. But I do think there are specific principles that should be on the radar of anyone building products that will intersect with BCI systems in the next five years. And that intersection is closer than you think: consider healthcare apps, assistive tech, and enterprise software that will one day need to accept neural input.
- No universal design patterns exist yet: Every brain is different. The mental models, interaction patterns, and feedback loops we rely on in screen-based UX do not translate. Designers entering this space will need to build new frameworks from scratch, through real research with real users.
- Error recovery is the central design challenge: In a BCI interface, an accidental input is not a misclick. It is a misfire from the nervous system. The undo model, the confirmation model, the error state model all need to be rethought completely.
- Adaptive interfaces become mandatory, not optional: Neural signals change based on fatigue, emotion, medication, and neuroplasticity. A BCI interface that works perfectly on day one may behave differently on day 100. Building for drift and adaptation is not a feature request. It is a baseline requirement.
- The onboarding problem is unlike anything we have solved before: You are teaching a person to use their own brain differently. That requires pedagogy, patience, and feedback loops that most product teams have never had to design.
- Multi-system orchestration is the norm, not the exception: Brad Smith's setup, three layers of technology working in tandem, is not a one-off. It is a preview of how BCI products will actually be used. Designing for the full stack, not just the implant in isolation, is where the real UX work lives.
The Consent Interface Nobody Built
Let me sit on the most uncomfortable part of this for a moment. Neural data is unlike any other form of personal data. It is not your location history or your purchase behavior. It is the electrical activity of your thoughts. In some configurations, it includes signals that the user did not intentionally produce.
The World Economic Forum's Global Future Council on Neurotechnology flagged this in January 2026, pointing out that BCIs raise "profound questions about privacy and informed consent in accessing neural data." That is the polite way of saying: we are building systems that can read things from people's brains that those people never chose to share.
What does consent UX look like for that? A terms and conditions page is obviously inadequate. A one-time onboarding screen is obviously inadequate. Consent for neural data access needs to be ongoing, granular, revocable, and legible to a non-technical user who is dealing with a significant medical situation on top of learning a new technology.
This is a solved problem in other high-stakes contexts. Healthcare apps have worked through consent frameworks. Financial services have built layered permission models. But nobody has combined the intimacy of neural data with the complexity of an AI-mediated communication system and designed the consent layer properly.
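As one way to picture what "ongoing, granular, revocable" might mean in practice, here is a hypothetical consent ledger sketch. The scope names, durations, and API are all invented for illustration; no real BCI product works this way, as far as public information shows.

```python
from datetime import datetime, timedelta, timezone

class NeuralConsentLedger:
    """Hypothetical consent model for neural data access.

    Grants are per-scope, time-boxed, and revocable at any time, and
    access checks fail closed. The scopes and durations here are
    invented for illustration, not taken from any real product.
    """
    def __init__(self):
        self._grants: dict[str, datetime] = {}  # scope -> expiry (UTC)

    def grant(self, scope: str, duration_hours: int = 24) -> None:
        """Consent is ongoing, not one-time: every grant expires."""
        expiry = datetime.now(timezone.utc) + timedelta(hours=duration_hours)
        self._grants[scope] = expiry

    def revoke(self, scope: str) -> None:
        """Revocation is immediate and always available."""
        self._grants.pop(scope, None)

    def is_allowed(self, scope: str) -> bool:
        expiry = self._grants.get(scope)
        if expiry is None:
            return False  # fail closed: no grant means no access
        return datetime.now(timezone.utc) < expiry

ledger = NeuralConsentLedger()
ledger.grant("cursor_control")                 # narrow scope, time-boxed
print(ledger.is_allowed("cursor_control"))     # granted scope is allowed
print(ledger.is_allowed("raw_signal_export"))  # ungranted scopes fail closed
ledger.revoke("cursor_control")
print(ledger.is_allowed("cursor_control"))     # revocation takes effect at once
```

The data structure is trivial; the hard part is the UX that sits on top of it: how a user who is fatigued, ventilated, and communicating through the very system being consented to reviews, narrows, and revokes these grants.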
Over at reloadux, we wrote about how AI is reshaping customer expectations in SaaS products. One of the patterns that keeps coming up is that users are increasingly aware of what data is being collected, and increasingly intolerant of systems that obscure it. Apply that trend to neural data and the stakes become very, very high, very fast.
My Take: This Is the Most Important UX Problem Nobody Is Working On
I have been designing products for over eight years. I have worked across healthcare, fintech, enterprise SaaS, and emerging AI tools. And I say this with full awareness of how it sounds: brain-computer interfaces are the most consequential UX problem of the next decade, and the design community is mostly asleep to it.
The engineers are moving fast. The capital is flowing. The clinical trials are happening. And the product design discipline is sitting on the sidelines, waiting to be invited.
We should not wait to be invited.
The skills that make a good product designer (systems thinking, empathy research, edge-case mapping, feedback-loop design, error-state architecture) are exactly the skills that BCI products desperately need. The challenge is that applying them requires learning new domains: neuroscience basics, clinical trial constraints, regulatory requirements, and the actual biology of how neural signals work and degrade.
That is a lot to learn. But so was learning to design for voice interfaces in 2015, or for mobile in 2010, or for AI-native products in 2023. Every major platform shift creates a gap between what the technology can do and what good design practices exist for it. The designers who close that gap early are the ones who define the field.
The BCI gap is open right now. The question is who walks through it.
What do you think? Are there designers already doing serious work in the BCI or neurotechnology space that you know about? And how do you see the role of product design evolving as interfaces move beyond screens? Drop your thoughts in the comments below. I would love to hear how you are thinking about this.
Sources:
1. Applying AI — Neuralink's 2026 Breakthroughs and Market Dynamics — https://applyingai.com/2026/04/transforming-brain-computer-interfaces-neuralinks-2026-breakthroughs-and-market-dynamics/
2. Fox News — Neuralink to start high-volume brain implant production in 2026 — https://www.foxnews.com/health/elon-musk-shares-plan-mass-produce-brain-implants-paralysis-neurological-disease
3. Fox News — Paralyzed man with ALS is third to receive Neuralink implant — https://www.foxnews.com/health/paralyzed-man-als-third-receive-neuralink-implant-can-type-brain
4. Toward Healthcare — Brain Computer Interface Market Report — https://www.towardshealthcare.com/insights/brain-computer-interface-market
5. BCI Intel — State of BCI: 2026 Annual Industry Report — https://bciintel.com/state-of-bci-2026/
6. World Economic Forum — Responsible Development of Brain-Computer Interfaces, January 2026 — https://www.weforum.org/stories/2026/01/how-we-can-achieve-the-responsible-development-of-brain-computer-interfaces/
7. IEEE Spectrum — Neuralink's Blindsight Implant Won't Deliver Natural Sight — https://spectrum.ieee.org/neuralink-blindsight
8. MIT Technology Review — This Brain Implant Gets a Boost from Generative AI — https://www.technologyreview.com/2025/05/07/1116139/this-brain-implant-gets-a-boost-from-generative-ai/