Neuralink just made headlines again. This time, it is not about a patient controlling a cursor with their thoughts. It is about a new surgical robot that can reach any part of the brain. Combined with the company's push toward mass production in 2026, something big is shifting. And almost nobody in the product design community is paying attention to what it actually means.
Brain-computer interfaces are no longer science fiction. In 2026, the global BCI market is valued at $3.33 billion, with projections putting it at $13.86 billion by 2035, growing at a CAGR of 16.77%. Neuralink has moved from single-digit implants to over 20 patients as of February 2026, with a target of several hundred by year end. The BCI industry raised over $1.6 billion in VC and institutional funding in 2025 and 2026 alone. But here is what is bothering me: while the engineering community races to solve the neuroscience problems, nobody is seriously asking the design questions. When the interface IS your brain, every UX principle we have built over the last 30 years starts to fall apart.
"To set expectations correctly, the vision will at first be low resolution, like Atari graphics, but eventually it has the potential to be better than natural vision and enable you to see in infrared, ultraviolet or even radar wavelengths, like Geordi La Forge."
— Elon Musk, on Neuralink's Blindsight implant, posted on X
The BCI Gold Rush Is Real, and It Is Moving Fast
Let me put the scale of what is happening right now in perspective. In May 2026, Neuralink announced its next-generation surgical robot can now reach any part of the brain, not just the motor cortex. That is a massive technical leap. The robot integrates high-precision micromanipulators, optical coherence tomography guidance, and AI-based fault detection, with six degrees of freedom capable of sub-100-micron accuracy. This is the kind of precision that makes mass production of brain implants not just possible, but practical.
Meanwhile, the broader BCI ecosystem is exploding. Synchron completed its COMMAND early feasibility study with 6 out of 6 patients meeting the primary safety endpoint. Merge Labs raised a $252 million seed round in January 2026. Science Corporation closed a $230 million Series C in March 2026. Total disclosed BCI investment since 2018 now exceeds $3 billion. These are not research grants. This is venture capital betting on a near-term commercial market.
And Neuralink's Blindsight, the device that bypasses damaged eyes entirely and writes directly to the visual cortex, is now in human trials. The FDA granted it Breakthrough Device Designation in September 2024. The new S2 chip inside Blindsight has 1,680 stimulation channels, a 67% increase over the previous generation. The technology is maturing faster than anyone expected three years ago.
I follow this space closely because I have spent eight years designing digital products. What I keep seeing is a familiar pattern: engineers sprint ahead, business cases get funded, and design gets bolted on at the end when someone complains the product is confusing to use. That pattern is catastrophic in consumer software. In a device implanted in your brain, it becomes a qualitatively different category of problem.
The "Atari Graphics" Problem Is Actually a UX Problem
Musk's quote about "Atari graphics" is technically accurate. Early Blindsight users will experience phosphenes: tiny flashes of light that the brain interprets as rudimentary images. Think blocky, pixelated, low-contrast blobs of perception. Now think about this from a design perspective.
How do you set user expectations for that experience? How do you onboard someone to a new sensory channel? How do you help a person who has been fully blind for 20 years mentally calibrate what they are "seeing" through a 1,680-channel brain stimulation device? These are interaction design problems that no standard UX toolbox knows how to solve, because nothing in our collective design knowledge prepared us for them.
In traditional UX, we solve unclear interfaces by iterating on visuals. We do usability testing. We watch where people click. We adjust typography and contrast ratios. None of those methods map to an experience where the "display" is the user's visual cortex. There is no A/B test for phosphene patterns. There is no user interview that can capture why one stimulation pattern feels more intuitive than another to someone who has never experienced vision before.
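To make the resolution budget concrete, here is a toy sketch in Python. It assumes, purely for illustration, that Blindsight's 1,680 channels are arranged as a notional 40 × 42 grid and that each channel produces one binary phosphene; the real electrode-to-percept mapping is nonlinear, noisy, and different in every patient.

```python
import numpy as np

# Hypothetical illustration: treat Blindsight's 1,680 channels as a
# notional 40 x 42 binary "display" (1,680 = 40 * 42). The real
# electrode-to-phosphene mapping is far messier and patient-specific.
GRID_H, GRID_W = 40, 42

def to_phosphene_frame(image: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Downsample a grayscale image (values in [0, 1]) to a binary
    phosphene frame by block-averaging, then thresholding."""
    h, w = image.shape
    # Crop so the image divides evenly into GRID_H x GRID_W blocks.
    h, w = (h // GRID_H) * GRID_H, (w // GRID_W) * GRID_W
    blocks = image[:h, :w].reshape(GRID_H, h // GRID_H, GRID_W, w // GRID_W)
    brightness = blocks.mean(axis=(1, 3))        # one value per channel
    return (brightness > threshold).astype(int)  # phosphene on or off

# A bright diagonal bar on a dark background, 400 x 420 pixels.
demo = np.zeros((400, 420))
for i in range(400):
    demo[i, max(0, i - 20):i + 20] = 1.0

frame = to_phosphene_frame(demo)
print(frame.shape)                            # (40, 42): the entire "screen"
print(frame.sum(), "of", frame.size, "phosphenes lit")
```

Even this toy version makes the design problem visible: every affordance, every notification, every "button" has to survive a 40 × 42 binary budget.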
I wrote about the need for AI-native design thinking on Medium earlier this year, specifically about how adaptive interfaces need to stop assuming a static, predictable user. BCIs take that challenge and multiply it by a factor of a thousand. Every user's brain is different. Every neural pathway is shaped by that specific person's life experience. The "interface" is not just different on every device. It is different inside every skull.
Five UX Assumptions That Break When the Interface Is Your Brain
After thinking through this from a product design perspective, here are the five core UX principles that completely fall apart in the BCI context:
- Feedback loops require observable output: In every interface we design, users take an action and see a result. Click a button, something happens. In a BCI, the "action" is a thought, and the feedback might be a sensation, an electrical signal, or a change in brain state. Designing clear feedback when the input and output channels are both internal is a problem we have no good frameworks for yet (one possible pattern is sketched after this list).
- Error states need to be visible: When a user enters a wrong password, we show a red border and an error message. When a brain implant miscalibrates, what happens? The user might experience confusion, disorientation, or simply get no response at all. There is no 404 page for a brain implant. Error communication has to be rethought from the ground up.
- Cognitive load is not a design variable, it is the substrate: All of UX design tries to reduce cognitive load. But when the interface IS the cognitive system, "reducing cognitive load" means something completely different. You cannot add whitespace to a thought. You cannot simplify a neural stimulation pattern the way you simplify a form.
- Personalization requires a stable user model: Every adaptive interface we design assumes a relatively stable user with consistent preferences over time. But brains change. They rewire through experience, injury, and aging. A BCI calibration that works today may not work in six months as neural pathways shift. Designing for continuous biological adaptation is a challenge the SaaS world has barely scratched the surface of.
- Consent and control are qualitatively different: In software, consent is a checkbox. You can always close the app. In a brain implant, the device is always present, always active to some degree. The design of consent mechanisms, opt-out flows, and user control has to grapple with a level of intimacy that goes well beyond cookie banners and notification settings.
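To ground the first two points above, here is a minimal sketch of one possible pattern: a confidence-gated intent loop that never acts silently. Every name and threshold here is invented for illustration; the principle it encodes is that when input and output are both internal, the system must produce explicit feedback for every decision, including "I did nothing."

```python
import random
from dataclasses import dataclass

# Hypothetical sketch: a decoder emits an intent plus a confidence score,
# and the loop turns every decision, including inaction, into explicit
# feedback. Thresholds and intent names are invented for illustration.

@dataclass
class Decoded:
    intent: str        # e.g. "select", "scroll", "none"
    confidence: float  # 0.0 to 1.0

ACT_THRESHOLD = 0.85       # act only when the decoder is confident
FEEDBACK_THRESHOLD = 0.50  # below this, report "no signal" rather than guess

def decode_stub() -> Decoded:
    """Stand-in for a real neural decoder."""
    return Decoded(random.choice(["select", "scroll", "none"]), random.random())

def step(execute, feedback):
    d = decode_stub()
    if d.intent != "none" and d.confidence >= ACT_THRESHOLD:
        execute(d.intent)
        feedback(f"did:{d.intent}")               # confirm the action happened
    elif d.confidence >= FEEDBACK_THRESHOLD:
        feedback(f"heard-but-unsure:{d.intent}")  # a visible "almost" state
    else:
        feedback("no-op")                         # the BCI equivalent of a 404

step(execute=lambda i: print("EXECUTE", i),
     feedback=lambda msg: print("FEEDBACK", msg))
```

Which channel carries that feedback (haptic, audio, a stimulation pulse) is a separate design decision. The invariant is that silence is never ambiguous.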
What Product Designers Should Be Thinking About Right Now
You might be thinking: "I design SaaS dashboards. BCIs are not my problem." I would push back on that. Hard.
The principles that BCI designers are being forced to figure out right now will ripple through all of product design. Ambient interfaces. Invisible inputs. Adaptive systems that personalize at the individual neural level. These ideas are already entering mainstream UX through eye-tracking, voice interfaces, and gesture controls. BCIs are just the extreme end of the same spectrum.
At reloadux, we have been writing about how enterprise SaaS is moving from static, form-based UIs to AI-driven interfaces that adapt to user behavior in real time. BCIs are the logical endpoint of that trajectory. The interface dissolves. The experience becomes invisible. And when that happens, the quality of the underlying design philosophy matters more, not less, because there is no visible UI to rescue a broken interaction model.
Specifically, product designers should start building mental models around three emerging principles:
First, calibration as onboarding. In BCIs, every user needs a personalized calibration phase before the device works properly. This insight applies to AI-native products broadly. The first interaction is not about demonstrating features. It is about learning the user. The sooner product teams accept that, the better their adaptive experiences will be.
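As a sketch of what "the first interaction learns the user" could look like in code: a tiny per-user decoder fit from a short labeled calibration session, which would also be the thing you re-run when the brain drifts. The feature vectors and the least-squares decoder are stand-ins I invented for illustration; real BCI calibration pipelines are far more involved.

```python
import numpy as np

# Hypothetical onboarding-as-calibration sketch: the "first run" collects
# labeled neural samples per intent, then fits a per-user linear decoder.
rng = np.random.default_rng(0)
INTENTS = ["left", "right", "select"]

def calibration_session(n_per_intent=50, n_features=16):
    """Stand-in for a guided session: 'think left... now think right...'.
    Returns feature vectors X and one-hot intent labels Y."""
    X, Y = [], []
    for k, intent in enumerate(INTENTS):
        center = rng.normal(size=n_features)  # this user's neural signature
        X.append(center + 0.3 * rng.normal(size=(n_per_intent, n_features)))
        y = np.zeros((n_per_intent, len(INTENTS)))
        y[:, k] = 1.0
        Y.append(y)
    return np.vstack(X), np.vstack(Y)

X, Y = calibration_session()
W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # per-user decoder weights

def decode(sample):
    return INTENTS[int(np.argmax(sample @ W))]

print(decode(X[0]))  # should recover "left" for this user's data
```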
Second, graceful degradation at the biological level. Traditional software degrades gracefully when a feature fails: you show a fallback state. BCIs have to degrade gracefully when signal quality drops, when the brain rewires, or when the device needs recalibration. Designing for biological variability is the next frontier for resilient product systems.
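A minimal sketch of what tiered degradation might look like, with quality metrics and mode names I made up for illustration: the point is that falling signal quality maps to explicit, progressively simpler interaction modes rather than silent failure.

```python
# Hypothetical degradation ladder: map a measured signal-quality score
# (0.0 to 1.0, metric invented here) to an explicit interaction mode,
# so the system simplifies visibly instead of failing silently.
MODES = [
    (0.8, "full",   "all intents, fine-grained control"),
    (0.5, "coarse", "large targets only, confirmation required"),
    (0.2, "binary", "yes/no selection only"),
    (0.0, "safe",   "device paused, prompt recalibration"),
]

def select_mode(signal_quality: float):
    for floor, name, description in MODES:
        if signal_quality >= floor:
            return name, description
    return MODES[-1][1], MODES[-1][2]

for q in (0.93, 0.61, 0.34, 0.05):
    name, desc = select_mode(q)
    print(f"quality={q:.2f} -> mode={name}: {desc}")
```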
Third, trust as a continuous design output, not a one-time UI pattern. BCIs cannot afford to break user trust once and then win it back through a redesign. The intimacy of the interface means trust has to be earned continuously, through consistent behavior, transparent communication about what the device is doing, and giving users genuine control at every step.
The Design Playbook We Do Not Have Yet
Here is the hard truth: the design community is woefully unprepared for what is coming. Non-invasive BCI is expected to account for 58% of BCI market revenue in 2026, driven by consumer applications in gaming, focus enhancement, and AR/VR integration. These are not medical devices in sterile hospital settings. These are consumer products that designers will be building at scale, for regular people, in the very near future.
And right now, there is no standardized UX framework for brain interfaces. There is no equivalent of Material Design, or Nielsen's heuristics, or WCAG guidelines for neural input. The academic community has started exploring it, but the gap between research papers and shipping products is enormous, and it is closing fast.
I do not think any single designer or team will solve this in the near term. But the designers who start thinking seriously about these questions now will have a significant advantage when the consumer BCI market breaks open, which, based on current funding and regulatory trajectories, looks like it could happen before 2030.
The question I keep coming back to: what does "intuitive" mean when there is no interface? What does "friction" mean when the input channel is a thought? What does "user error" mean when the user and the interface are the same thing?
I do not have clean answers yet. But I think asking these questions now, before the market matures and design debt starts accumulating inside people's skulls, is exactly the work the product design community should be doing.
What do you think? Is the design community ready for brain-computer interfaces, or are we about to repeat the same mistakes we made with mobile, voice, and AI? Drop your thoughts in the comments below. I would genuinely love to hear from people working in medtech, neuroscience, or consumer hardware, because this conversation needs more practitioners in it.
Sources:
1. Neuralink Mass Production Plans 2026 — technology.org/2026/01/02/musks-neuralink-gears-up-for-mass-brain-implant-manufacturing-in-2026
2. State of BCI: 2026 Annual Industry Report — bciintel.com/state-of-bci-2026
3. Brain Computer Interface Market Size 2035 — precedenceresearch.com/brain-computer-interface-market
4. Neuralink's next robot could reach any part of the brain (May 2026) — yourstory.com/2026/05/neuralink-surgical-robot-brain-interface-2026
5. Brain-Computer Interfaces: Interactions at the Speed of Thought — uxmag.com/articles/brain-computer-interfaces-interactions-at-the-speed-of-thought
6. Neuralink Blindsight — rathbiotaclan.com/blindsight-how-neuralink-plans-to-restore-vision-by-writing-directly-to-the-brain
7. The Missing Interface: Designing Trust into a Robotic Future — roboticstomorrow.com/article/2025/08/the-missing-interface-designing-trust-into-a-robotic-future/25322
8. Brain-Computer Interface Statistics 2026 — media.market.us/brain-computer-interface-statistics