Smart Glasses Just Beat the Headset Dream: Why Meta Won 82% While Apple Vision Pro Stalled
Source: Unsplash
The most expensive headset in the world ships fewer than 50,000 units in a quarter. The cheapest pair of smart glasses ships seven million in a year. The market just told us what shape the future of computing actually is, and most product teams are still building for the wrong one.
Meta now owns 82% of the global smart glasses market according to Counterpoint Research's H2 2025 data, while Apple Vision Pro sales reportedly collapsed to roughly 45,000 units in Q4 2025 with the company cutting marketing spend by an estimated 95%. This is not a story about price points or chip generations. It is a story about what designers and product teams keep getting wrong when they confuse capability with usability. The quiet winner of the spatial computing war is the form factor that disappears, and the lessons here go far beyond hardware. They reshape how AI features should ship inside every SaaS product in 2026 and beyond.
"It was clearly only a matter of time until all those flip phones became smartphones."
Mark Zuckerberg, Meta Connect keynote, September 2025
The Numbers Nobody Wants to Talk About
Let's start with what the data actually says, because the framing matters. Counterpoint Research published its H2 2025 smart glasses tracker in early 2026 and the numbers are not subtle. Global smart glasses shipments grew 139% year over year in the second half of 2025. Meta's share inside that booming market climbed to 82%. Inside the smart glasses category itself, AI-enabled models accounted for 88% of all units shipped, while basic audio-only smart glasses fell to just 12%. The category has not just grown. It has consolidated around AI, and it has consolidated around one company.
Now compare that to the headset side of the spatial computing story. The same period saw the broader AR/VR headset market shrink 14% year over year. Hypebeast and a handful of supply chain reports tracked Vision Pro through Q4 2025 and the picture is brutal. Luxshare reportedly halted Vision Pro production lines in early 2025. Apple has not expanded Vision Pro to any new countries since the original launch. The device still sits at $3,499. Most of its roughly 3,000 apps are enterprise tools or utilities. There has not been a price drop. There has not been a hardware refresh that changed the wear story.
Meta sold somewhere north of seven million Ray-Ban Meta smart glasses in 2025, and Zuckerberg has publicly said sales tripled that year, calling them "some of the fastest-growing consumer electronics in history." Meta is now reportedly aiming for 20 million units of annual production capacity by the end of 2026. The Display model launched in late 2025 at $799 bundled with the Neural Band wristband. None of this looks like a category that is losing.
This Is Not a Price Story. It Is a UX Story.
The reflex in the industry is to say Apple Vision Pro lost because $3,499 was too expensive. That reading misses the actual lesson. Meta's Display model is not cheap. At $799 it is the most expensive consumer pair of glasses many people will ever own, and it is still selling out in the markets where Meta has stocked it. The base Ray-Ban Meta starts at around $299 to $379, which is still smart glasses money, not earbuds money. Price is real, but price did not decide this fight.
What decided it was wearability, social acceptability, friction, and the total cost of using the device through a normal day. Vision Pro asks the user to put on a 600+ gram visor, isolate themselves from everyone around them, run a tethered battery cable down to a pocket pack, and use a passthrough version of reality instead of reality itself. Smart glasses ask the user to wear glasses. That is the entire ask. The device weighs under 50 grams, looks like a normal pair of frames, and lets the user keep eye contact with humans and screens.
I wrote about this distinction in a Medium piece earlier this year on building AI-native experiences. The argument I keep coming back to is that the dominant interaction model for AI in the next decade will be ambient, not immersive. Vision Pro is the immersive bet. Smart glasses are the ambient bet. The market is voting on ambient by an enormous margin and most product teams have not internalized what that actually changes about their roadmaps.
What the Vision Pro Got Wrong, From a Designer's Bench
I have spent the last eight years designing enterprise SaaS products, AI-native flows, and conversational interfaces, and I keep running into the same anti-patterns when I look at visionOS. These are not rumors. They are visible in the developer documentation, in the existing app catalog, and in the way the device is positioned. Here is the short version.
- Form factor over function: A device that hangs the weight of a 13-inch laptop screen on the front of your face cannot ever become "your main computer." That is a hardware reality that no software update can fix.
- Isolation by default: You cannot sit in a coffee shop wearing a Vision Pro and have a casual conversation. Passthrough is not presence. Smart glasses preserve eye contact, social context, and posture.
- Spatial UI built for nobody: Most third-party apps in visionOS are flat 2D windows floated in 3D. That is the worst of both worlds. It does not use the depth of the device and it does not match the speed of a 2D screen.
- A battery model from 2007: An external tethered battery pack is a friction the wearable category has already moved past. AirPods, Apple Watch, and Ray-Ban Meta all hide their power source. Vision Pro made the wire a feature.
- Developer ecosystem death spiral: Roughly 3,000 apps, mostly enterprise utilities, means there is no consumer pull. No consumer pull means no developer investment. No developer investment means no apps. The cycle has been visible since launch.
- No clear job to be done: Vision Pro is sold as a workspace, an entertainment device, a productivity tool, and a developer platform. It is not best in class at any of those four. Smart glasses have one clear job: be there, be invisible, be useful in glances.
The Neural Band Is the Real Story
The thing that gets buried in most of the coverage is the Meta Neural Band that ships with the Ray-Ban Display. It is an EMG wristband that reads the electrical signals from your forearm muscles when you move your fingers. Meta reports gesture recognition accuracy approaching 100% in normal use. You can keep your hand at your side, in your pocket, on your leg, and still input text and navigate menus. At CES 2026, Meta added handwriting input through the Neural Band, where you can write in the air on any surface and the glasses transcribe it.
This is the first new input modality with the right shape since multitouch arrived on the iPhone in 2007. It does not require the user to lift their arms. It does not require visual feedback from a screen. It does not draw attention from people around the user. It is invisible to bystanders. From a product design perspective, that combination of properties is what enables ubiquity. You can use it during a meeting. You can use it while walking. You can use it without anyone noticing.
Compare that to the Vision Pro's eye tracking and pinch system. Pinch works in private. It looks ridiculous in public. The same is true of the air typing keyboard. That is not a small detail. The willingness of the user to perform the gesture in front of other humans is a dominant constraint on whether a device gets used outside the home, and outside the home is where computing happens for most people now.
The next computing platform will not be the one with the highest resolution or the most spec horsepower. It will be the one that lets people forget they are using a computer at all. Smart glasses paired with an EMG wristband are the closest any company has shipped to that target.
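The Neural Band has no public SDK, so there is no real API to show here. But the design property described above, input that fails quietly instead of demanding the user's attention, can be sketched in a few lines. Everything below is hypothetical: the gesture names, the confidence threshold, and the router itself are invented for illustration, not Meta's actual interface.

```python
# Hypothetical sketch of an ambient input layer: discrete EMG gesture
# events routed to actions. All names are invented for illustration;
# this is not Meta's SDK.

from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class GestureRouter:
    # Maps gesture names (e.g. "pinch", "swipe_left") to handlers.
    handlers: dict = field(default_factory=dict)

    def on(self, gesture: str, handler: Callable[[], str]) -> None:
        self.handlers[gesture] = handler

    def dispatch(self, gesture: str, confidence: float) -> Optional[str]:
        # Ambient input should fail silent: low-confidence events are
        # dropped rather than surfaced as errors the user must dismiss.
        if confidence < 0.9 or gesture not in self.handlers:
            return None
        return self.handlers[gesture]()

router = GestureRouter()
router.on("pinch", lambda: "select")
router.on("swipe_left", lambda: "dismiss notification")

print(router.dispatch("pinch", 0.97))       # high confidence: acts
print(router.dispatch("swipe_left", 0.55))  # low confidence: dropped silently
```

The design choice worth noticing is the silent drop. On a screen, an unrecognized input can show an error state; on a device you wear in a meeting, the correct failure mode is doing nothing at all.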
The Three-Way Race That Is About to Start
Counterpoint Research held a public seminar on January 28, 2026, where senior research analyst Flora Tang and the team mapped out the next 12 months. Their take: the smart glasses market is about to shift from a single dominant player to a three-way race between Meta, Apple, and the Android XR alliance led by Google and Samsung. That shift starts in late 2026 and accelerates through 2027.
Apple's first entry, codenamed N50, is reportedly being tested in four different frame designs according to TechCrunch reporting from April 12, 2026. The first model is expected to ship without a display. It will be an AI-first audio device with cameras, microphones, and speakers, controlled by Siri and a touch surface. That is a more conservative bet than the Display model from Meta. Apple is choosing to ship the wearable first and add the visual layer later.
Meta is using its lead. The company has reportedly commercialized the EMG wristband technology beyond glasses and is exploring it as a general input layer for phones, AR, and assistive devices. That is an offensive platform play. If the wristband becomes the default input for ambient computing, Meta owns the equivalent of the touchscreen patent stack from the iPhone era.
Zuckerberg said on Meta's July 2025 earnings call that people without smart glasses may eventually be at a "significant cognitive disadvantage" compared to those who do wear them. That is marketing language, but the underlying claim has weight. If you have an always-available AI assistant that sees what you see and hears what you hear, the productivity and learning gap between users and non-users widens fast.
What This Means If You Are Designing Products in 2026
For product teams shipping anything in the next 18 months, the smart glasses takeover changes the surface area of the design problem. The interaction does not happen on a screen anymore. It happens in audio, in glanceable HUD information that lives in the corner of the user's vision, in voice queries, in subtle finger movements that are invisible to bystanders. If you are building a SaaS product, you are going to have to ask whether your data and workflows can be triggered, queried, and responded to without a keyboard or screen.
Most enterprise products are nowhere near ready for this. I wrote a piece for the reloadux blog earlier this year called "Is Your Product Ready for AI? A Practical AI Readiness Framework." The short version is that most teams are still arguing about where to put a chatbot button on a dashboard. They have not started thinking about the version of their product that has no dashboard at all. The version that lives inside an AI assistant. The version where the user never sees your UI, just hears the answer in their ear or sees three words floating in their glasses.
Here is the practical implication. The companies that build glasses-first interaction layers in 2026 will have a brief window of differentiation. Once the three-way race kicks off in late 2026, the design pattern library for ambient computing will lock in fast. The teams that show up with hands-on experience by then will be the teams that get acquired or funded. The teams still optimizing button radii on legacy SaaS dashboards will be reading about it.
The Career Note for Designers
If you are a UX or product designer reading this, the implication is direct. If your portfolio is mostly screen layouts, component libraries, and dashboard redesigns, you are working on the dying half of the platform. The career upside is in interaction design for ambient computing, voice UX, gesture systems, multimodal flows, and HUD information design that respects the user's attention budget. That last one is going to be a real specialization.
Designing for a 2D rectangle is a solved problem. Designing for a HUD that floats in the right corner of someone's vision while they are walking down a street, talking to another human, and listening to music is not solved. It is barely started. The people who figure out the patterns there will define the next decade of product work the way the iPhone HIG team defined the last one.
I keep telling designers I mentor that the next portfolio piece they should build is a glasses-first version of an existing SaaS product. Pick a product you use every week. Strip out the screen. Re-design the entire experience as voice, audio cues, and three lines of text in a HUD. That exercise alone will teach you more about ambient interaction design than any course on the market in 2026.
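To make the exercise concrete, here is a minimal sketch of its core move: taking a data-rich SaaS answer and compressing it into a screenless response, one spoken sentence plus at most three short HUD lines. The function, field names, and character budget are all hypothetical, chosen only to illustrate the attention-budget constraint.

```python
# Illustrative sketch of the glasses-first exercise: compress a SaaS
# metric into one spoken sentence plus a three-line HUD payload.
# All names and limits here are hypothetical.

from dataclasses import dataclass

HUD_MAX_LINES = 3
HUD_MAX_CHARS = 24  # a HUD line competes with the real world for attention

@dataclass
class AmbientResponse:
    speech: str       # what the assistant says in the user's ear
    hud_lines: list   # what floats in the corner of their vision

def to_ambient(metric: str, value: float, delta_pct: float) -> AmbientResponse:
    direction = "up" if delta_pct >= 0 else "down"
    speech = (f"{metric} is {value:,.0f}, {direction} "
              f"{abs(delta_pct):.0f} percent week over week.")
    lines = [metric, f"{value:,.0f}", f"{delta_pct:+.0f}% w/w"]
    # Enforce the attention budget: truncate, never scroll.
    lines = [line[:HUD_MAX_CHARS] for line in lines[:HUD_MAX_LINES]]
    return AmbientResponse(speech=speech, hud_lines=lines)

resp = to_ambient("Active users", 48210, 6.4)
print(resp.speech)     # one sentence for audio
print(resp.hud_lines)  # ['Active users', '48,210', '+6% w/w']
```

The hard part of the exercise is not the code; it is deciding which three lines survive when the dashboard disappears. That editorial act is the new design skill.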
Final Take
The Vision Pro is not finished, and Apple has too much capital and patience to walk away. But the strategic narrative around spatial computing has flipped completely in the last 18 months. The headset bet is now the slow, expensive, niche bet. The smart glasses bet is the mainstream computing bet. Meta's 82% share is a result, not a forecast. The interesting question is which company gets to 82% of the next platform: ambient AI worn on the face and the wrist, with no screen between the user and the world.
For Info Planet readers in product, design, and engineering roles, the asks are simple. Stop assuming the screen is the canvas. Start prototyping for HUDs and audio. Read the Counterpoint reports when they drop. Watch what Apple ships in late 2026. And if you have a product team, run the glasses-first thought experiment on your own roadmap. The teams that take this seriously now will be the ones writing the playbook the rest of the industry copies in 2027.
What do you think? Are you betting on smart glasses or do you still think the headset category has another act left? Drop your thoughts in the comments below. I would love to hear how you are seeing this play out in your own work.
Sources:
1. Counterpoint Research, "Global Smart Glasses Shipments Grew 139% YoY in H2 2025; Meta Expanded Market Share to 82%" via counterpointresearch.com
2. Hypebeast, "Apple Vision Pro Faces Cuts as Spatial Bet Stalls" via hypebeast.com
3. TechCrunch, "Apple reportedly testing four designs for upcoming smart glasses" (April 12, 2026) via techcrunch.com
4. TechCrunch, "Mark Zuckerberg says a future without smart glasses is hard to imagine" (January 28, 2026) via techcrunch.com
5. Meta Quest Blog, "CES 2026: Meta Ray-Ban Display Teleprompter, Neural Handwriting" via meta.com
6. UploadVR, "Meta Ray-Ban Display Review: First Generation Heads-Up Mobile Computing" via uploadvr.com
7. Engadget, "Meta's EMG wristband is moving beyond its AR glasses" via engadget.com
8. Digitimes, "Smart glasses market transitions from Meta dominance to three-way showdown in 2026" via digitimes.com
9. CNN Business, "Meta Connect: Mark Zuckerberg unveils newest AI-powered smart glasses" via cnn.com
10. Ahmad Ullah on Medium, "How to Build AI-Native Experiences: 14 Mindset Shifts for Product Teams"
11. Ahmad Ullah on reloadux, "Is Your Product Ready for AI? A Practical AI Readiness Framework"