At Google’s Pier 57 offices in New York overlooking the Hudson River earlier this month, I had the future in my hands — and on my face. I wore wireless glasses with a display in one eye that could project Google Maps onto the floor in front of me, show me Uber updates, and automatically recognize and translate languages spoken aloud. I could understand a conversation in Chinese.
I tried another pair of glasses, connected by cable to a phone-like puck. This pair could run apps in front of me, just like a mixed-reality VR headset. I could connect with a PC, click on floating cubes with my hands and play 3D games. It was like a Vision Pro I could carry in my jacket pocket.
That future is upon us. You’ll be able to try out those glasses for yourself in 2026.
But those two very different stylings — one everyday and subtle, one more like a tiny AR headset — are just a glimmer of what’s coming.
My desk is covered with smart glasses. A pair of large black frames that show me a color display in one eye and that have a neural wristband I can use to relay commands. A regular-looking set of Ray-Bans that play music and take photos.
Then there’s the pair of black glasses that have lenses I can snap in, with green monochrome displays and ChatGPT integrated. And the thin glasses that have displays and a companion ring, but no speakers. And the glasses built to assist my hearing.
To watch movies or do work, sometimes I plug a completely different set of glasses that can’t work wirelessly at all into my phone or laptop with a USB cable.
Smart glasses are the biggest new product trend as we cross the halfway mark of the 2020s. Glasses with smart features may conjure up visions of Tony Stark’s eyewear, or those world-scanning glasses in the Marksman movies, and that’s exactly what most big tech companies are aiming for.
“What we talked about originally, when we brought up the vision of this platform, was the old Iron Man movies where Tony Stark has a Jarvis that’s helping him,” Google’s Android head, Sameer Samat, tells me. “That’s not a chatbot interface — that’s an agent that can work with you and solve a task in the space that you’re in. And I think that’s a super exciting vision.”
But it’s taken a long time to get here, and the vision is still coming into focus. Over a decade ago, Google Glass sparked debates about social acceptance, public privacy and “Glassholes.” In a review back in 2013, I wrote: “As a hands-free accessory, it can only do so much, and it doesn’t mirror everything I can see on my phone. In that sense, I currently feel the urge to go back to my phone screen.”
While the tech has advanced a lot in the last 12 years, smart glasses still face that same challenge.
At least now they’re finally becoming functional, less cumbersome and regular-looking enough to live up to their never-ending hype. They’re probably not everything you’d expect, and many have significant tradeoffs and drawbacks. But what they can do is astonishing. And a little bit scary.
The capabilities and features vary widely, but all have one thing in common. They aim to be what you want to wear, ideally every day and all day long. They could well become constant companions like your earbuds, smartwatch, fitness band and wellness ring, and as indispensable as your phone.
Are you ready for that?
So, so many smart glasses
Today’s explosion of smart glasses is reminiscent of the early 2010s, when dozens of different watches and bands were all trying to find a way onto our wrists, from the early Fitbits to the first stabs at smartwatches like the Pebble and Martian. The question back then was whether we’d really end up wearing something like this on our wrists all the time. The answer turned out to be an emphatic yes.
Now the push is to figure out computing on your face. Those in the hunt include a litany of everyday names in the consumer tech and eyewear sectors, from Meta, Google, Samsung, Amazon, Snap and TCL to EssilorLuxottica, Warby Parker and Gentle Monster.
Smart glasses are starting to find their footing. Meta’s Ray-Ban glasses went from a weird, creepy novelty when they arrived in 2021 to something I regularly take on vacations, and even wear half the time. Companies like Nuance Audio make FDA-approved hearing-aid glasses that are already in stores. But the biggest movers haven’t arrived — Google and Samsung are next on deck, and Apple could be announcing glasses next year, too.
What’s still lacking is a concise definition of what “smart glasses” actually are. Even Samsung and Google have subdivided the category into a number of product types, ranging from phone-tethered, sometimes-on visors to completely wireless glasses. Some smart glasses just have audio assistance, like earbuds, and others add cameras. Some have displays, but what they’re used for — and the quality of the display — can vary widely. Some show notifications from your phone. Some browse apps. Some can act as viewfinders for your on-glasses camera. Some can do live captioning.
As companies try to conjure up super-glasses that can do it all, we’re seeing a whole lot of experimentation. It’s something that will no doubt be a big theme at CES in early January. Smart glasses are also being positioned as the ultimate gadget for tapping into AI, the massively disruptive, ever-shifting technology that Big Tech can’t get enough of.
But there are still mundane, yet essential, factors that need to be addressed, like battery life, display quality, size and comfort. Plus how information gets delivered from the phone, questions of accessibility, privacy, function and social acceptance. And how exactly will they fit in with the phones, earbuds and watches we’re already using?
Sorting all that out is what the next 12 months are all about. Let’s dive in.
AI: The glue and the reason
I’ve spent a lot of time walking around my neighborhood wearing a large pair of glasses, looking at things around me and wiggling my fingers to interact with a band on my wrist. Meta’s Ray-Ban Display glasses are showing me answers to my questions. I’m getting pop-up text responses to things it’s taking little pictures of using the frame’s camera. It’s call and response, as Meta AI attempts to help me on the fly.
This is what most of the glasses-making big tech companies are dreaming of — smart glasses as a wearable assistant, equipped with audio, a miniature display and a handful of connected apps and AI tools.
At Meta’s Menlo Park headquarters in September, I spoke with CTO Andrew Bosworth about the company’s big, unfinished push to make true AR glasses that blend 3D imagery and advanced interfaces. A year earlier, I’d tried Orion, Meta’s prototype with full and immersive 3D displays and the ability to track both my eyes and my wrist gestures. But that product isn’t yet ready for the mainstream — or affordable. Instead, we had this year’s Ray-Ban Displays, with a single full-color screen, no 3D and no extra apps, though they do have that wrist-worn neural input band to interpret hand gestures like pinches and swipes.
Bosworth foresees a spectrum of different-featured glasses, not one ultimate model.
“We are seeing strata emerge where there’s going to be lots of different AI glasses, platforms, AI wearables in general. And people are gonna pick the one that fits their life, or their use case,” Bosworth says. “And they’re not always going to wear the Display [glasses], even if they have them. They might sometimes prefer just having the [screen-free] Ray-Ban Metas.”
Meta’s smart glasses have been a success story, especially for partner EssilorLuxottica, which saw a 200% increase in sales of the Ray-Ban Metas in the first half of 2025, with over 2 million pairs of glasses sold. Those numbers are nowhere near the sales of smartphones or even smartwatches, but for the first time, there are signs of growth. (That’s for Meta’s screen-free glasses, which have cameras, audio and AI. The more expensive Displays only just came out in September.)
Meta’s entire lineup of smart glasses has live AI modes that can see what I’m seeing and respond to my voice prompts. It’s a very mixed bag, though. Often, I find the suggestions unhelpful or the observations slightly off — it misidentifies a flower, or it guesses at a location, or hallucinates things that aren’t there.
While a long-term goal for AI is to develop “world models” of what’s around you, using that to help map and understand your environs, right now AI on glasses is just doing quick spot-checks of photos you take or things it hears through microphones. Still, it’s the closest way that AI can come to really observing your life right now, which is why Meta and Google see glasses as the ultimate AI doorway, even as a variety of pins, rings and pendants compete to be the AI gadgets of choice.
The big new catchphrase to keep an eye on is “contextual AI,” which refers to the hoped-for stage when AI will be able to recognize what you’re doing and meet you more than halfway. How? By understanding where you are or what you’re looking at, similar to the way a search engine knows your browsing history and stores cookies to serve up ads, or your social media has an all too eerie sense of what you’ve been up to.
The best preview of how things could work is inside a new VR/mixed-reality headset, the Samsung Galaxy XR, which has been perched on my face for the last few months. It can see everything I’m seeing and use that to fuel Gemini, Google’s AI platform. In the Galaxy XR, I can circle to search something in my space, ask Gemini what’s on my desk or get it to describe a YouTube video.
Samsung and Google are leaning on the bulky and not-very-glasses-like Galaxy XR to explore how they can bring “live AI” to actual glasses soon. Warby Parker and Gentle Monster smart glasses coming next year are going to lean on camera-aware AI just like Meta does, but with a lot more possible hook-ins to Google services and to other apps — like Google Maps and Uber — that live on phones.
“Our goal is to go beyond the world of assistance that’s on demand, and more to a world where it’s proactive, and that requires context. Your personal assistant can’t act in a proactive way without context of you and what’s going on around you,” Google’s Samat says.
Samat sees XR, or extended reality — the mix of virtual reality, augmented reality and your actual real-world environment — as fertile ground for that to take root.
“There’s a less established interface … so it’s a perfect opportunity to define something new, where the personal assistant is an integral part of the experience,” Samat says. “And the system has a perfect view into what you are seeing and hearing, so that connection of context is made easier.”
But the more advanced glasses get, the more sophisticated the ways to control them will need to be.
Wrists: Gestures start here
Meta’s Ray-Ban Displays have an extra that points toward the future of glasses like a big flashing arrow. A neural band on my wrist, looking like an old-school screenless Fitbit, is studded with sensors that measure electrical impulses and turn my finger gestures into controls.
But a dedicated band isn’t the only way to register hand gestures. Smartwatches could be used as glasses controls, too. Samsung and Google — both of which have their own smartwatch lines — see this as an opportunity, and not just for gestures.
Putting more features on smart glasses means creating space for them. On a pair of glasses you’re meant to wear all the time, that’s not easy. Space is severely limited, and weight limits are unforgiving. Ray-Ban Displays are passably fashionable, but even Bosworth admits Meta lucked out that chunky glasses are in right now. They’re big by necessity. Batteries, display projectors, speakers, processors, cameras — they all need to be tucked in there.
Smart glasses can be really good at being headphones, projecting audio from small speakers in the arms, or taking phone calls using an array of directional microphones. But some don’t have audio at all.
Even Realities is choosing to leave features out. The company’s G2 glasses have monochrome displays and microphones, but forgo speakers and cameras. That could be a plus for people who don’t like the idea of a camera on their face. It also helps Even Realities push for smaller sizes and better battery life. I was impressed that the G2 glasses look remarkably thin, even with two small bulges on the ends of the arms.
Nuance Audio, an assistive glasses manufacturer, takes another approach by focusing entirely on medically cleared hearing aid technology, plus long battery life. Size isn’t an issue; they look like a regular pair of glasses.
But the components could shrink further. I got a look at extremely small speakers on custom semiconductor chips made by xMEMS Labs that, in demos, sounded as good as everyday headphones. These smaller chips could shrink the arms of audio-equipped smart glasses, says Mike Housholder, vice president of marketing for xMEMS. They could also offer cooling, since these little solid-state speakers are basically tiny air pumps.
The target weight for smart glasses seems to be between 25 and 50 grams, the range of what non-smart glasses weigh. Nuance Audio felt confident its 36-gram frames fit what a standard pair of glasses should weigh; the G2 glasses from Even Realities weigh the same. xMEMS quoted me a similar weight goal for smart glasses. Meta’s Ray-Ban Displays tip beyond this, at about 70 grams, while the display-free Ray-Ban smart glasses are around 50 grams.
Meanwhile, expectations keep increasing for what a pair of smart glasses should have in the first place.
Something like a true Tony Stark pair of augmented reality glasses would be super bulky — witness Meta’s full-featured, eye-tracking-equipped, 3D display-enabled Orion prototype — but there’s hope the tech will keep shrinking. A pair of TCL RayNeo X3 Pro glasses I just started testing feels heftier and more “techie” than most smart glasses, yet at around 80 grams is also relatively compact. And that’s with dual displays and 3D graphics, plus cameras onboard.
The stubbornest challenge for any smart glasses that want to be stylishly sleek and lightweight? Battery life. Some glasses that are light on features, like Nuance Audio and Even Realities, last a full day on a charge. Meta’s Ray-Bans have gotten to six hours or more, its more computing-intensive Ray-Ban Displays only last a couple of hours, and its live AI modes, which tap into a continuous camera connection, conk out after an hour at most. Snap’s full-AR Spectacles, a developer model for glasses expected next year, currently only last 45 minutes.
There are a lot of compromises at the moment, but a full day of use seems like the necessary goal post.
Assistive dreams and lens challenges
I’ll tell you my biggest worry: A lot of today’s VR headsets and glasses don’t work for everyone who wears prescription eyewear. I have pretty severe myopia and also need progressive lenses for reading. I’m around a -8. It turns out that’s sort of a breaking point for a lot of current smart eyewear and headsets, whose lenses tend to max out near -7.
VR headsets have started offering a wider range of prescription inserts, but smart glasses are another story. Meta’s Ray-Bans don’t officially support prescriptions beyond +6/-6, although I’ve fitted a higher-index set of lenses into mine. The more advanced Ray-Ban Displays only support a range of +4/-4, largely because the new waveguide technology can’t accommodate it yet.
But there are signs of hope. Even Realities supports a much wider range of prescriptions up to -12/+12, and so does Nuance Audio. Other smart glasses manufacturers are leaning on inserts. I use pop-in lenses on the Xreal and Viture display glasses and TCL RayNeo X3 Pro glasses, and magnetic clip-on lenses on Rokid glasses. The result is sort of weird, but at least functional.
I’m hopeful more prescription support is around the corner.
Schott’s Sprengard tells me it’s entirely feasible to make higher-index lenses with more advanced waveguides like Meta is using. “The technical complexity to solve eye correction is rather straightforward compared to the challenges to making [our] waveguide.”
There are basic safety issues, too, especially when you’re in motion, whether on foot, on a bike or in a car. Smart glasses with displays often throw images in front of your eyes at random times — potentially dangerous distractions. Most let you turn off the displays or switch to a driving mode, but those safeguards aren’t on by default.
Also, on your phone, you can choose which AI apps to install or whether to install them at all. But with smart glasses, you’ll likely be locked into a single, unavoidable AI.
There need to be more options to let people select what AI services to add or remove, and phone controls to better manage how they’re collecting and sharing data. And it all needs to be clearer and better laid out. I currently manage smart glasses via piecemeal phone apps with hidden device settings and confusing relationships to limited phone hook-ins, like Bluetooth or location-sharing toggles. It’s squirrely, even for a seasoned tech reviewer like me.
One big problem is that phone-makers like Apple limit the ways glasses can connect with phones. Google is trying to break down those barriers with Android XR, which Even Realities’ Wang describes as a work in progress.
“All the services we’re providing still need to be run on the [phone] app, so the app always needs to be running in the background,” he says. “If you kill the app, you kill the brain of the glasses.”
If smart glasses are ever going to end up on more faces, it can’t feel this haphazard, this weird to set up and connect. Smartwatches figured it out. Glasses can, too.
“I hope, and think, that as the smart glasses industry evolves, there will be platforms or standards,” Wang says.
Where smart glasses go next
In that demo I did just a week ago, when I put Google and Xreal’s Project Aura on my face, I saw how far glasses could go. A Windows PC monitor floated to the left of me, a YouTube video on the right. I multitasked, running apps side by side, scrolling and clicking with taps of my fingers in the air. Then I loaded up Demeo, a 3D role-playing game for VR, which floated in the room in front of me as I used my hands to pick up pieces and play cards.
Project Aura is a testbed for how glasses could replace VR and mixed-reality headsets, and maybe all our big screens, too. A pair of folding glasses and a phone-sized processor can run everything. Much like Meta’s Project Orion, they’re true augmented reality. While they can’t be worn on your face as everyday glasses all the time, and they don’t work with your phone yet, they’re another step toward that moment.
“Maybe in three to five years, you pull out your phone and then you connect your glasses with it, and you have a brand new kind of experience,” Xreal’s founder and CEO, Chi Xu, says.
That future is making its way toward us. In a kitchen at Snap’s New York headquarters this fall, I got a peek at software dreaming up how AI could start offering live instructions overlaid on our world. I saw step-by-step instructions, drawn and typed in the air over a coffee machine and a refrigerator: in-glasses generative AI assistance in live graphic form.
Bobby Murphy, Snap’s CTO, tells me he envisions blocks of swappable AI tools that could let people create on the fly, making custom mini-apps Snap calls Lenses — something beyond what today’s apps can do.
Snap, which has made smart glasses for years, is aiming for its next-gen consumer pair of AR smart glasses to go on sale next year. CEO Evan Spiegel says these glasses will be something you can wear everywhere, which is great — but the prototype developer glasses I tested still only have a 45-minute battery life.
But one thing’s clear: By the end of 2026, we’re going to see a lot more smart glasses — in the shops where we buy our everyday glasses, on the faces of fashion models and influencers, and praised by people who find them essential as assistive tools. We’ll be trying them out as portable movie theaters, vacation glasses or personal wearable cameras.
Still, as I look at the glasses scattered across my desk, I can’t help remembering the long path of smartwatches, those days of excitement over wearables made by Misfit, Jawbone, Pebble and Basis.
Many of them are gone now.
Will it be the same with smart glasses? Probably so. But the companies that survive will have figured out how to make high-tech eyewear that I’ll really want on my face all the time, that I’ll be able to wear all the time. With my prescription. Without needing constant recharging.
Pebble founder Eric Migicovsky wears Meta’s Ray-Bans as sunglasses — and takes them off when he goes inside. “Meta Ray-Bans are great, but everything else is not even at smartwatches in 2014,” he says.
We’re not there yet. But I think we’re getting awfully close.
Visual Design and Animation | Zain bin Awais
Art Director | Jeffrey Hazelwood
Creative Director | Viva Tung
Camera Operator | Numi Prasarn
Video Editor | JD Christison
Project Manager | Danielle Ramirez
Editor | Corinne Reichert
Director of Content | Jonathan Skillings
