
Smart glasses have been “the next big thing” for about a decade now. Google tried it with Glass back in 2013, got laughed out of coffee shops, and quietly shelved the whole idea. Meta came back around with Ray-Ban smart glasses and actually made them work: casual, wearable, and genuinely useful for hands-free calls and quick AI questions. Now Google is back, and this time it looks like the company has actually figured out what these things are supposed to do.
At MWC 2026, Google showed off its Android XR smart glasses, and the demo that’s been making the rounds is worth paying attention to. A Google exec put on a pair of the glasses, pointed them at a group of people, took a photo, and then used a built-in AI tool called Nano Banana to place the entire group in front of La Sagrada Familia in Barcelona: photorealistically, on the fly, through the glasses. No phone pulled out, no app opened, no editing after the fact. That’s not a filter. That’s a fundamentally different way of thinking about what a camera can do.
For context, Meta’s Ray-Ban glasses have an AI image editing feature too, but it tops out at artistic re-styles: cartoonish looks, oil-painting effects, that kind of thing. Photorealistic editing in real time is something nobody has pulled off in a wearable before. That’s Google’s opening move here, and it’s a strong one.
The glasses come in two versions. The Audio-Only model skips the screen entirely; you ask questions, take photos, and get Gemini’s responses through a speaker, all hands-free. Think of it as AirPods with a camera and a much smarter assistant. The AR Display model adds a heads-up display directly in the lenses, showing turn-by-turn directions, live translation subtitles floating under a conversation, app notifications, and more. Both models need a paired smartphone to run fully, and, importantly for a huge slice of the market, both will work with iPhones, not just Android.
The Gemini integration is baked in everywhere. A touch-sensitive surface on the right temple wakes up Gemini Live. You can look at a landmark, ask Gemini to navigate there, and white directional text appears in the lens along with a Maps-style route visualization when you glance down. It’s the kind of interaction that sounds gimmicky until you see it working, at which point it starts to feel obvious.

The display hardware is genuinely impressive too. The monocular version uses microLED technology that Google developed after acquiring a company called Raxium in 2022, and up close the resolution is sharp enough to look phone-like rather than like the washed-out overlay you’d expect from AR glasses. Google also demoed a binocular version where each lens has its own waveguide display, capable of showing YouTube videos in native 3D with actual depth perception. That’s a long way from Google Glass’s tiny monochrome ticker.
Cross-device integration rounds things out nicely. Take a photo on the display-less model and a notification pops up on your paired smartwatch so you can preview it on a real screen. Third-party apps like Uber can push step-by-step navigation with images directly to the lens. There’s also Project Aura, a separate wired XR glasses concept built with startup XREAL, featuring a 70-degree field of view wide enough to overlay a recipe video while you cook or display repair guides anchored to whatever appliance you’re working on.
On the fashion side, Google is being smarter this time around. Rather than designing something that screams “tech prototype,” they’re partnering with Warby Parker and Gentle Monster, two brands people actually want on their face. Google has reportedly committed up to $75 million toward Warby Parker’s development and commercialization efforts alone, which signals this isn’t a side project. Samsung is also in the mix as a hardware partner and has promised its own Android XR glasses will arrive later this year, likely carrying the same Gemini-powered toolkit.
Google I/O 2026, starting May 19, is shaping up to be the next big moment for Android XR: expect more hardware details, app ecosystem announcements, and possibly pricing.
Smart glasses have failed so many times that it’s easy to be cynical. But the MWC demo showed something different this time: not a device looking for a use case, but a device that knows exactly what it wants to be. If Nano Banana’s photorealistic editing holds up outside a controlled demo room, and if Gemini’s real-time responses stay fast and reliable in everyday conditions, Google might have just built the first smart glasses worth actually buying. Keep an eye on May 19.
