For more than a decade, the promise of smart glasses has hovered somewhere between the plausible and the faintly ridiculous. When Google Glass arrived in 2013, the idea of information floating inside your field of vision felt thrilling and, almost immediately, unsettling. The product never escaped its social awkwardness and was eventually withdrawn, but the underlying idea refused to disappear.
At this year’s Consumer Electronics Show in Las Vegas, that idea resurfaced in a more assured form. At a Meta store, we were given 35 minutes with the new Meta Ray-Ban Display AI glasses, free to explore them without constraint. It was enough time to understand that Meta is no longer chasing spectacle. Instead, it is trying to make smart glasses feel inevitable. A note for readers in India: Meta has not announced a launch roadmap for the Meta Ray-Ban Display AI glasses here, though the Ray-Ban Meta Gen 2, Oakley Meta Vanguard and Oakley Meta HSTN are already on sale.
At first glance, they look like ordinary Ray-Bans. That restraint is central to their appeal. Embedded within the right lens is a colour display that only the wearer can see. It appears when summoned and disappears when dismissed. No one across the table can glimpse it. There is no outward glow, no social signal that you are looking at something other than the world in front of you.
The display itself is crisp and legible. Transitions lenses come as standard, allowing the screen to remain readable both indoors and outdoors. Even under bright sunlight, the interface holds up, reaching up to 5,000 nits of full-colour brightness. Ambient light, long the undoing of smart glasses, is no longer a deal-breaker. That alone marks a significant step forward.
What matters more, though, is what Meta chooses to do with the screen.
When gestures replace touchscreens
The most surprising element of the system is the Meta Neural Band that accompanies it. Worn like a fitness band, it uses electrodes to read electrical signals from the arm. Instead of relying on cameras to track exaggerated hand movements, the band interprets subtle muscle activity — tiny pinches, gentle swipes, a double tap of the fingers.
Adding a display and the Meta Neural Band unlocks a range of hands-free capabilities
Your hand does not need to be raised or even visible. It can rest by your side or behind your back, and the gestures still register. Scrolling, selecting and adjusting volume happen almost without thought. After a short learning curve, the experience begins to feel less like operating a device and more like issuing a command.
Meta has defined a limited set of gestures for now. But the potential is striking. This is not a game controller disguised as jewellery. It is an attempt to remove the physicality from interaction altogether.
The glasses themselves can also be controlled via a touchpad built into the frames, meaning the wristband is not strictly essential. Still, without it, much of the magic fades. The question becomes whether users are willing to wear an additional device solely to make the glasses feel complete.
Unlike earlier Meta Ray-Ban models, these glasses support a growing range of applications. Messaging sits at the centre of the experience. Instagram messages appear clearly in the display, as do WhatsApp and Facebook Messenger chats. Reading a message on glasses feels fundamentally different from reading it on a smartwatch. There is space. There is context. You are not squinting at your wrist.
Photos and videos captured by the glasses can be reviewed instantly, with digital zoom controlled by finger gestures. It is an unexpectedly powerful feature, particularly for first-person photography, allowing you to confirm a shot without reaching for a phone.
Music playback connects to Spotify and Apple Music, turning the glasses into a discreet audio controller. Yet the most compelling feature is live captioning. As someone speaks to you, their words appear on the display in real time. For people with hearing impairments, this could be transformative. For everyone else, it is a glimpse of a future in which assistive technology becomes ambient and universal rather than specialised and separate.
Navigation, too, is handled with restraint. The Maps app on the glasses offers a simplified directional arrow that shifts as you move. When dismissed, it reappears only when necessary. The experience feels less like following instructions and more like being quietly guided.
Some features remain experimental. One beta tool allows users to “write” by tracing letters on their leg with a finger, as though holding an invisible pen. The system attempts to convert the movement into text. It is easy to imagine this becoming useful in meetings or crowded spaces, though it is not yet reliable enough for everyday use.
Limits of a new interface
All of this is tied together by a companion Meta AI app. The presence of a display changes how Meta’s AI feels. Responses are no longer confined to audio or a phone screen.
There are some limitations. The app ecosystem is narrower than that of a smartphone. Apple’s iMessage and Siri are absent. Video calls are restricted to WhatsApp and Facebook Messenger.
The neural wristband introduces another compromise. It requires its own proprietary charger and does not replace a smartwatch or fitness tracker, meaning many users will end up wearing multiple devices.
Prescription support is also limited. This remains a significant barrier to adoption and an area where competitors are already moving faster. Meta will need to address this quickly if it wants these glasses to feel truly mainstream.
In hardware terms, the cameras are competent: 12-megapixel photos, 3K video and 720p slow-motion. What changes the equation is immediacy. Being able to see what you have captured, instantly and privately, makes the cameras feel more useful than their specifications suggest.
It was difficult to capture our own still photos of what the display looked like at the hands-on; this image is an approximation. Picture: Meta
The broader context is impossible to ignore. Nearly every major technology company is trying to work out how artificial intelligence might live on the body. Apple is reportedly developing smart glasses of its own. Amazon is said to be exploring next-generation eyewear. Start-ups continue to proliferate. The race is no longer hypothetical.
This moment feels reminiscent of the early smartwatch era, when devices were awkward, over-featured and unsure of their purpose. Over time, clarity emerged. Meta appears to believe that glasses — not watches — will become the next everyday interface.
What stands out most is how little of this feels gimmicky. Messaging on a smartwatch can feel cramped and performative. Messaging on glasses feels natural. Useful. Almost obvious.
For early adopters, the Meta Ray-Ban Display glasses already offer a convincing glimpse of a future that once seemed perpetually out of reach. The technology is unfinished, but for the first time, it feels believable.
Meta Ray-Ban Display gets it right
Invisible, wearer-only display
Excellent outdoor readability
Neural wristband gesture control
Live captions in real time
Built-in messaging apps
Subtle navigation interface
Hands-free music control
Immediate photo and video review
Seamless AI integration
Non-gimmicky everyday use