
ZDNET Highlights
- The Meta Ray-Ban Display is the company’s most advanced smart glasses.
- The smart glasses pair an in-lens color display with the Neural Band wristband for $799.
- Both the glasses themselves and the demo experience impressed me.
AI has evolved beyond chatbots and into smart glasses, with the goal of putting language models to work in real-world context, from your point of view. The Meta Ray-Ban Display takes that assistance to the next level with an in-lens color display and the Neural Band, letting you handle most of what you'd normally do on your phone right in front of your eyes.
Also: I tried the Meta Ray-Ban Display glasses and they get me excited for the post-smartphone era
The smart glasses are so bold and promising that, since they launched at Meta Connect last month, it has been almost impossible to book the demo required before purchasing a pair. I finally got the chance to try them, and I came away impressed by the form factor, the multimodal AI assistance, and a bonus unreleased feature.
Here’s what I learned from my hands-on demo of the smart glasses — including what to expect and whether they’re worth the $799 price tag and effort of booking a session.
A smartphone for your face?
A true indicator of whether a wearable is worth carrying is whether it can replace a device you already have, like your smartphone. I was surprised that the Meta Ray-Ban Display passed this test, at least from my initial demo.
I was able to use the glasses for most of the tasks I normally reach for my phone for, including playing music, replying to messages, taking photos, browsing social media, and even making video calls. In each case, the experience felt seamless thanks to the combination of the in-lens display and the Neural Band.
Although the glasses look like regular lenses from the outside, there's an in-lens color display in the right lens. In my experience, it produced clear, sharp visuals, even when making video calls or watching videos like Instagram Reels. Unlike smart glasses with bulkier optical modules, which I've tested plenty of, the in-lens display is comfortable to look at, and it's pretty hard for anyone else to tell you're using it.
Also: I tried smart glasses with a built-in display and they beat my Meta Ray-Bans in many ways
The Neural Band, which wraps snugly around your wrist, lets you interact with the display using the subtlest finger movements. For example, a pinch between your index finger and thumb selects, while a pinch between your middle finger and thumb returns you to the home screen.
Although it takes a second to get used to, the system is quite intuitive, especially because the gestures can be so subtle. At one point, I was gesturing with my hand inside my sleeve, and a light tap was still enough to control the glasses.
Combine those vivid visuals with the Neural Band, and you can essentially access everything on your phone without anyone around you knowing. Sitting in a meeting or a class, for example, I could technically reply to a text or scroll through Instagram unnoticed.
To further enhance this assistance, Meta is introducing a handwriting gesture feature that lets you type by mimicking a writing motion with your fingers, essentially drawing words in the air. I got to demo it and was impressed by how accurate the text was, even when I tried both cursive and print.
CNET: Meta's Bosworth hints that neural bands may eventually evolve into a watch
This motion was even easier to get used to than the other gestures, because we're already accustomed to making it when writing with pens and pencils, and since neatness doesn't matter here, you can be as sloppy as you like.
AI assist leveled up
The original Meta Ray-Bans already offered Meta AI assistance from within the smart glasses, letting you get more information about what you were looking at, translate conversations live, or ask any questions you might have.
Even after reviewing them, I was quite satisfied with the AI, which had practical, useful applications. The Display does the same things, but adds a visual layer, and that makes a huge difference.
For example, if you're cooking and want Meta AI to give you a recipe, before you could only ask and hear the answer. Now you can take it a step further: the glasses build visual cards with the content and step-by-step instructions, and you stay hands-free, swiping through them with Neural Band gestures.
Also: Meta gives advertisers new AI personalization tools when using your chat to target content
In one example, I asked Meta AI to identify the flower I was looking at. The assistant not only spoke the answer out loud but also displayed the flower's name and an image of it in my viewfinder.
Later, I asked it to generate an image of a woman carrying the same flower on her head. The AI produced the image instantly, visible directly in the viewfinder. Although it's hard to see an immediate practical use for on-the-spot image generation, it was a compelling demonstration of the technology. A double thumb tap is the gesture that activates Meta AI.
One of the more practical uses of Meta AI is the ability to display real-time captions of the world around you. This feature can be especially useful for people who are hard of hearing, but it also has everyday benefits, like following a conversation in a noisy restaurant or crowded room. In my testing, transcriptions were largely accurate, though, to be fair, the environment was relatively quiet.
Do they pass the test of everyday wear? Perhaps.
Let's talk aesthetics. With personal wearables like smart glasses, it's more important than ever that they look good. I was wary that the Meta Ray-Ban Display would be clunky, since a lot of technology has to fit into a small form factor.
While there's no denying they look less like regular glasses than the standard Meta Ray-Bans, they can still pass as ordinary, if chunky, statement frames. Maybe it's because I already wear black frames daily, or because larger frames simply suit me better, but I found the design surprisingly attractive and wearable. The sand colorway is a subtle neutral tone that softens the look by blending more naturally with your skin.
Ideally, smart glasses should be comfortable enough to wear all day. Meta claims up to six hours of mixed usage per charge, and anyone with prescription lenses will need to keep wearing them even after the battery dies, so comfort matters as much as performance.
Also: 5 reasons why I use local AI on my desktop – instead of ChatGPT, Gemini or the cloud
In my brief time with them, I was pleasantly surprised by how comfortable they felt. The fit and weight were similar to my everyday thick-framed glasses, so if you already wear frames regularly, you're unlikely to experience much discomfort. For reference, the glasses weigh 69 grams, slightly heavier than the Meta Ray-Ban Gen 2's 52 grams.
The Neural Band felt exactly as you'd expect: like a comfortable, flexible band. It wasn't so tight that it became bothersome, though my demo lasted less than an hour. Long-term wear testing will be needed to properly assess comfort for both devices.
Bottom line (for now)
The Meta Ray-Ban Display smart glasses feel like a well-developed product that could genuinely improve users' everyday lives, offering style, comfort, and promising use cases.
However, at the $799 price point, they're not for everyone just yet. Rather, they should appeal to early tech adopters and anyone curious about what the future of AI wearables could hold. For that audience, Meta makes a solid case.

