Google’s Android XR platform recently showcased its capabilities through a live demo at the TED2025 conference, marking a significant step into the realm of mixed reality. Until now, our understanding of Android XR came primarily from a promotional video released by Google last year.
The live demonstration, however, provided tangible insight into how the technology works in real-world scenarios. The presentation was led by Shahram Izadi, with Google’s Nishtha Bhatia demonstrating the smart glasses on stage.
One notable feature highlighted was the glasses’ support for prescription lenses and their ability to connect to a smartphone. Izadi even used the glasses to read his speaker notes, illustrating their practicality in everyday situations.
The demo took a fascinating turn with the introduction of Gemini, Google’s AI assistant. Gemini generated a haiku on the spot, but the standout moment came when Nishtha asked it to identify a book title visible behind her.
The AI recognized it instantly and went on to handle a range of tasks: locating a hotel key card, translating a sign from English into Farsi, and switching seamlessly to Hindi without any change to its settings. Gemini’s visual recognition was also on display, as it identified a vinyl record and played a relevant song.
The presentation also showcased turn-by-turn navigation with a 3D map rendered in the glasses’ display, and it closed with immersive demos on an XR headset, Samsung’s Project Moohan. If these features make it into the final product, Google’s Android XR could pose serious competition to Apple’s Vision Pro.
With Samsung expected to launch its own version of these smart glasses, the outlook for Google’s XR technology is promising. Overall, Gemini’s practical, context-aware behavior suggests that smart glasses may soon become indispensable tools rather than mere technological experiments.