Physical AI Is Making XR, VR, AR Relevant Again

Just posted to my Forbes column about why XR might not be dead after all … it may just be misunderstood.

For years, extended reality has been framed as a metaverse moonshot that didn’t quite land, especially after Meta poured tens of billions into Reality Labs and then pivoted to a “year of efficiency.” But what if all that investment didn’t fail … it just found a different payoff? According to Amy Peck of EndeavorXR, XR’s real long-term win may be robotics. The same spatial mapping, object tracking and environment reconstruction that let you place a virtual couch in your living room are now helping robots navigate warehouses, factories and other real-world environments. In other words, AR is quietly becoming the infrastructure of spatial intelligence.

Meanwhile, companies like Nvidia are using VR-style digital twins to train robots in photorealistic simulations before they ever touch a factory floor. Thousands of accelerated simulation runs per day beat real-world trial and error every time. It may not look like Ready Player One — but it’s essential for scaling physical AI.

And for humans? Smart glasses are inching toward something even bigger: always-on intelligence augmentation. Meta’s Ray-Ban and Oakley collaborations are improving fast, and competitors like Google, Samsung and Apple are all circling lighter, more wearable AR devices. Live translation, summaries, contextual prompts — even teleprompter features — hint at a future where digital context is layered continuously over daily life.

“I look at it as a spectrum of reality,” Peck told me. “We are going to be augmented to a certain degree, or not at all.”

XR may not have become the metaverse we were promised. But as the backbone of robotics and the foundation of always-on human augmentation, it may prove far more consequential than anyone expected.

Read the full post on my Forbes column …

Subscribe to my Substack