
ZDNET Highlights
- Physical AI is the latest trending frontier of technology.
- It leverages real-world data for more autonomous robots.
- Its initial stages may be on your face right now.
The release of ChatGPT three years ago sparked an AI craze. While AI models are becoming more capable, to be as helpful as possible in people's everyday lives, they need access to everyday tasks. That is only possible if they move beyond the chatbot on your laptop screen and into your physical environment.
Enter the industry’s latest buzzword: physical AI. The term was on full display at the Consumer Electronics Show (CES) last week, with almost every company, including Nvidia, promoting a new model or piece of hardware meant to move the sector forward. During the company’s keynote, CEO Jensen Huang compared the significance of physical AI to the release of ChatGPT.
“The ChatGPT moment for physical AI has arrived – when machines begin to understand, reason and act in the real world,” he said.
What is physical AI?
Physical AI can generally be defined as AI implemented in hardware that can sense the world around it and then take or coordinate actions. Popular examples include autonomous vehicles and robots – but robots that use AI to perform tasks have existed for decades. So what’s the difference?
Also: Can Google Save Apple AI? Gemini will power a new, personalized Siri
According to Anshuman Saxena, VP and GM of automated driving and robotics at Qualcomm, the difference lies in the robot’s ability to reason, take action, and interact with the world around it.
“The whole idea of a chain of thoughts, a logic, a brain, that will work in a context and perform some of the same tasks as humans do – that’s the real definition of physical AI,” Saxena said.
For example, a humanoid robot could go one step beyond carrying ingredients or packages as instructed; it could sense its environment and perform the task intuitively.
Also: Nvidia’s Rubin AI could change computing as we know it
However, the examples do not have to be that elaborate. In fact, according to Ziad Asghar, SVP and general manager of XR, Wearables and Personal AI at Qualcomm, you may already own a prime example of physical AI.
“Smart glasses are already the best representation of physical AI,” Asghar said. “They’re a device that’s basically there and able to see what you’re seeing; they’re able to hear what you’re hearing, so they’re in your physical world.”
A symbiotic data relationship
Saxena says that although humanoid robots will be useful in cases where humans do not want to perform a task, either because it is too tedious or too risky, they will not replace humans. This is where AI wearables like smart glasses play an important role, as they can enhance human capabilities.
Also: CES 2026: These 7 smart glasses caught our attention — and you can buy a pair now
But beyond that, AI wearables may actually be able to feed back into other physical AI devices like robots by providing high-quality datasets based on real-life viewpoints and examples.
“Why are LLMs so good? Because there is a lot of data on the internet, a lot of relevant information and whatnot, but the physical data is not there,” Saxena said.
The problem he describes is one that often hinders physical AI development. Because training robots in the real world can be risky – think of autonomous cars on public roads – companies must create synthetic data and simulations to train and test these models. Several companies tackled this issue at CES.
Also: I’m an AI expert, and this note-taking pin is the most reliable hardware I’ve tried at CES
Nvidia released new models that understand the physical world and can be used to create synthetic data and simulations of realistic, real-world scenarios. Qualcomm offers a comprehensive physical AI stack that pairs its new Qualcomm Dragonwing IQ 10 series processor, released at CES, with the tools needed for AI data collection and training.
Creating datasets for this training is often a time-consuming and expensive process. Robots, however, could draw on data from wearables that people already use every day – effectively physical AI data that is true to human experience.
“Think about these sensors, glasses, and so many other things that are out there, if I have the glasses on, and I take action based on ‘Oh, I saw something here,’ there’s so much information generated instantly that can even help robots today, creating a whole new set of information,” Saxena said.
Also: I tried Gemini’s ‘scheduled actions’ to automate my AI – the potential is huge (but Google has to work on it)
Given the privacy concerns that arise from using your everyday data to train robots, Saxena emphasized that wearable data should always be held to the highest standard of privacy. With that protection in place, the data – which must be anonymized by the wearable company beforehand – can be very helpful in training robots. Those robots can then generate more data of their own, resulting in a healthier ecosystem.
“This sharing of context, this sharing of AI between that robot and the wearable AI devices you have around you, I think, is the benefit that you’re going to be able to accrue,” Asghar said.

