"Robot's Physical Turing Test": NVIDIA's Jim Fan Reveals Embodied Scaling Law in 17-Minute Talk
NVIDIA's Jim Fan unveils the Physical Turing Test & embodied AI scaling laws in robotics. Watch the 17-min talk insights.
"RoboPub" Publication: 20% Discount Offer Link.
Jim Fan, Head of NVIDIA's Robotics Division, Distinguished Scientist, Co-Leader of the GEAR Lab, and OpenAI's first intern, recently gave a 17-minute talk at Sequoia Capital’s AI Ascent event. The talk laid out his “first principles for solving general robotics,” covering data strategies for training robot AI, the scaling law for embodied AI, and a vision of a future built on physical APIs.
A key highlight was the “Physical Turing Test”: given a real physical environment and an instruction, either a human or a robot acts on the environment to carry out that instruction. The test is whether an observer, looking only at the outcome, can tell whether the task was performed by a human or by a robot.
Clearly, Jim Fan and NVIDIA are working toward enabling robots and AI to pass this Physical Turing Test. In this article, we summarize the main points of Jim Fan’s speech and include a poll at the end to ask when you think the Physical Turing Test will be cracked.
A few days ago, a blog post caught my attention. It said: “We passed the Turing Test, and nobody noticed.” The Turing Test was once sacred, the holy grail of computer science, and yet we just passed it.
When o3-mini takes a few extra seconds to think, or Claude fails to debug your pesky code, you feel annoyed, don’t you?
We now treat every breakthrough in large language models as just another ordinary Tuesday. You are the toughest crowd to impress.
So, I want to propose something very simple called the “Physical Turing Test.”