Berkeley's ViTacFormer: Expert-Level Robot Dexterity
UC Berkeley's ViTacFormer combines AI vision and touch sensing to give robots human-like dexterity, enabling a humanoid to assemble a burger over 2.5 minutes of continuous autonomous control.
"RoboPub" Publication: 20% Discount Offer Link.
Recently, a video of a "humanoid robot making burgers" has gone viral online!
This humanoid robot, equipped with active vision, high-precision tactile sensing, and high-dexterity hands, achieved 2.5 minutes of continuous autonomous control for the first time, starting from raw ingredients and assembling a complete burger step by step before placing it on your plate.
It truly lets robots "see clearly," "touch precisely," and "move skillfully." Future kitchens might really not need humans anymore!
Dexterous manipulation is a key capability for robots to achieve human-like interaction, especially in multi-stage tasks involving delicate contact, which place extremely high demands on control precision and response timing. Although vision-driven methods have advanced rapidly in recent years, vision alone often fails under occlusion, lighting changes, or complex contact conditions.
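
To make the motivation concrete, here is a minimal, hypothetical sketch of how visual and tactile features could be fused with cross-attention before predicting an action. The module name, token counts, and the 22-dimensional action vector are illustrative assumptions for this sketch, not the actual ViTacFormer implementation.

```python
import torch
import torch.nn as nn

class VisionTactileFusion(nn.Module):
    """Toy cross-attention block: visual tokens attend to tactile tokens.

    Purely illustrative; not the ViTacFormer architecture.
    """
    def __init__(self, dim=256, num_heads=4, action_dim=22):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.action_head = nn.Linear(dim, action_dim)  # e.g. a 22-DoF hand/arm command (assumed)

    def forward(self, vis_tokens, tac_tokens):
        # vis_tokens: (B, Nv, dim) visual features; tac_tokens: (B, Nt, dim) tactile features
        fused, _ = self.cross_attn(query=vis_tokens, key=tac_tokens, value=tac_tokens)
        fused = self.norm(vis_tokens + fused)       # residual connection + layer norm
        return self.action_head(fused.mean(dim=1))  # pool tokens -> action vector

# Example with random features standing in for encoder outputs
vis = torch.randn(2, 196, 256)   # e.g. ViT patch tokens from the camera
tac = torch.randn(2, 32, 256)    # e.g. tokens from fingertip tactile arrays
actions = VisionTactileFusion()(vis, tac)
print(actions.shape)  # torch.Size([2, 22])
```

The point of the sketch is the design choice: when occlusion or lighting degrades the visual stream, letting the policy also attend to contact signals gives it a second, complementary source of information about what the fingers are actually touching.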



