Humanoids Win Through Data Flywheels
Sankaet Pathak, CEO of Foundation, on why humanoids win in robotics
This is the inflection point that turns humanoid robotics from factory automation projects into AI and data businesses. Before transformers scaled, robots mostly handled narrow jobs in carefully staged settings, where every box arrived in the same spot and people were kept out of the work cell. Once end-to-end action models became viable around 2022 and 2023, the bottleneck shifted from hand-written logic to collecting real-world task data from deployments.
In the older stack, teams split the problem into separate steps: vision to detect objects, rules to classify the scene, then code to choose motions. That works for repeatable jobs like palletizing, but it breaks when layouts change, objects are misplaced, or humans move through the scene.
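The brittleness of that modular stack is easiest to see in code. The sketch below is hypothetical, assuming illustrative function names (`detect_objects`, `classify_scene`, `plan_motion`) rather than any vendor's actual API; the point is that anything outside the scripted layouts dead-ends in a stop.

```python
# Hypothetical sketch of the older modular stack: vision -> rules -> motion code.
# All names and values here are illustrative, not a real robotics API.

def detect_objects(frame):
    # Vision step: return labeled detections the cell was engineered to expect.
    return [{"label": "box", "pose": (0.5, 0.1, 0.0)}]

def classify_scene(objects):
    # Rules step: map detections onto one of a few known layouts.
    return "pallet_ready" if any(o["label"] == "box" for o in objects) else "unknown"

def plan_motion(scene, objects):
    # Code step: pick a pre-programmed motion for the classified scene.
    if scene == "pallet_ready":
        return ("pick_and_place", objects[0]["pose"])
    # Any layout the rules don't recognize halts the work cell.
    return ("stop", None)

frame = object()  # stand-in for a camera frame
objects = detect_objects(frame)
action = plan_motion(classify_scene(objects), objects)
```

Each stage encodes assumptions about the next, so a misplaced object or a person in frame falls through to the "stop" branch rather than being handled.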
The newer stack learns a direct mapping from sensor inputs and instructions to actions. DeepMind’s RT-2 framed this as turning vision and language into robot actions, and Foundation describes the same shift as the moment robots could start operating in unstructured industrial environments instead of only pre-scripted cells.
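The interface change is the whole story: one learned function from (observation, instruction) to low-level actions, with no hand-written scene classifier in between. The toy below is a placeholder, not RT-2 or any real model; it only fakes the shape of the mapping, here an action chunk of seven joint commands for a 7-DoF arm.

```python
# Toy stand-in for an end-to-end policy: (image, instruction) -> actions.
# A real system runs a vision-language-action model here; this placeholder
# just produces a deterministic, correctly-shaped action chunk.

import random
import zlib

def policy(image, instruction):
    # Seed from the instruction so the toy output is reproducible.
    rng = random.Random(zlib.crc32(instruction.encode()))
    # One "action chunk": normalized joint-velocity commands for a 7-DoF arm.
    return [round(rng.uniform(-1.0, 1.0), 3) for _ in range(7)]

actions = policy(image=None, instruction="pick up the misplaced box")
```

Because the instruction is just another input, a misplaced box is a prompt to the model rather than an unhandled branch in the scene classifier.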
That change also explains why deployment data is now the core moat. Foundation, Figure, Tesla, and others are competing less on whether a robot can move its arms and more on who can get robots into real workflows, log failures, use teleoperation when needed, and feed those edge cases back into training fastest.
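The flywheel described above can be sketched as a loop: run the policy, fall back to teleoperation on failure, and queue those teleop episodes as training data for the next policy. Everything here is a hypothetical illustration, assuming made-up names (`run_episode`, `flywheel`) and a toy policy that only succeeds on tasks it has seen before.

```python
# Hypothetical sketch of a deployment data flywheel: autonomous attempts,
# teleop fallback on failure, and teleop episodes queued for retraining.
# All names are illustrative, not any company's actual pipeline.

def run_episode(policy, task):
    if policy(task):
        return {"task": task, "source": "autonomous"}
    # Failure: a human teleoperator completes the task, and the corrected
    # trajectory becomes a training example for the next policy.
    return {"task": task, "source": "teleop"}

def flywheel(policy, tasks):
    dataset = []
    for task in tasks:
        episode = run_episode(policy, task)
        if episode["source"] == "teleop":
            dataset.append(episode)  # edge cases feed the next training run
    return dataset

# Toy policy that only handles layouts it has already seen.
seen = {"palletize", "unload_truck"}
new_data = flywheel(lambda t: t in seen, ["palletize", "sort_bins", "unload_truck"])
```

The compounding effect is that each retraining run shrinks the set of tasks that need teleop, so more deployments buy more autonomy per robot-hour over time.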
From here, the winners are likely to be the companies that turn early industrial jobs into a compounding data flywheel. Humanoids that can slot into existing factories without 12-to-18-month retrofits get deployed sooner, gather edge-case data sooner, and improve their policies sooner, which is why narrow initial use cases can be the launchpad for much broader autonomy.