Handshake acqui-hires Cleanlab for data quality
This move shows Handshake wants to own the hardest part of AI data work, not just the labor supply. Handshake already had access to a large pool of students, alumni, PhDs, and professionals through its university network and Handshake AI. Adding Cleanlab gives it an in-house team focused on catching bad labels, stress-testing datasets, and building evaluation methods, which pushes Handshake closer to being a quality-infrastructure vendor for frontier labs, not just a marketplace for experts.
-
In practice, Cleanlab brings people who work on finding mislabeled examples, weak spots in datasets, and model evaluation. Handshake said the team would deepen capabilities in evaluations, AI safety, RL environments, and frontier data specifications, which are the workflows labs use to decide whether human data is actually improving a model.
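To make the mislabeled-example workflow concrete, here is a minimal sketch of the general idea behind automated label auditing (this is illustrative only, not Cleanlab's actual product code): flag examples where the model assigns low probability to the given label, a signal often called self-confidence.

```python
# Illustrative sketch, not Cleanlab's product code: flag likely mislabeled
# examples by "self-confidence" -- the model's predicted probability for
# each example's assigned label. Low self-confidence marks the label as
# suspicious and worth human review.

def find_label_issues(labels, pred_probs, threshold=0.5):
    """Return indices of examples whose given label the model doubts,
    ranked from least to most confident."""
    flagged = [
        (pred_probs[i][labels[i]], i)   # (self-confidence, index)
        for i in range(len(labels))
        if pred_probs[i][labels[i]] < threshold
    ]
    return [i for _, i in sorted(flagged)]

# Toy run: three examples, two classes. Example 1's label disagrees with
# the model's confident prediction, so it is flagged for review.
labels = [0, 1, 1]
pred_probs = [
    [0.9, 0.1],   # confidently class 0, labeled 0 -> fine
    [0.8, 0.2],   # confidently class 0, labeled 1 -> suspicious
    [0.4, 0.6],   # weakly class 1, labeled 1 -> fine
]
print(find_label_issues(labels, pred_probs))  # [1]
```

In production systems this signal is combined with calibration, per-class thresholds, and cross-validated predictions, but the core intuition is the same: the model's own probabilities point reviewers at the batches most likely to contain bad labels.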
-
That matters because the market has shifted from cheap, high-volume annotation toward smaller pools of expensive experts in fields like math, law, medicine, and science. In that world, one bad batch of labels is costly, so vendors compete on trust and accuracy as much as on how many contractors they can recruit.
-
It also separates Handshake from peers. Mercor is built around sourcing and vetting experts at scale. Prolific emphasizes participant profiling, self-serve research, and rapid study execution. Handshake can now combine expert supply with a deeper internal research layer around data quality, which is closer to the role Scale and other infrastructure-heavy vendors play.
-
The next step is a tighter bundle in which Handshake sells experts, workflow design, and quality assurance together. If that works, the company can turn its AI unit from a fast-growing contractor marketplace into a more durable research and evaluation platform, with deeper frontier-lab relationships and more insulation from swings in campus recruiting.