Annotation Firms Must Move Up the Stack
Oscar Beijbom, co-founder and CTO of Nyckel, on the opportunities in the AI/ML tooling market
Pure annotation is becoming a weaker standalone business because the value is shifting from cheap labor that draws boxes or tags examples to software that helps a product team turn a small set of domain-specific examples into a working model. In practice, that means annotation vendors need to own more of the workflow, such as data selection, model training, evaluation, and deployment, or shift toward higher-value human work where expertise matters more than volume.
The original annotation boom was tied to autonomous driving, where companies generated huge volumes of edge-case image and sensor data and paid per task to label it. As models improved and AV spending cooled, that core demand engine became less dependable.
The replacement product is much more self-serve. Instead of writing long instructions for offshore workers, a product manager or developer uploads 50 to 100 examples, labels them directly in the UI, checks predictions on their own data, and ships a custom classifier in days.
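To make the "small set of examples to working classifier" loop concrete, here is a minimal stdlib-only sketch: a hand-rolled nearest-centroid text classifier. The example texts, labels, and function names are all invented for illustration; a real product would call a hosted API or an off-the-shelf library rather than this toy.

```python
# Toy few-example text classifier: one bag-of-words centroid per label,
# predictions by cosine similarity. Illustrative only.
from collections import Counter, defaultdict
import math

def tokens(text):
    return text.lower().split()

def train(examples):
    """Build one bag-of-words centroid per label from (text, label) pairs."""
    centroids = defaultdict(Counter)
    for text, label in examples:
        centroids[label].update(tokens(text))
    return dict(centroids)

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def predict(centroids, text):
    vec = Counter(tokens(text))
    return max(centroids, key=lambda label: cosine(vec, centroids[label]))

# A handful of labeled examples, standing in for the 50 to 100
# a team would upload and label in the UI.
examples = [
    ("my card was charged twice", "billing"),
    ("need a refund for last month", "billing"),
    ("invoice total looks wrong", "billing"),
    ("app crashes when I open settings", "bug"),
    ("upload button does nothing", "bug"),
    ("page freezes after login", "bug"),
]

model = train(examples)
# Check predictions on your own data before shipping.
print(predict(model, "I was charged twice this week"))  # prints "billing"
```

The point is the shape of the loop, not the algorithm: label a few examples, fit, spot-check predictions on held-out domain text, ship, and re-label the cases the model gets wrong.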
The companies that adapted fastest have moved up the stack. Scale expanded from workforce-led labeling into data engines and APIs, while newer players increasingly sell expert evaluation, safety testing, and model quality tooling rather than only raw annotation throughput.
The category is heading toward fewer labor marketplaces and more integrated AI workbenches. The winners will combine lightweight labeling, model testing, and deployment in one loop, or specialize in expert human judgment for evaluation and safety where foundation models still cannot replace people.