Pretrained Models Collapse MLOps Stack
Oscar Beijbom, co-founder and CTO of Nyckel, on the opportunities in the AI/ML tooling market
The big shift is that better pretrained models move machine learning from an engineering project to a product feature. When a team can get a working classifier from 10–100 examples instead of 10,000, they stop needing separate vendors for labeling, experiment tracking, model registries, and deployment plumbing. The center of gravity moves to a simple workflow where a product manager or developer uploads examples, checks predictions on their own data, and ships.
In the old workflow, expert teams stitched together tools like Scale for labeling, Weights & Biases for experiment tracking, model registries, and deployment and monitoring infrastructure. Nyckel’s view is that stronger transfer learning collapses that stack into one API where the user mostly interacts with inputs and outputs, not models.
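The collapsed workflow can be sketched as a few-shot classifier built on embeddings: upload a handful of labeled examples, then predict by similarity. This is a minimal illustration, not Nyckel's actual API; a real service would embed inputs with a pretrained model, and the `embed` function below is a toy character-bigram stand-in for one.

```python
# Hypothetical sketch of classification-as-API: a handful of labeled
# examples in, predictions out, with no training pipeline in between.
from collections import Counter
import math

def embed(text):
    # Toy embedding: normalized character-bigram counts. A real system
    # would call a pretrained model here instead.
    counts = Counter(text[i:i + 2] for i in range(len(text) - 1))
    norm = math.sqrt(sum(v * v for v in counts.values())) or 1.0
    return {k: v / norm for k, v in counts.items()}

def cosine(a, b):
    return sum(a[k] * b.get(k, 0.0) for k in a)

class FewShotClassifier:
    def __init__(self):
        self.examples = []  # list of (embedding, label) pairs

    def add_examples(self, pairs):
        # "Upload examples": tens of labeled items, not thousands.
        for text, label in pairs:
            self.examples.append((embed(text), label))

    def predict(self, text):
        # Nearest neighbor over the uploaded examples.
        q = embed(text)
        return max(self.examples, key=lambda e: cosine(q, e[0]))[1]

clf = FewShotClassifier()
clf.add_examples([
    ("refund my order please", "support"),
    ("cancel my subscription", "support"),
    ("love the new feature", "praise"),
    ("great update, thanks", "praise"),
])
print(clf.predict("please refund me"))  # nearest neighbor -> "support"
```

The point of the sketch is the interface, not the algorithm: the user only ever touches inputs and labels, which is what lets one API absorb labeling, training, and deployment tooling.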
This does not mean all MLOps disappears; it means the growth shifts. Simple classification and extraction use cases get productized for non-experts, while heavier infrastructure remains for companies running large custom or self-hosted models, where deployment, observability, and GPU orchestration still matter.
The pressure is strongest on commodity data labeling. As foundation models improve, generic crowd labeling becomes less central, but human input does not go away. It moves upmarket toward expert evaluation, domain-specific fine-tuning, and high-judgment tasks in areas like legal, healthcare, and finance.
The market is heading toward a split structure. Most companies will buy machine learning the way they buy payments or messaging, through a simple API tied to their own data, while a smaller but important layer of infrastructure vendors serves teams building complex multi model systems and specialized custom models.