Fewer ML Engineers Needed
Oscar Beijbom, co-founder and CTO of Nyckel, on the opportunities in the AI/ML tooling market
This points to ML becoming an embedded product capability, not a specialist headcount function. As models need fewer labeled examples and more of the workflow gets wrapped into APIs, the bottleneck shifts from tuning models to defining the right inputs, labels, and review loop. In practice, that favors tools where a product manager, developer, or domain expert can upload examples, check outputs on their own data, and ship a feature without building a full MLOps stack.
-
Nyckel is built around that workflow. Customers bring their own text, images, or tabular data, label roughly 10 to 100 examples in the UI, see cross-validation results on those exact examples, and deploy quickly. That is a very different labor model from hiring ML engineers to stitch together annotation, training, deployment, and monitoring tools.
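To make the "label a handful of examples, cross-validate on those exact examples" loop concrete, here is a minimal sketch. This is not Nyckel's implementation; it is a leave-one-out evaluation over a tiny hand-labeled set, with a toy bag-of-words nearest-neighbor classifier standing in for the model, and all data hypothetical.

```python
def tokens(text):
    return set(text.lower().split())

def jaccard(a, b):
    """Overlap between two token sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def predict(example, train):
    """Label of the most similar training example."""
    return max(train, key=lambda t: jaccard(tokens(example), tokens(t[0])))[1]

# Hypothetical hand-labeled examples from a domain expert.
labeled = [
    ("win a free cruise click now", "spam"),
    ("limited time offer act now", "spam"),
    ("you won a free prize click here", "spam"),
    ("meeting moved to 3pm today", "ham"),
    ("can you review my pull request", "ham"),
    ("notes from the meeting today", "ham"),
]

# Leave-one-out cross-validation: each labeled example is scored by a model
# trained on the rest, so the expert sees which of their own examples miss.
results = []
for i, (text, label) in enumerate(labeled):
    train = labeled[:i] + labeled[i + 1:]
    results.append((text, label, predict(text, train)))

accuracy = sum(label == pred for _, label, pred in results) / len(results)
print(f"leave-one-out accuracy: {accuracy:.0%}")
```

The point of evaluating on the labeler's own examples, rather than a generic benchmark, is that every error is immediately interpretable by the domain expert who wrote the label.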
-
Scale started from the opposite end of the market. Its early growth came from massive, edge-case-heavy labeling workloads in autonomous vehicles and defense, where thousands of contractors and audit trails mattered. As foundation models reduce the amount of hand-labeled data needed, Scale has had to move up the stack into integrated tooling and model services.
-
The real substitution is not human judgment; it is infrastructure work. The content moderator or operations owner still decides what counts as spam, fraud, or unsafe content. What disappears is much of the custom plumbing that used to sit between that domain expert and a working model.
The market is heading toward a split. High-stakes, edge-case-dense workflows will keep supporting vendors built around human review and tightly managed pipelines, while the much larger pool of everyday classification and prediction tasks will be served through simple APIs and self-serve tools. That shift should keep compressing demand for in-house ML engineering outside the biggest and most specialized teams.