Product Owners Define Model Behavior
Oscar Beijbom, co-founder and CTO of Nyckel, on the opportunities in the AI/ML tooling market
This thesis shifts the center of gravity in AI from model building to workflow ownership. In practice, the person who knows what counts as spam, fraud, abuse, or a valid invoice is the one best positioned to define labels, review mistakes, and decide whether predictions are good enough to ship. That is why Nyckel hides model choice and training behind a simple data-upload and API flow, while larger platforms like Dataiku package the same idea for bigger enterprises.
-
Nyckel is built around the idea that customers bring their own examples, label roughly 100 data points, see cross-validated results on their own data in seconds, and deploy immediately. That makes model training look less like research and more like configuring a product feature.
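The "configure a feature" framing can be made concrete with a short sketch of that flow: post a handful of labeled examples, then call an invoke endpoint to classify new input. This is a minimal illustration, not Nyckel's actual client; the endpoint paths, field names, and `function_id` here are assumptions for illustration only.

```python
import requests

# Hypothetical base URL and payload shapes, for illustration only.
API_BASE = "https://www.nyckel.com/v1"


def make_sample(text: str, label: str) -> dict:
    """Build one labeled example in the shape a text-classification API might expect."""
    return {"data": text, "annotation": {"labelName": label}}


def upload_samples(session: requests.Session, function_id: str, samples: list[dict]) -> None:
    """Upload labeled examples; with ~100 of these, training happens behind the scenes."""
    for sample in samples:
        session.post(f"{API_BASE}/functions/{function_id}/samples", json=sample)


def classify(session: requests.Session, function_id: str, text: str) -> dict:
    """Invoke the trained function on new input and return the prediction."""
    resp = session.post(f"{API_BASE}/functions/{function_id}/invoke", json={"data": text})
    return resp.json()
```

The point of the sketch is what is absent: no model selection, no hyperparameters, no training loop. The domain expert's only job is choosing labels and examples, which is exactly the shift the section describes.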
-
This is also why the old labeling-heavy workflows are getting squeezed. Scale grew by serving massive, data-hungry workloads like autonomous driving, but the market has moved toward pre-trained models, RLHF, and lighter-weight tuning, where fewer examples and tighter domain context matter more.
-
The category is splitting in two. Tools like Dataiku are moving upmarket by giving business teams a GUI to build AI apps inside enterprise guardrails, while newer infrastructure players like Outerport show that the remaining complexity is shifting toward deployment, monitoring, and cost control behind the scenes.
From here, more of the ML stack turns into an internal utility rather than a craft practiced by a dedicated specialist on every project. The winning products will make domain experts fast, while quietly handling evaluation, deployment, and infrastructure underneath. That pushes the market toward bundled platforms and API-like experiences, not stand-alone point tools.