Data as the Abstraction Layer
Oscar Beijbom, co-founder and CTO of Nyckel, on the opportunities in the AI/ML tooling market
The key shift is that AI is being packaged less like developer infrastructure and more like a form that turns examples into software. In this model, the user does not pick architectures, tune training jobs, or wire up monitoring. They upload examples of inputs and desired outputs, review real predictions on their own data, and let the system handle model selection, deployment, retraining, and label prioritization behind the scenes.
-
Nyckel built its product around this workflow: customers upload their own text or image examples, annotate roughly 10 to 100 samples, see cross-validated results in seconds, and deploy without needing to understand which model is running underneath.
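The interface shape this implies can be sketched in a few lines. The following is a toy illustration of the "examples in, classifier out" contract, not Nyckel's actual API: the class name, methods, and the bag-of-words nearest-centroid model inside are all assumptions chosen to keep the sketch self-contained. The point is that the caller only supplies labeled examples and asks for predictions; everything else stays hidden.

```python
from collections import Counter, defaultdict
import math


class ExampleClassifier:
    """Hypothetical data-first interface: callers provide (input, label)
    pairs and request predictions. The model underneath (here, a simple
    bag-of-words nearest-centroid) is an implementation detail."""

    def __init__(self):
        # label -> summed word counts across that label's examples
        self._centroids = defaultdict(Counter)

    @staticmethod
    def _tokens(text: str) -> list[str]:
        return text.lower().replace(",", " ").split()

    def add_example(self, text: str, label: str) -> None:
        self._centroids[label].update(self._tokens(text))

    def invoke(self, text: str) -> str:
        words = Counter(self._tokens(text))

        def cosine(a: Counter, b: Counter) -> float:
            dot = sum(a[w] * b[w] for w in a)
            na = math.sqrt(sum(v * v for v in a.values()))
            nb = math.sqrt(sum(v * v for v in b.values()))
            return dot / (na * nb) if na and nb else 0.0

        # Predict the label whose centroid is closest to the input.
        return max(self._centroids, key=lambda lbl: cosine(words, self._centroids[lbl]))


clf = ExampleClassifier()
clf.add_example("great product, love it", "positive")
clf.add_example("terrible, want a refund", "negative")
clf.add_example("works well, very happy", "positive")
print(clf.invoke("love how well it works"))  # prints "positive"
```

A real system would swap the centroid model for whatever architecture cross-validates best on the uploaded examples, but the surface the user touches would stay this small.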
-
This is the opposite of classic MLOps, where expert teams buy separate tools for labeling, experiment tracking, model registry, deployment, and monitoring. The data-first abstraction collapses that toolchain into one interface, which is why it is aimed at product managers, tech leads, and developers at smaller companies.
-
The broader market has moved in this direction. Dataiku has grown by giving non-technical teams a GUI that bundles data prep, model building, and now generative AI app creation, while Scale expanded beyond labeling as foundation models changed what customers need from raw annotation vendors.
Going forward, the winners in AI tooling are likely to be the companies that make data the product surface and hide the rest of the stack. That pushes the market toward integrated platforms, where value comes from how quickly a business user can turn a small labeled dataset into a working feature, not from how many separate ML tools they can assemble.