Data-level APIs for ML
Oscar Beijbom, co-founder and CTO of Nyckel, on the opportunities in the AI/ML tooling market
This is a product bet that machine learning will be bought like payments or messaging, as a simple app feature instead of an engineering project. In practice, the customer is not choosing models, tuning thresholds, or wiring up training jobs. They upload examples of their own text or images, mark the right answers, and call an endpoint that returns predictions, while training, evaluation, deployment, and monitoring are hidden behind the product.
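The upload-label-invoke loop above can be sketched as a tiny client. This is a hypothetical illustration of what a data-level API surface looks like, not Nyckel's actual API: the endpoint paths, `create`-style function IDs, and payload field names (`data`, `annotation`) are all assumptions chosen for clarity.

```python
import json

class DataLevelClient:
    """Builds the requests a hypothetical data-level ML API would accept.

    The caller supplies only business data (text plus the right answer);
    training, evaluation, deployment, and monitoring stay behind the endpoint.
    All names and payload shapes here are illustrative assumptions.
    """

    def __init__(self, base_url: str):
        self.base_url = base_url.rstrip("/")

    def add_sample(self, function_id: str, text: str, label: str) -> dict:
        # One labeled example: the only "ML work" the customer does.
        return {
            "method": "POST",
            "url": f"{self.base_url}/functions/{function_id}/samples",
            "body": json.dumps({"data": text, "annotation": label}),
        }

    def invoke(self, function_id: str, text: str) -> dict:
        # Prediction request: note there is no model name, threshold,
        # or training-job handle anywhere in the payload.
        return {
            "method": "POST",
            "url": f"{self.base_url}/functions/{function_id}/invoke",
            "body": json.dumps({"data": text}),
        }

client = DataLevelClient("https://api.example.com/v1")
sample_req = client.add_sample("support-triage", "Where is my refund?", "billing")
predict_req = client.invoke("support-triage", "My card was charged twice")
```

The point of the sketch is what is absent: both requests are expressed entirely in the customer's vocabulary (a text and, for training, a label), with no model-stack concepts leaking into the interface.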
Nyckel is pushing the abstraction one step above AutoML. AutoML still hands an expert a model to inspect and manage; Nyckel instead shows whether the system works on the customer's own examples, often after roughly 100 labeled samples, with a retraining loop measured in seconds.
That contrasts with platforms like Vertex AI and SageMaker, where teams still prepare datasets, launch training jobs, deploy endpoints, and often run separate labeling workflows. Those products simplify ML infrastructure, but the user still interacts with the model stack rather than just the business data.
The commercial implication is a split market. Enterprise platforms like Dataiku package many ML steps into a GUI for analysts at large companies, while Scale grew by selling data work and model tuning to expert teams. Nyckel is aimed further down-market, at developers and product owners who want one narrow task working fast.
The next leg of the market is toward data-native AI products that collapse labeling, training, evaluation, and inference into one workflow. As pre-trained models keep reducing the amount of example data needed, more ML demand moves from specialist teams to application builders, and the winners will be the companies that make custom prediction feel like adding any other API to a product.