API Layer as Data Layer
Oscar Beijbom, co-founder and CTO of Nyckel, on the opportunities in the AI/ML tooling market
This is a bet that custom ML becomes a product workflow, not an engineering project. In practice, Nyckel is trying to make training look like filling out a table. A team uploads examples of inputs and desired outputs, the system tests many models behind the scenes, shows whether predictions match the team’s own data, then serves the result as an API endpoint. That collapses labeling, training, evaluation, deployment, and monitoring into one surface.
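The collapsed workflow can be sketched in a few lines. This is a toy illustration of the "table in, endpoint out" idea, not Nyckel's actual API: the class name, the nearest-neighbor-by-token-overlap model standing in for behind-the-scenes model search, and all identifiers are assumptions made up for the example.

```python
# Toy sketch of the workflow described above: a product owner supplies
# (input, output) rows, and gets back a callable prediction "endpoint".
# The nearest-neighbor model is a stand-in for automated model selection.
from collections import Counter

def tokenize(text):
    return Counter(text.lower().split())

def similarity(a, b):
    # Multiset token overlap between two texts, in [0, 1].
    shared = sum((a & b).values())
    total = sum((a | b).values())
    return shared / total if total else 0.0

class FunctionEndpoint:
    """Train from labeled (input, output) pairs; serve predictions."""
    def __init__(self, examples):
        # examples: list of (text, label) pairs -- the "table" being filled out
        self.examples = [(tokenize(text), label) for text, label in examples]

    def predict(self, text):
        query = tokenize(text)
        # Return the label of the most similar training example.
        _, label = max(self.examples, key=lambda ex: similarity(query, ex[0]))
        return label

# Labeling a handful of examples, then calling the resulting endpoint:
fn = FunctionEndpoint([
    ("refund my order please", "support"),
    ("cancel my subscription", "support"),
    ("love the new feature", "feedback"),
    ("great update, works well", "feedback"),
])
print(fn.predict("please refund me"))  # → support
```

A real service would replace the toy model with automated training over many candidate models and expose `predict` over HTTP, but the user-facing surface stays this small: rows in, predictions out.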
-
The contrast is with tools like SageMaker and Vertex AI, where users still step through separate jobs for data prep, training pipelines, and deployment. Nyckel's point is that most product teams do not want knobs for model selection; they want a working yes-or-no classifier on their own data as fast as possible.
-
This also shifts who does the work. Instead of outsourced labelers or an internal ML engineer writing instructions, the product owner or domain expert labels 10 to 100 examples directly, checks cross-validated predictions on their own samples, and iterates until the output is good enough for production.
-
The broader market has been moving in the same direction, but from higher in the stack. Scale expanded from labeling into model tuning as LLM demand surged, while Dataiku bundled ingest, prep, AutoML, and visualization into a GUI for non-technical teams. Nyckel pushes that simplification even further, down to a bare input/output API.
The end state is that many narrow classification and extraction jobs will be bought like payments or messaging infrastructure. As models need fewer examples and more of the stack gets standardized, the winners will be the companies that turn messy customer data into reliable predictions with the fewest steps, the fastest feedback, and the least need for ML specialists.