Developer-Led Self-Serve ML Tooling
Oscar Beijbom, co-founder and CTO of Nyckel, on the opportunities in the AI/ML tooling market
This points to a shift from labor marketplaces to self-serve ML products. When a developer or product manager labels 50 to 100 examples themselves, they skip the slow parts of classic annotation: writing instructions for outside workers, checking whether those workers understood the task, and paying for repeated review cycles. That works when the task is narrow and the person closest to the product already knows the edge cases, like fake profile photos, wilted plants, or nuanced content moderation.
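To give a sense of how little scaffolding that workflow needs, here is a minimal sketch of a developer training a small moderation classifier directly on text they labeled themselves. This is generic scikit-learn, not Nyckel's implementation, and the examples and labels are invented:

```python
# Minimal sketch: a developer trains a tiny text classifier on examples
# they labeled themselves. Data and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# In practice this would be the 50-100 examples labeled in the UI.
texts = [
    "Buy followers now, limited offer!!!",
    "Thanks for the detailed answer, very helpful.",
    "Click this link to claim your prize",
    "Has anyone tried repotting a fiddle-leaf fig?",
]
labels = ["spam", "ok", "spam", "ok"]

# A TF-IDF + logistic regression pipeline is plenty at this scale.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Free crypto giveaway, act fast"]))  # e.g. ['spam']
```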
-
Nyckel is built around that workflow. Customers bring their own examples, label them in the UI (about 20 minutes for roughly 100 samples), see cross-validated results on their own data in seconds, then deploy immediately. The product is designed for a tech lead or PM, not a dedicated ML team.
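The "results in seconds" part is plausible because cross-validating a classifier on ~100 samples is genuinely cheap. A generic sketch of that pattern, using plain scikit-learn on synthetic data rather than Nyckel's internals:

```python
# Generic sketch: 5-fold cross-validation on ~100 labeled samples.
# Plain scikit-learn on synthetic data, not Nyckel's internals.
import time
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=100, n_features=20, random_state=0)

start = time.perf_counter()
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
elapsed = time.perf_counter() - start

# At this scale the whole loop finishes in well under a second on a laptop.
print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f} ({elapsed:.3f}s)")
```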
-
The contrast with Scale-style labeling is concrete. Scale's documentation emphasizes detailed task instructions for external labelers, because the annotator does not already know the product context. That overhead makes sense for huge, open-ended datasets like autonomous driving, but much less for small internal classification jobs.
-
Comparable tools are converging on the same pattern. Roboflow packages annotation, dataset versioning, and model-assisted labeling in one interface, which reduces handoffs between data collection, labeling, and training. The market is moving toward one product where the domain expert can do the first pass directly.
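The core loop behind model-assisted labeling is simple enough to sketch. This is the general pattern, not Roboflow's API: train on the seed labels, pre-label the unlabeled pool, and route only low-confidence items to a human. The 0.9 confidence threshold is an assumed parameter:

```python
# Generic model-assisted labeling loop (a pattern sketch, not Roboflow's API).
# Train on seed labels, pre-label the rest, send uncertain items to a human.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
seed_X, seed_y = X[:80], y[:80]          # the expert's initial labels
pool_X = X[80:]                          # everything still unlabeled

model = LogisticRegression(max_iter=1000).fit(seed_X, seed_y)

proba = model.predict_proba(pool_X)
confidence = proba.max(axis=1)           # confidence of the top prediction
auto_labeled = confidence >= 0.9         # accept confident pre-labels as-is

print(f"auto-labeled: {auto_labeled.sum()} of {len(pool_X)}")
print(f"needs human review: {(~auto_labeled).sum()}")
```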
The next leg of the market favors tools that turn subject-matter experts into occasional model builders. The winners will make labeling feel like tagging examples inside a product workflow, then automatically handle training, evaluation, versioning, and deployment behind the scenes. That pushes standalone annotation labor further toward large-scale edge cases and away from everyday enterprise AI use cases.
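One way to picture that "labeling feels like tagging" pattern is a hook that retrains silently as labels accumulate. This is a hypothetical sketch of the idea, not any vendor's API; the function name and retrain threshold are invented:

```python
# Hypothetical sketch of labeling-as-tagging: every tag is appended to the
# dataset, and the model quietly retrains once enough new labels accumulate.
# Function name and retrain threshold are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

RETRAIN_EVERY = 25        # assumed batch size before a retrain triggers
texts, labels = [], []    # the growing in-product dataset
model = None              # current deployed version

def tag_example(text: str, label: str):
    """Called whenever a domain expert tags an example in the product UI."""
    global model
    texts.append(text)
    labels.append(label)
    # Retrain behind the scenes; evaluation, versioning, and deployment
    # would hang off this same event in a real system.
    if len(texts) % RETRAIN_EVERY == 0 and len(set(labels)) >= 2:
        model = make_pipeline(TfidfVectorizer(), LogisticRegression())
        model.fit(texts, labels)
        print(f"retrained v{len(texts) // RETRAIN_EVERY} on {len(texts)} examples")
```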