Shared foundation models with adapters
Oscar Beijbom, co-founder and CTO of Nyckel, on the opportunities in the AI/ML tooling market
This is a unit economics point disguised as a model architecture point. Nyckel is saying the winning AI product for everyday classification work is not the one with the smartest single model; it is the one that can share the expensive part of inference across many customers, then add tiny customer-specific layers on top. That keeps latency low, makes self-serve deployment practical, and avoids turning every new account into a dedicated GPU bill.
-
Keeping one large model hot for everyone is manageable because the fixed compute cost is spread across the whole customer base. Fine tuning a separate large model per customer breaks that math, because each customer now needs its own loaded model and reserved memory.
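A back-of-the-envelope sketch of that math. All figures here are illustrative assumptions, not Nyckel's actual costs:

```python
# Hypothetical unit-economics sketch: the GPU cost and customer count are
# made-up assumptions, not real Nyckel figures.
GPU_COST_PER_MONTH = 2_000.0   # assumed cost to keep one large model hot
CUSTOMERS = 500

# Shared model: one GPU bill amortized across the whole customer base.
shared_cost_per_customer = GPU_COST_PER_MONTH / CUSTOMERS

# Dedicated fine-tunes: each customer needs its own loaded model.
dedicated_cost_per_customer = GPU_COST_PER_MONTH

print(f"shared:    ${shared_cost_per_customer:,.2f}/customer/month")
print(f"dedicated: ${dedicated_cost_per_customer:,.2f}/customer/month")
```

With these (assumed) numbers, sharing turns a $2,000/month serving cost into $4/month per customer, which is the software-margin math the paragraph is pointing at.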
-
This is why Nyckel centers the product on customer data, not on exposing model choices. Users upload 10 to 100 labeled examples, see cross-validated results on their own data in seconds, and deploy a custom classifier without running a full MLOps stack or managing dedicated model serving.
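A minimal sketch of that shared-model-plus-adapter workflow. The embeddings and the nearest-centroid head below are stand-ins (the piece doesn't specify Nyckel's internals): the point is that once one shared model produces embeddings, the per-customer part is cheap to train and cross-validate in seconds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the shared foundation model: in the real system one hot model
# embeds every customer's inputs; here, random 64-d vectors with a per-class offset.
def shared_embed(n, label):
    return rng.normal(loc=label * 2.0, scale=1.0, size=(n, 64))

# One customer's 40 labeled examples (within the 10-to-100 range in the text).
X = np.vstack([shared_embed(20, 0), shared_embed(20, 1)])
y = np.array([0] * 20 + [1] * 20)

# Per-customer "adapter": a tiny nearest-centroid head over cached embeddings.
# Training is just averaging, so it costs milliseconds, not a GPU.
def fit_head(X, y):
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# 5-fold cross-validation on the customer's own data, as the text describes.
folds = np.arange(len(y)) % 5
accs = []
for k in range(5):
    train, test = folds != k, folds == k
    centroids = fit_head(X[train], y[train])
    accs.append((predict(centroids, X[test]) == y[test]).mean())
print(f"cross-validated accuracy: {np.mean(accs):.2f}")
```

The design choice this illustrates: the expensive forward pass happens once in shared infrastructure, while each customer only adds a few kilobytes of head parameters, so a new account costs storage, not a GPU.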
-
The broader market has split along this boundary. API model providers win when a shared general model is good enough, while tools like Nyckel and Dataiku win by wrapping that raw model power in a cheaper, safer workflow for specific business tasks and non-expert users.
Over time, this pushes AI tooling toward hybrid systems: shared foundation models for the heavy lifting, lightweight adapters or workflows for customization, and more routing across multiple models by cost and task. The companies that win will be the ones that hide this complexity while keeping per-customer serving costs close to software margins.
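A toy illustration of routing by cost and task. The model names, prices, and quality scores below are invented for the sketch; the logic is simply "cheapest model that clears the task's quality bar":

```python
# Hypothetical model catalog: names, prices, and quality scores are
# illustrative assumptions, not real provider data.
MODELS = [
    {"name": "small-shared",  "cost_per_1k": 0.1, "quality": 0.78},
    {"name": "medium-shared", "cost_per_1k": 0.5, "quality": 0.88},
    {"name": "large-frontier", "cost_per_1k": 5.0, "quality": 0.95},
]

def route(required_quality):
    """Pick the cheapest model meeting the quality bar; fall back to the best."""
    ok = [m for m in MODELS if m["quality"] >= required_quality]
    chosen = min(ok, key=lambda m: m["cost_per_1k"]) if ok else MODELS[-1]
    return chosen["name"]

print(route(0.75))  # easy classification task -> cheapest shared model
print(route(0.93))  # hard task -> frontier model
```

Hiding this choice from the user is the complexity the paragraph says winning products will absorb: the customer states a task, and the router keeps the serving bill down.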