Foundational Models Enable Platform Moats
Marketing executive at Bolt.new on AI code editor adoption patterns
The real scarce asset in AI app builders is not the model; it is the path from prompt to shipped product. Bolt, Lovable, and v0 all ride the same upstream wave of model gains, so the durable edge shifts to whoever owns user acquisition, workflow, deployment, and the surrounding stack. That is why Vercel attaches v0 to its hosting, why Bolt leans on browser execution and sharing loops, and why Lovable is pushing into collaboration and visual editing.
-
These products are built on external model progress by design. Lovable explicitly orchestrates OpenAI and Anthropic models, and Bolt has been described as heavily reliant on Claude for core generation. When Anthropic and OpenAI ship stronger coding models, the whole category improves at once.
-
Because model quality is broadly accessible, winners tend to add a second moat around workflow. Vercel turns generated code into deployed apps on its own cloud. Bolt uses WebContainers, GitHub sync, and browser-native sharing. Lovable adds visual edits, a code mode, and multiplayer-style collaboration around the generated app.
-
This also explains the split in pricing and margins. Bolt charges on tokens because each generation maps closely to frontier-model cost, while tools like Cursor and Copilot can mix frontier models with smaller tuned models and sell flat-rate subscriptions. The business model follows how much proprietary infrastructure sits between the user and the model API.
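The margin arithmetic behind that split can be sketched directly. This is a minimal illustration, not real vendor pricing: all dollar figures, markups, and routing shares below are hypothetical assumptions.

```python
# Hypothetical unit economics: per-token pricing vs. a flat subscription.
# All numbers are illustrative assumptions, not actual vendor prices.

FRONTIER_COST_PER_1M = 15.00   # assumed frontier-model API cost, USD per 1M tokens
SMALL_MODEL_COST_PER_1M = 0.50 # assumed cost of a smaller tuned model

def token_priced_margin(markup: float) -> float:
    """Gross margin when the user pays model cost times a fixed markup.
    Margin is pinned by the markup and independent of usage volume."""
    revenue = FRONTIER_COST_PER_1M * markup
    return (revenue - FRONTIER_COST_PER_1M) / revenue

def subscription_margin(price: float, tokens_m: float, frontier_share: float) -> float:
    """Gross margin on a flat subscription when only `frontier_share`
    of tokens hit the frontier model and the rest route to the small one."""
    blended_cost_per_1m = (frontier_share * FRONTIER_COST_PER_1M
                           + (1 - frontier_share) * SMALL_MODEL_COST_PER_1M)
    cost = blended_cost_per_1m * tokens_m
    return (price - cost) / price

# Token pricing: a 1.3x markup yields the same margin at any usage level.
print(f"token-priced margin at 1.3x markup: {token_priced_margin(1.3):.0%}")

# Flat $20/mo with 2M tokens per user: margin swings entirely on routing.
print(f"subscription, 100% frontier traffic: {subscription_margin(20, 2, 1.0):.0%}")
print(f"subscription,  20% frontier traffic: {subscription_margin(20, 2, 0.2):.0%}")
```

Under these assumed numbers, a pure frontier-model subscription is underwater while a mostly-routed one is comfortably profitable, which is why the ability to substitute smaller tuned models is itself a margin moat.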
The next phase pushes these tools away from being thin wrappers around lab models and toward full application platforms. More value will accrue to the companies that bundle model access with deployment, collaboration, payments, analytics, auth, and reusable project graphs. In that world, foundation model providers power the engine, but distribution and product surface decide who captures the market.