Workflow Moats Power AI Apps
Jon Noronha, co-founder of Gamma, on building AI-powered slides
This reveals how thin the first generation of AI application moats really were. Gamma did not win early by inventing a proprietary model; it won by wrapping a general-purpose OpenAI endpoint in a much better workflow for making decks, then quickly layering on routing, prompt chains, and product-specific interactions as usage scaled. That is the standard pattern for many breakout AI apps: start with a model API, then build the orchestration and UX that turn raw model output into a usable product.
-
Gamma had already spent years building the card-based editor before the AI relaunch in March 2023. That meant the model only had to generate a first draft into an existing responsive format, while users could keep editing manually. The underlying product depth mattered as much as the OpenAI call.
-
The company says the AI launch turned a normal Product Hunt bump into sustained viral adoption, with signups jumping from hundreds per day to around 10,000 per day. The API was the spark, but onboarding and activation drove the real business effect.
-
By July 2023, Gamma had already moved beyond a single model call: it was classifying user intent, routing requests to different prompts, mixing text and image generation, building internal prompt-testing tools, and shifting more hosting to Azure. That is the move from a demo layer to a production AI system.
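The classify-then-route step can be sketched in a few lines. This is a toy illustration of the general pattern, not Gamma's implementation: the intent names, keyword rules, and prompt templates below are all hypothetical, and a production system would typically use a model call for classification rather than keywords.

```python
# Sketch of intent classification + prompt routing.
# All intents, rules, and templates here are hypothetical.

PROMPTS = {
    "generate_deck": "Create a slide deck about: {request}",
    "edit_slide": "Revise the current slide as follows: {request}",
    "generate_image": "Produce an illustrative image for: {request}",
}

def classify_intent(request: str) -> str:
    """Toy keyword classifier; real systems would classify with a model."""
    text = request.lower()
    if any(w in text for w in ("image", "picture", "illustration")):
        return "generate_image"
    if any(w in text for w in ("rewrite", "edit", "shorten", "fix")):
        return "edit_slide"
    return "generate_deck"

def route(request: str) -> str:
    """Pick the prompt template for the classified intent and fill it in."""
    intent = classify_intent(request)
    return PROMPTS[intent].format(request=request)

print(route("Make a deck on Q3 results"))
```

The point of the pattern is that each intent gets its own prompt (and potentially its own model), which is what turns one generic endpoint into several product-specific behaviors.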
Going forward, the winners in AI applications are likely to look less like model labs and more like workflow companies. As model access gets cheaper and more interchangeable, advantage shifts to whoever has the best editor, the best evaluation loop, the most product-specific prompt chains, and the clearest path from instant generation to a finished artifact people actually keep using.