Gamma's Shallow AI Stack
Jon Noronha, co-founder of Gamma, on building AI-powered slides
Gamma’s edge came less from proprietary ML infrastructure and more from wrapping basic model calls in a product that could actually finish the job of making a deck. Early on, the system was mostly OpenAI API calls plus prompt routing, few-shot examples, and an internal prompt studio. The harder work was connecting text generation, image generation, layout changes, and manual editing into one flow that got users from blank page to usable presentation fast.
The stack was shallow because the product was early and the tooling market was immature. Gamma launched before most LLMOps tooling was ready, so the team built around prompts first, then added evaluation and monitoring later as deck volume scaled into the millions per month.
Gamma did not rely much on reinforcement learning or fine-tuning at first. It mostly used prompt engineering, routing a user request into different prompt chains depending on whether the user wanted a rewrite, a new card, a visual transformation, or image help.
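A minimal sketch of what intent-based prompt routing like this can look like. The intent names, templates, and keyword classifier below are illustrative assumptions, not Gamma's actual internals; a production system would typically classify intent with a model call rather than keywords.

```python
# Hypothetical prompt templates keyed by user intent.
# Names are illustrative, not Gamma's actual chains.
PROMPT_CHAINS = {
    "rewrite": "Rewrite the following card text, keeping its meaning:\n{input}",
    "new_card": "Draft a new presentation card about:\n{input}",
    "visual": "Suggest a layout transformation for this card:\n{input}",
    "image": "Write an image-generation prompt to illustrate:\n{input}",
}

def classify_intent(request: str) -> str:
    """Toy keyword classifier; a real router would use a cheap model call."""
    text = request.lower()
    if "rewrite" in text or "rephrase" in text:
        return "rewrite"
    if "image" in text or "picture" in text:
        return "image"
    if "layout" in text or "visual" in text:
        return "visual"
    return "new_card"  # default: generate a new card

def route(request: str) -> str:
    """Pick a chain by intent and fill in the user's request."""
    intent = classify_intent(request)
    return PROMPT_CHAINS[intent].format(input=request)
```

The point of the pattern is that one entry point fans out to specialized prompts, so each chain can be tuned and evaluated independently.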
That lightweight AI layer fit Gamma’s broader strategy. The company won by pairing AI generation with a card-based editor that reflows across devices and supports manual cleanup, unlike many early AI slide tools that generated a draft but lacked deep editing. That helped Gamma grow from about $30.5M ARR at the end of 2024 to about $102M by October 2025.
The next phase is deeper model orchestration and evaluation, not a sudden shift to giant in-house model training. As foundation models move into slides directly, Gamma’s path is to keep turning general models into a purpose-built workflow for presentations, docs, and microsites, while adding more testing, fine-tuning, and cost-efficient model mixing behind the scenes.
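Cost-efficient model mixing can be sketched as a dispatch rule that reserves the expensive model for heavy generation and sends small edits to a cheaper one. The model names, task labels, and length threshold here are assumptions for illustration, not Gamma's configuration.

```python
# Illustrative cost-aware dispatch: cheap model for short, simple edits,
# strong model for full-deck generation. All names are hypothetical.
CHEAP_MODEL = "small-fast-model"
STRONG_MODEL = "large-capable-model"

SIMPLE_TASKS = {"rewrite", "image_prompt"}
MAX_CHEAP_PROMPT_CHARS = 500  # assumed threshold

def pick_model(task: str, prompt: str) -> str:
    """Return the model to call for a given task and prompt size."""
    if task in SIMPLE_TASKS and len(prompt) < MAX_CHEAP_PROMPT_CHARS:
        return CHEAP_MODEL
    return STRONG_MODEL
```

The trade-off is latency and cost against quality: misrouting a hard task to the cheap model shows up in evaluation metrics, which is one reason orchestration and testing tend to mature together.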