Moving Models to Production Faster

Cristóbal Valenzuela, CEO of Runway, on rethinking the primitives of video

Interview
The moment you figure out a way of moving models to production faster, you are able to deliver value faster, and sometimes that’s really hard for big companies to do.

This is why small applied-AI companies can outrun bigger labs even when the underlying research is comparable. In video, shipping a model is not just a matter of training weights: inference has to be fast enough, cheap enough, and stable enough that editors and filmmakers can actually use it inside a workflow. Runway built its research, deployment, and editing stack together, which let it turn model gains into product features and revenue faster than slower-moving incumbents.

  • For video products, production is the hard part. Beyond model quality, teams need encoding, streaming, latency tuning, dataset preparation, safety systems, and a UI that lets someone iterate on clips without long waits. That is why a strong research demo can still take years to become a useful product.
  • Runway’s edge came from owning the full path from model training to the editor. That let it keep improving inference speed and wrap models in tools like rotoscoping, background replacement, camera control, and scene-consistent generation, instead of stopping at a paper or an API.
  • The payoff shows up in market speed. Runway launched Gen-3 Alpha in June 2024, added an API in September 2024, partnered with Lionsgate that same month, and grew from about $25M ARR in 2023 to roughly $84M in 2024 as product releases converted quickly into adoption.

Going forward, the winners in AI video are likely to be the companies that shorten the loop from research to usable workflow. As models improve, the advantage shifts toward teams that can package new capabilities into fast, low-cost, collaborative tools before larger platforms can push the same capabilities through slower release cycles.