Runway Turns VFX Into Software

Cristóbal Valenzuela, CEO of Runway, on the state of generative AI in video

Interview
What used to take dozens of teams working around the clock for months or even years is now feasible, very fast, for a small creative team that can leverage new technologies like Runway's Green Screen.

This is the core wedge for AI video: it turns visual effects from a labor-scaling business into a software-scaling business. In practice, tools like Runway collapse repetitive frame-by-frame work (masking a person, swapping a background, carrying the same edit across a shot) so that a tiny team can finish work that once needed a large VFX crew, expensive seats in legacy software, and long handoff chains.

  • Runway started by automating painful post-production jobs inside a browser editor. Its tools cut work like rotoscoping from more than six hours per shot to about 10 minutes, which is why the six-person VFX team on Everything Everywhere All at Once became such a powerful proof point.
  • The business model mirrors the workflow shift. Legacy tools like Nuke sell costly annual seats for expert artists, while Runway sells low-cost subscriptions, from $12 to $76 per month, plus generation usage, bringing per-shot economics down from about $350 to about $10.
  • What matters is not just raw generation, but control inside a filmmaking workflow. Runway pairs its own video models with editing tools, web collaboration, and now studio data access through its September 18, 2024 Lionsgate partnership, which trains a custom model on Lionsgate's 20,000-title library.
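The time and cost figures above can be turned into a back-of-the-envelope comparison. This is an illustrative sketch using only the numbers cited in this piece, not a pricing model; the variable names are my own.

```python
# Per-shot VFX economics, using the figures cited above (illustrative only).

ROTO_HOURS_MANUAL = 6        # hours per shot, traditional rotoscoping
ROTO_MINUTES_RUNWAY = 10     # minutes per shot with Runway's tooling

COST_PER_SHOT_LEGACY = 350   # dollars: legacy seat plus artist time
COST_PER_SHOT_RUNWAY = 10    # dollars: subscription plus generation usage

speedup = (ROTO_HOURS_MANUAL * 60) / ROTO_MINUTES_RUNWAY
cost_ratio = COST_PER_SHOT_LEGACY / COST_PER_SHOT_RUNWAY

print(f"Time speedup:   {speedup:.0f}x")     # 36x faster per shot
print(f"Cost reduction: {cost_ratio:.0f}x")  # 35x cheaper per shot
```

Both ratios land in the same 30-40x band, which is why the article frames this as a change in the scaling law of the business rather than an incremental tool upgrade.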

The next step is that small teams will not just finish effects faster; they will previsualize, generate, revise, and deliver whole sequences inside one AI-native stack. That pushes video production toward the same shift design saw with Figma, where the winning product is the one that combines creation, iteration, and collaboration in one place.