Turning Models Into Video Products
Cristóbal Valenzuela, CEO of Runway, on the state of generative AI in video
A research paper is not a product. Training a model is not a product.
The strategic point is that durable AI companies win by turning raw model capability into a repeatable workflow that saves users time and money. Runway moved from a model playground to a web editor where filmmakers and marketers can remove backgrounds, rotoscope shots, generate scenes, review versions, and collaborate in one place. That is what turns research into software people keep paying for.
Runway’s early product-market fit came from watching filmmakers use ML models for tedious editing chores like rotoscoping, inpainting, transcription, and noise removal. The company then wrapped those capabilities in a browser-based workflow built for speed, not just model demos.
The clearest comparison is with open model ecosystems and horizontal labs. Open-source work like Latent Diffusion spread widely, but Runway’s paid product came from adding video ingestion, rendering, low-latency playback, collaboration, and pricing that mapped to customer value rather than raw compute.
This is also why Runway sits in a different lane from OpenAI, Pika, and OpusClip. Horizontal labs ship general models, and narrower apps solve one job. Runway is trying to own a video-specific stack that helps small teams do work that used to require a larger post-production crew.
Going forward, more video models will become interchangeable and cheaper, pushing value even further toward workflow, proprietary data, and distribution. The companies that matter most will be the ones that make AI video feel reliable inside real production environments, not the ones that only publish stronger benchmarks.