Runway Enables Fully Generated Films

Cristóbal Valenzuela, CEO of Runway, on rethinking the primitives of video

Interview
I think we're not far from being able to completely generate a film.

The important point is that AI video is moving from being a faster editing tool to a full production stack. Runway started by automating painful post-production work like rotoscoping, inpainting, transcription, and background cleanup, then expanded into models that can generate shots, control camera motion, and keep characters consistent across scenes. That trajectory makes complete film generation less a science project and more the logical end state of replacing one production step after another.

  • Runway found early demand from editors who wanted to eliminate frame-by-frame labor, since much of an editor's time was spent on repetitive cleanup. That foothold matters: companies that own cleanup and revision workflows are well placed to own generation next, once users trust them with the rest of the pipeline.
  • By 2025, Runway was no longer just a bag of effects. Its Gen-3 system added camera direction, frame expansion, and character consistency, and a studio deal with Lionsgate gave it access to a large film library for training and preproduction workflows. That is the infrastructure needed to move from short clips toward coherent long-form output.
  • The market is also splitting in a useful way. Horizontal model companies ship general-purpose video generation, while workflow products like Runway and Higgsfield package those models into tools creators actually use to storyboard, revise, and ship work. In practice, complete film generation will likely arrive first through products that manage the whole workflow, not just the raw model.

The next phase is software that can take a treatment, generate scenes, keep style and characters stable, and then let humans direct the result with lightweight edits instead of manual production. As model quality rises and workflow software absorbs more of the pipeline, film creation starts to look less like shooting and assembling footage and more like supervising a generative system.