Creators Fine-Tuning Video Models
Cristóbal Valenzuela, CEO of Runway, on the state of generative AI in video
Valenzuela’s comments point to a shift in where competitive advantage sits: away from hand-labeling giant generic datasets and toward owning the feedback loop between users, workflows, and model customization. For Runway, that matters because video quality is not just about whether a model recognizes objects; it is about whether it learns a studio, brand, or creator’s exact visual taste. A filmmaker using Runway is not preparing abstract training data; they are teaching the system what a usable shot, style, and edit should look like in their domain.
-
Runway’s product has always been full stack, spanning dataset preparation, model training, and deployment inside editing workflows. That makes user behavior unusually valuable: every mask correction, background swap, and accepted output can become signal for what good video work looks like in a specific creative context.
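To make that concrete, here is a minimal sketch of how such edit-session signal could be harvested, assuming a hypothetical event log. The schema, field names, and file name below are illustrative assumptions, not Runway’s actual product or API.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical schema: these field names are assumptions for illustration,
# not anything from Runway's real product.
@dataclass
class EditEvent:
    session_id: str
    action: str          # e.g. "mask_correction", "background_swap"
    input_frame: str     # path or URI of the frame the user started from
    output_frame: str    # path or URI of the result after the edit
    accepted: bool       # did the user keep this output in the final cut?

def harvest_signal(events: list[EditEvent]) -> list[dict]:
    """Keep only accepted outputs: each one is an implicit 'good' label
    for what a usable shot looks like in this user's domain."""
    return [asdict(e) for e in events if e.accepted]

events = [
    EditEvent("s1", "mask_correction", "raw/001.png", "edit/001.png", True),
    EditEvent("s1", "background_swap", "raw/002.png", "edit/002.png", False),
]

# Accepted edits accumulate into a fine-tuning candidate pool.
with open("finetune_candidates.jsonl", "w") as f:
    for record in harvest_signal(events):
        f.write(json.dumps(record) + "\n")
```

The point is that "accepted" is the cheap, abundant label: it falls out of normal work rather than a separate labeling pass.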
-
In practice, end-user data preparation means less hand-labeling millions of frames and more collecting a few hundred or a few thousand real examples from production use, cleaning them, and fine-tuning for one job. That is already how fine-tuning tools are used with LLMs to improve reliability and lower cost on narrow tasks.
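As a hedged sketch of that collect-clean-fine-tune pattern, the snippet below dedupes a harvested example file and carves out a small validation split before the data goes to whatever fine-tuning endpoint is in use. The file name and record fields carry over from the sketch above and are assumptions, not any vendor’s real format.

```python
import json
import random

def load_examples(path: str) -> list[dict]:
    with open(path) as f:
        return [json.loads(line) for line in f]

def clean(examples: list[dict]) -> list[dict]:
    """Drop exact duplicates; real cleaning would also filter near-dupes
    and obviously broken frames."""
    seen, kept = set(), []
    for ex in examples:
        key = (ex["input_frame"], ex["output_frame"])
        if key in seen:
            continue
        seen.add(key)
        kept.append(ex)
    return kept

examples = clean(load_examples("finetune_candidates.jsonl"))
random.seed(0)
random.shuffle(examples)

# Hold out a small validation set to sanity-check the tuned model.
split = int(0.9 * len(examples))
train, val = examples[:split], examples[split:]

for name, subset in [("train.jsonl", train), ("val.jsonl", val)]:
    with open(name, "w") as f:
        for ex in subset:
            f.write(json.dumps(ex) + "\n")
```

A 90/10 split is an arbitrary but common choice; the held-out set is what lets a team verify that a few hundred examples actually improved reliability on their narrow task.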
-
This also explains the split in the market. Tools like Runway are built for power users who want deep control over shots, motion, and consistency. Platforms like OpenArt increasingly bundle fine-tuning, character persistence, and orchestration so users can personalize models without building them from scratch.
-
The next phase of generative video will look more like software customization than one-size-fits-all AI. The winners will be the companies that make it easy for creators, marketers, and studios to turn their own footage, edits, and preferences into proprietary model behavior, then feed those improvements back into faster, cheaper production workflows.