OpenArt's Move From Images to Video
Coco Mao, CEO of OpenArt, on building the TikTok for AI video
The micro-pivot shows that fast growth in AI images was not enough: the real moat is owning the whole storytelling workflow, not a single generation step. OpenArt grew by making image creation easy and capturing search demand, but management concluded that image tools were becoming interchangeable. The new direction turns a rough idea into a script, storyboard, character set, clips, and finished video inside one product.
-
OpenArt’s early engine was breadth and ease of use. It offered 100-plus fine-tuned image models, no-prompt editing workflows, and SEO landing pages that pulled in artists, hobbyists, and SMBs. That approach carried the company to roughly $12M ARR by February 2025, but it did not create durable product separation from Midjourney, Ideogram, or other image apps.
-
The pivot is concrete in the product. Instead of asking users to prompt every frame, OpenArt is automating the sequence people currently stitch together across ChatGPT, image generators, image-to-video tools, and audio editors. Features like consistent characters matter because they solve the continuity problem that breaks story-based content.
-
This puts OpenArt in a different lane from both power-user model labs and pure infrastructure. Runway and Sora skew toward users who want clip-level control. Fal.ai sells fast model access to developers. OpenArt is aiming at creators and SMBs who want a push-button path from idea to social video, similar to how Higgsfield packaged multiple models for marketers.
Going forward, the winners in generative media are likely to be the products that hide model complexity and become the default place where characters, templates, and story assets live. If OpenArt keeps turning manual creative work into a repeatable flow for short-form video, the company can move from an SEO-driven image app into a higher-retention storytelling platform.