Meta Could Commoditize Video Generation
Luma AI
Meta matters here because distribution can beat product quality in consumer AI. If Meta puts video generation inside the apps where billions already post, message, and make Reels, a huge share of casual demand stops being a paid software purchase and becomes a free button inside an existing social workflow. That is the same pattern that hurt AI writing startups once ChatGPT and built-in writing tools made basic generation feel abundant.
-
Luma is still selling a standalone creation tool. It had about $8M of annualized revenue in December 2024, charges consumers by subscription and credits, and prices API usage around $0.35 per 5-second 720p clip. That works when generation itself is scarce, but gets squeezed if big platforms train users to expect free output.
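To make the pricing pressure concrete, here is a back-of-envelope sketch of what that API rate implies for a creator. Only the $0.35 per 5-second 720p clip figure comes from the text; the retry counts are illustrative assumptions, since creators typically generate several takes per clip they keep.

```python
# Back-of-envelope unit economics for paid video generation.
# Cited figure: $0.35 per 5-second 720p clip. Everything else
# (takes per clip, final runtime) is an illustrative assumption.

PRICE_PER_CLIP = 0.35  # USD per 5-second 720p clip (cited rate)
CLIP_SECONDS = 5

def cost_per_minute(price_per_clip: float = PRICE_PER_CLIP,
                    clip_seconds: int = CLIP_SECONDS) -> float:
    """Cost of one minute of kept footage at the clip rate."""
    return price_per_clip * (60 / clip_seconds)

def cost_with_retries(final_seconds: float, takes_per_clip: int) -> float:
    """Cost when each kept clip requires several generated takes."""
    clips_kept = final_seconds / CLIP_SECONDS
    return clips_kept * takes_per_clip * PRICE_PER_CLIP

print(f"1 minute of kept footage: ${cost_per_minute():.2f}")          # $4.20
print(f"30s final cut, 4 takes/clip: ${cost_with_retries(30, 4):.2f}")  # $8.40
```

A few dollars per finished minute is trivial for a studio but meaningful for a casual user, which is exactly the segment a free in-app button would capture first.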
-
Meta has already moved from research to product surface area. By July 2024 it was rolling out image generation inside feeds, stories, comments, and messages across Facebook, Instagram, Messenger, and WhatsApp. In June 2025 it launched AI video editing in the Meta AI app, meta.ai, and Edits, explicitly framing that as a first step toward video generation and editing across its apps.
-
The closest precedent is AI writing. Jasper and Copy.ai built fast-growing prosumer businesses on top of frontier models, then saw their SMB bases flatten or decline after ChatGPT and built-in copilots made generic writing cheap. In video, that precedent favors standalone platforms like Runway and Luma only if they offer better control, consistency, and workflow depth than a social app can.
-
The market is heading toward a split. Basic social video generation will be embedded, cheap, and widely distributed by Meta, Google, and OpenAI. Standalone winners will be the companies that move up the stack into repeatable professional workflows, where teams need scene consistency, brand control, editing tools, and production systems that a free consumer feature does not provide.