OpenAI Redirects GPUs Toward Codex
Why Sora failed
This shift signals that OpenAI now sees coding as a better use of scarce compute than consumer video. Coding agents create daily, sticky workflow use inside companies: a developer can prompt an agent to edit files, run tests, fix bugs, and ship code, then keep paying every month. The same GPU can support products that also feed high-value code-edit data back into model training and open a larger enterprise budget pool than a standalone video app.
---
The coding market is already producing real software revenue. OpenAI was reported to be pursuing Windsurf for $3B when Windsurf was at $40M ARR in February 2025, and the attraction was not just revenue but ownership of the IDE surface, the user workflow, and the 100B-plus daily code-edit tokens flowing through the product.
---
Anthropic showed why this matters. By March 2025, about 80% of Anthropic's revenue came from API usage, much of it from coding products like Cursor, Bolt.new, and Windsurf, and Claude Code then grew to $400M in annualized revenue by July 2025 as Anthropic pushed from model supplier into direct coding products.
---
OpenAI has the distribution to fight back fast. Codex now spans app, CLI, IDE, and cloud, and OpenAI can bundle coding into ChatGPT subscriptions and enterprise plans. Its broader business had reached an estimated $25B revenue run rate by February 2026, giving it room to spend aggressively for share in coding and enterprise workflows.
The next phase is labs competing less on raw model quality alone and more on who owns the developer workbench inside the enterprise. The likely winner is the company that turns coding agents into a default daily tool across terminals, IDEs, cloud environments, and workplace contracts, then uses that usage loop to improve its models faster than video-first competitors can match.