Flat ChatGPT Pricing Caps Revenue

ChatGPT’s fixed-price consumer subscription lacks usage-based expansion, and ads have yet to meaningfully capture the revenue opportunity

The main weakness in ChatGPT’s consumer business is that a heavy user and a light user often pay the same $20, which caps spend just as AI products are becoming both more expensive to run and more useful. OpenAI’s consumer revenue is still led by paid subscriptions, while its newer ad tests are limited to the Free and Go tiers. By contrast, coding and agent products are adding usage-driven expansion, where more work done means more revenue per user.

  • OpenAI’s largest revenue source is still ChatGPT subscriptions, with Plus at $20 per month and roughly 15 million active subscribers by mid-2025. That is a huge business, but it behaves more like Netflix than AWS, because revenue does not automatically rise when a user asks 10x more questions or runs heavier workflows.
  • OpenAI has started showing ads inside ChatGPT, but only in a limited beta for logged-in US adults on the Free and Go tiers. Ads appear at the bottom of answers, are excluded from sensitive topics, and require large minimum advertiser commitments. That means ads are real, but still too narrow to show up as a major revenue unlock yet.
  • The market is moving toward metered AI products. Perplexity pushed heavy users from flat plans toward Max, Enterprise Max, and extra Computer Credits. Anthropic’s Claude Code reached $2.5B annualized revenue by February 2026, with 50% from enterprise. These products expand because customers pay more as the system does more work.
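The difference between the two pricing models above can be sketched in a few lines. This is a hypothetical illustration: the fee, allowance, and per-unit figures are invented for the example and are not OpenAI's, Perplexity's, or Anthropic's actual prices.

```python
def flat_revenue(monthly_fee: float, units_used: float) -> float:
    """Flat subscription: revenue ignores usage entirely (units_used is unused)."""
    return monthly_fee


def metered_revenue(base_fee: float, included_units: float,
                    units_used: float, price_per_extra_unit: float) -> float:
    """Usage-based plan: revenue grows once usage exceeds the included allowance."""
    overage = max(0.0, units_used - included_units)
    return base_fee + overage * price_per_extra_unit


# A light user and a heavy user doing 10x the work, under each model.
# Assumed numbers: $20 base fee, 200 included units, $0.05 per extra unit.
light_units, heavy_units = 100, 1_000

print(flat_revenue(20.0, light_units))                    # 20.0
print(flat_revenue(20.0, heavy_units))                    # 20.0 — capped
print(metered_revenue(20.0, 200, light_units, 0.05))      # 20.0
print(metered_revenue(20.0, 200, heavy_units, 0.05))      # 60.0 — expands
```

Under the flat model the heavy user is a pure cost; under the metered model the same 10x usage triples revenue, which is the "more work done means more revenue per user" dynamic described above.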

The next leg of monetization is likely to come from turning ChatGPT from a flat-priced chatbot into a layered product, with ads for broad consumer usage and usage-based pricing for agentic work. The companies that win from here will be the ones that can convert engagement into higher spend per user, not just more users.