OpenAI becomes cloud neutral supplier

OpenAI can now serve its models through both AWS and Google Cloud.

This marks OpenAI’s shift from being effectively tied to Azure to acting like a model supplier that can meet customers where their compute and procurement already sit. In practice, that means an enterprise running most workloads on AWS or Google Cloud can buy OpenAI capacity without replatforming around Microsoft, and OpenAI can route demand across more infrastructure when a new model launch creates a sudden traffic spike.

  • For years, Microsoft was the main path for enterprises to use OpenAI at scale, especially through Azure OpenAI. The newer setup lowers that dependency and gives OpenAI more room to sell directly into enterprise and developer accounts that standardize on AWS or Google Cloud.
  • This also changes the reliability economics. When one frontier model gets hot and usage surges, the bottleneck is GPUs, not demand. More cloud channels mean more places to secure clusters, spread inference load, and avoid the degraded performance that has repeatedly pushed developers toward rival labs.
  • It pulls OpenAI closer to the playbook used by more enterprise-oriented rivals like Cohere and Mistral, which have emphasized flexibility in where their models run. The difference is that OpenAI is doing it from a position of massive existing demand, with $25B annualized revenue by February 2026 and over $500B in disclosed cloud commitments.

The next step is a more cloud-neutral OpenAI that sells through every major channel while keeping its own apps, like ChatGPT and Codex, as the highest-margin demand engine. If that works, the winner in frontier AI will be the lab that not only has the best model of the month but also the widest pipe to deliver it everywhere customers already are.