Microsoft buys GPUs from CoreWeave

CoreWeave, at roughly $2B in revenue, was winning not just on price but on something more valuable: immediate access to scarce Nvidia capacity at a moment when Azure could not provision GPUs fast enough for OpenAI-driven demand. In practice, Microsoft kept selling AI services to enterprises while buying wholesale compute from a smaller rival to keep Azure OpenAI and ChatGPT workloads moving, which turned an ostensible competitor into CoreWeave's fastest path to hyperscale revenue.

  • Microsoft and CoreWeave compete for the same large AI infrastructure workloads, but the bottleneck was GPU supply, not customer demand. CoreWeave had unusually strong Nvidia allocation, while Microsoft was still racing to add enough capacity inside Azure, which made outsourcing rational even for a hyperscaler.
  • The product overlap is real, but the business models differ. Azure sells a broad enterprise cloud stack, while CoreWeave specializes in dense GPU clusters for training and inference. That specialization let CoreWeave stand up large reserved clusters quickly, which is exactly what a buyer like Microsoft needed during a shortage.
  • This kind of frenemy relationship is becoming a pattern in GPU cloud. Smaller specialists build capacity faster and sell it upstream to bigger clouds, then use those contracts to borrow more, buy more chips, and widen the gap over peers like Lambda and Together that serve smaller or more flexible workloads.

Going forward, the strategic question is whether CoreWeave can convert stopgap demand into durable platform demand before Azure closes its own capacity gap. If CoreWeave can keep adding power, software, and long-term contracts faster than hyperscalers can absorb demand internally, it stays a core supplier rather than a temporary overflow valve.