Power Scarcity Over GPU Supply
Lambda customer at Iambic Therapeutics on GPU infrastructure choices for ML training and inference
The bottleneck is shifting from buying chips to feeding them electricity at scale. Once Nvidia can ship enough top-end GPUs, the winners are the clouds that already control power, land, and utility timelines, because a rack of H100s or B200s is useless until a data center can actually energize and cool it. That favors hyperscalers and the largest GPU clouds over smaller neoclouds that can source servers but not megawatts.
-
CoreWeave has moved furthest in turning power access into a moat. It has roughly 850MW of active power capacity, about 3.1GW contracted, and is pursuing multi-gigawatt buildouts, including a West Texas project with on-site generation. That is a different game than simply reserving more Nvidia supply.
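To put those megawatt figures in GPU terms, here is a rough back-of-envelope sketch. The per-GPU power draw, host overhead, and PUE values are our illustrative assumptions, not figures from CoreWeave or this piece:

```python
# Back-of-envelope: how many H100-class GPUs can a given site power feed?
# Assumptions (illustrative, not sourced from the piece):
#   ~0.7 kW board power per H100 SXM GPU,
#   ~0.3 kW per-GPU share of host CPU, networking, and storage,
#   PUE of ~1.3 for cooling and facility overhead.
GPU_TDP_KW = 0.7
HOST_OVERHEAD_KW = 0.3
PUE = 1.3  # facility power / IT power

def gpus_supported(site_mw: float) -> int:
    """Approximate GPU count a site of `site_mw` megawatts can energize."""
    it_kw_per_gpu = GPU_TDP_KW + HOST_OVERHEAD_KW
    facility_kw_per_gpu = it_kw_per_gpu * PUE
    return int(site_mw * 1000 / facility_kw_per_gpu)

print(gpus_supported(850))   # active capacity cited above, ~hundreds of thousands of GPUs
print(gpus_supported(3100))  # contracted capacity, ~millions of GPUs
```

Under these assumptions, 850MW supports on the order of 650,000 GPUs, which is why power capacity rather than chip allocation sets the ceiling on fleet size.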
-
Lambda has won with flexibility and developer experience, letting teams slice large clusters into smaller jobs and pay only for GPU hours consumed. But its own research framing now points to the next requirement: raising capital for data-center buildout before the market becomes constrained by sites, interconnects, and utility contracts.
-
Crusoe shows the alternative path: move compute to power instead of waiting for the grid. By building near stranded natural gas, it cuts electricity costs sharply and turns energy sourcing into a product advantage. That model is more capital intensive, but it explains why power procurement is becoming as strategic as GPU procurement.
Going forward, GPU cloud competition will look less like a race to list hourly GPU prices and more like a race to assemble power portfolios. The strongest providers will pair chip access with long-dated energy contracts, owned or controlled sites, and purpose-built data centers, while everyone else is pushed toward brokerage, software layers, or narrower regional niches.