Lambda builds owned GPU data centers
Lambda Labs
Lambda is trying to turn GPU cloud from a brokerage business into an infrastructure business with lower unit costs and tighter control over delivery. Owning more of the data center stack matters because training customers buy reserved clusters for 12 to 18 months, need identical GPUs wired together with high-speed InfiniBand, and care about lead times and uptime as much as headline price. More owned capacity makes those commitments easier to fulfill and less dependent on landlords.
-
The Kansas City site is a concrete jump in scale, not just a land bank. Lambda said the facility is expected to open in early 2026 with 24MW, more than 10,000 NVIDIA Blackwell Ultra GPUs, and room to expand past 100MW, which pushes it toward the same AI-factory logic that larger neoclouds use to lock in future supply.
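A quick sanity check on those announced figures: dividing the facility's power by its GPU count gives the all-in power budget per accelerator, which has to cover not just the GPU itself but CPUs, networking, and cooling. The 24MW and 10,000-GPU numbers come from the announcement above; the per-accelerator draw used for the overhead estimate is an illustrative assumption, not a figure from the source.

```python
# Back-of-envelope check on the Kansas City figures.
facility_mw = 24          # announced facility power
gpu_count = 10_000        # announced GPU count (a floor: "more than 10,000")

# All-in facility power available per GPU, in watts.
facility_w_per_gpu = facility_mw * 1_000_000 / gpu_count
print(f"Facility budget per GPU: {facility_w_per_gpu:.0f} W")  # 2400 W

# Assumed accelerator draw (illustrative, not from the source).
assumed_gpu_w = 1_400
overhead_w = facility_w_per_gpu - assumed_gpu_w
print(f"Implied non-GPU overhead per GPU: {overhead_w:.0f} W")
```

Roughly 2.4kW of facility power per GPU leaves on the order of a kilowatt each for everything around the accelerator, which is in the range you would expect for a dense training hall rather than a loose colocation footprint.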
-
This also changes the cost structure. Lambda already prices H100 PCIe around $2.49 per hour versus roughly $4.25 at CoreWeave, and it has said owning facilities should reduce long-term per-unit costs versus leasing. For customers running reserved training clusters at $500,000 to $1M per month, even small cost and reliability gains matter.
-
The closest comparison is the broader neocloud playbook. CoreWeave is locking in power at far larger scale, with about 850MW active capacity and 3.1GW contracted by late 2025, while Crusoe is pairing cloud contracts with long-duration infrastructure and power deals. Lambda is following the same path, but aimed at developer and midmarket training demand rather than hyperscaler-sized anchor tenants.
The next phase of competition in GPU cloud will be won less by who can source a few more chips, and more by who can secure power, cooling, and ready-to-use space before demand arrives. Lambda's buildout moves it closer to that model, where the product is no longer just cheap GPUs on a dashboard, but guaranteed clusters delivered on time at predictable economics.