Lambda shifts to AI compute cloud
Lambda Labs
Lambda is becoming a pure-play AI compute provider, not a workstation seller. The important shift is in revenue mix, not just size. Cloud revenue overtook hardware in mid 2024, with cloud reaching about $250M annualized versus roughly $150M for hardware, and total annualized revenue rising again to $505M by May 2025. That points to stronger repeat usage, because GPU rental grows when customers keep training models, reserve clusters, and expand workloads over time.
-
The product is sticky because teams do not just rent a chip; they rent a working training setup. Customers use Lambda for reserved clusters with InfiniBand networking, custom storage, Kubernetes support, and 18-month-plus contracts, with one interviewed customer spending $500,000 to $1M per month on Lambda training workloads alone.
-
Lambda is winning a specific slice of the market. CoreWeave scaled faster by signing giant enterprise deals, reaching an estimated $2B in 2024 revenue versus Lambda at a projected $600M, but Lambda repeatedly shows up as the lower-cost option for researchers and growth-stage teams that care more about price and flexibility than massive dedicated deployments.
-
That pricing gap matters because cloud GPUs are increasingly shopped like infrastructure inputs. Lambda has been positioned at about $2.49 per H100 PCIe hour versus $4.25 at CoreWeave, and customer interviews show final vendor choice can come down to price per GPU once networking and specs are comparable.
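To make the pricing gap concrete, here is a minimal cost-arithmetic sketch using the list rates quoted above. The 256-GPU cluster size is a hypothetical illustration, and real reserved-contract pricing is negotiated, so the absolute figures are only indicative of how the gap compounds at scale.

```python
# Illustrative arithmetic only, based on the quoted list rates;
# negotiated reserved-cluster pricing will differ in practice.
LAMBDA_RATE = 2.49     # $/H100 PCIe GPU-hour (quoted)
COREWEAVE_RATE = 4.25  # $/H100 GPU-hour (quoted)

def monthly_cost(rate_per_gpu_hour: float, gpus: int, hours: float = 730.0) -> float:
    """Cost of running a cluster flat-out for one month (~730 hours)."""
    return rate_per_gpu_hour * gpus * hours

gpus = 256  # hypothetical reserved cluster size
lambda_cost = monthly_cost(LAMBDA_RATE, gpus)
coreweave_cost = monthly_cost(COREWEAVE_RATE, gpus)
print(f"Lambda:    ${lambda_cost:,.0f}/mo")
print(f"CoreWeave: ${coreweave_cost:,.0f}/mo")
print(f"Delta:     ${coreweave_cost - lambda_cost:,.0f}/mo")
```

At these list rates, a flat-out 256-GPU month differs by roughly $330K between the two providers, which is why comparably specced bids can be decided on price per GPU alone.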
The next phase is moving from cheaper GPU hours to a more complete developer cloud. As the raw rental market gets more competitive, Lambda's upside comes from turning reserved clusters, one-click provisioning, and researcher-friendly tooling into an AWS-like stack that keeps customers on platform as they move from experimentation to full production AI workloads.