No side quests for Lambda
Jan-Erik Asplund
TL;DR: After killing its on-prem hardware business and landing NVIDIA and Microsoft as anchor customers, Lambda has crossed over from "developer-friendly CoreWeave" to competing on the same turf, selling to frontier labs and hyperscalers. Sacra estimates Lambda hit $760M in annualized revenue at the end of 2025, up 79% YoY. For more, check out our full report and dataset on Lambda.


We first covered Lambda in February 2024 in our CoreWeave report as the $250M/year developer-friendly alternative to CoreWeave, then again in September 2025 at $505M/year as it hired Morgan Stanley, JPMorgan, and Citi for a 2026 IPO.
Key points from our 2026 update via Sacra AI:
- Sacra estimates Lambda hit $760M in annualized revenue at the end of 2025, up 79% YoY from $425M, valued at $5.9B in its $1.5B Series E (TWG Global) for a 7.8x multiple—compared to CoreWeave (NASDAQ: CRWV) at $5.13B in 2025 revenue, up 168% YoY, with a ~$63B market cap for a 12x multiple after having gone public at $2B in revenue.
- After growing its on-premises business selling pre-configured GPU workstations and servers to ~$100M as of 2024, serving 97% of top U.S. research universities, Lambda shut it down in August 2025 as GPUs became too scarce and valuable to sell outright rather than rent by the hour at hyperscaler scale.
- A month after the hardware wind-down, Lambda deprecated its Model Inference API and Chat AI Assistant products to concentrate GPU capacity on multi-year training contracts with predictable utilization and locked-in revenue, ceding self-serve inference to the likes of RunPod ($120M annualized revenue), Modal ($50M annualized revenue), and Baseten ($585M raised, Greylock).
- Going all-in on large-scale supercluster build-outs as a pure-play neocloud, Lambda signed a $1.5B four-year deal with NVIDIA to host 18,000 GPUs for its own researchers, and a separate multi-billion-dollar, multi-year Microsoft deployment deal for tens of thousands of GPUs, mirroring CoreWeave's pre-IPO anchor contracts with NVIDIA and Microsoft.
- The shift upmarket increasingly puts Lambda into head-to-head competition with CoreWeave for frontier lab training contracts, with both winning deals through their priority GPU access courtesy of NVIDIA's drive to seed non-hyperscaler customer-partners, while hyperscalers (AWS, Google, Azure) focus on serving high-uptime infrastructure for reliable inference at scale.
For more, check out this other research from our platform:
- Lambda Labs (dataset)
- Lambda's IPO
- Lambda customer at Iambic Therapeutics on GPU infrastructure choices for ML training and inference
- Voltage Park customer at robotics company on GPU pricing and robotics computing needs
- RunPod customer at Segmind on GPU serverless platforms for AI model deployment
- Fluidstack (dataset)
- Crusoe (dataset)
- CoreWeave (dataset)
- Together AI (dataset)
- CoreWeave: the $465M/year cloud GPU startup growing 1,760% YoY
- GPU clouds growing 1,000% YoY
- Samiur Rahman, CEO of Heyday, on building a production-grade AI stack
- Scale (dataset)
- OpenAI (dataset)
- Anthropic (dataset)