CoreWeave
NASDAQ: CRWV
Cloud GPU provider offering production-grade compute infrastructure for AI model training and deployment

Revenue

$5.13B

2025

Valuation

$23.00B

2024

Funding

$3.50B

2024

Growth Rate (y/y)

730%

2024

Details
Headquarters
Roseland, NJ
CEO
Michael Intrator
Milestones
FOUNDING YEAR
2017
IPO
March 2025

Revenue

Sacra estimates that CoreWeave generated $5.1B in revenue in 2025, up 170% YoY from $1.9B in 2024, driven by surging demand for GPU compute from cloud providers, LLM companies, and AI application developers.

The quarterly ramp was steep throughout the year: $981.6M in Q1, $1.213B in Q2, $1.365B in Q3, and $1.572B in Q4, with adjusted EBITDA of $606.1M, $753.2M, $838.1M, and $898M across those quarters respectively. FY2025 net loss widened to $1.167B (vs. $863M in FY2024), while full-year adjusted EBITDA reached $3.093B (vs. $1.219B in FY2024). Microsoft accounted for approximately 67% of FY2025 revenue, underscoring the degree to which near-term revenue remains concentrated even as the backlog diversifies.

The backlog is anchored by several landmark customer agreements. OpenAI contracted up to ~$22.4B in total commitments (via a March 2025 initial deal of up to $11.9B, expanded by up to $4B in May 2025 and up to $6.5B in September 2025). Meta committed up to ~$14.2B through December 2031 under an original agreement, then expanded with an additional ~$21B agreement through December 2032, implying up to roughly $35.2B of total Meta commitments if fully utilized. Anthropic signed a multi-year agreement in April 2026 to support development and deployment of Claude models, with compute coming online later in 2026. Jane Street signed a $6B AI cloud agreement in April 2026, marking a significant expansion of CoreWeave's customer base into quantitative finance. Nvidia signed a $6.3B order form under a take-or-pay capacity backstop arrangement through April 2032, and $10B of the original $17B in booked contracts came from Microsoft. Revenue backlog stood at $66.8B as of December 31, 2025.

Valuation & Funding

CoreWeave went public on March 28, 2025, trading on Nasdaq under the ticker CRWV. The IPO priced at $40.00 per share, with 37.5 million shares offered (36.59M sold by CoreWeave, 910K by selling stockholders). Prior to the IPO, CoreWeave was valued at $23 billion following a secondary share sale with participation from Jane Street, Fidelity Management, and BlackRock.

In January 2026, Nvidia made a $2.0B private placement investment at $87.20 per share (22,935,780 Class A shares), as part of an expanded collaboration targeting more than 5GW of AI factories by 2030. In April 2026, Jane Street made a $1B investment in CoreWeave alongside signing its $6B cloud agreement.

CoreWeave has raised approximately $28B in combined equity and debt financing over the 12 months through March 2026. Major debt issuances include:

- $2.0B in 9.250% Senior Notes due 2030 (May 2025)
- $1.75B in 9.000% Senior Notes due 2031 (July 2025)
- a $2.6B delayed-draw secured term loan (July 2025)
- $2.6B in 1.75% convertible senior notes due 2031 (December 2025)
- $1.75B in 9.750% Senior Notes due 2031 (April 2026)
- $3.5B in 1.75% convertible senior notes due 2032 (April 2026, with an option for purchasers to buy up to an additional $500M)

In March 2026, CoreWeave closed an $8.5B financing facility (DDTL 4.0), the first investment-grade-rated GPU-backed financing: initially drawable up to ~$7.5B and expandable to $8.5B as assets stabilize, rated A3/A(low), carrying a SOFR + 2.25% floating tranche and a ~5.9% fixed tranche, and maturing March 2032. CoreWeave also expanded its revolving credit facility from $650M to $1.5B in May 2025, then to $2.5B in November 2025.

Notable investors include Nvidia as a strategic partner, while Coatue led the $1.1B Series C round. Additional key investors include Magnetar, Macquarie Capital, and Pure Storage.

Product

CoreWeave was founded in 2017 as Atlantic Crypto, an Ethereum mining company that bought Nvidia graphics processing units (GPUs) both to mine its own crypto and rent out GPU servers to other crypto miners. In early 2019, Atlantic Crypto changed its name to CoreWeave and pivoted to providing GPUs-on-demand for generalized computing purposes rather than focusing on crypto.

Through this period, CoreWeave built infrastructure for delivering that GPU compute across seven global facilities, positioning it well for the flood of demand for GPU compute that arrived in 2022 with the generative AI boom.

Today, CoreWeave is a GPU-first cloud platform and integrated AI development environment that lets developers and businesses access compute remotely the same way they would with Amazon Web Services or Azure—while also providing the MLOps tooling needed to build, track, and refine models on top of that infrastructure.

What differentiates CoreWeave is its far greater availability of the high-end GPUs designed for training and running large, complex AI workloads. With 45,000 GPUs, CoreWeave is the largest private provider of GPUs in North America. CoreWeave was one of the first cloud providers to offer access to Nvidia H100 Tensor Core GPUs on its platform, and its most-favored-nation relationship with Nvidia has allowed it both to scale faster to meet demand and to offer higher-powered GPUs at a time when AWS and Azure customers report persistent resource shortages. Its current GPU lineup includes the NVIDIA HGX B300 (added March 2026), and CoreWeave expects to be among the first providers to deploy NVIDIA Vera Rubin NVL72 and Vera CPU racks in production in the second half of 2026.

Consider AI Dungeon, an AI text adventure game built on GPT-2: serving its 1.6M users drove response times up and made it too expensive to keep running the product on AWS via the Cortex model-serving platform. Switching to Tesla V100 GPUs delivered through CoreWeave's cloud cut AI Dungeon's response times in half.

CoreWeave has moved up the value chain through a series of MLOps acquisitions, positioning itself as an integrated compute-plus-developer-tooling platform rather than a pure infrastructure provider. The centerpiece is Weights & Biases—a leading experiment-tracking and model-monitoring platform acquired for approximately $1.029B (closed May 2025), which has since added capabilities for reinforcement learning and agent-development workflows. Rounding out the stack are OpenPipe, a reinforcement learning fine-tuning platform (acquired September 2025), and Marimo, an AI-native notebook environment (acquired October 2025), giving developers an end-to-end workflow from raw compute through model iteration and deployment.

Business Model

CoreWeave, like other cloud providers, operates on a model where it rents out computing resources (such as GPU power) to businesses and developers.

CoreWeave's ~85% gross margins come from the difference between the cost of maintaining these resources (including the initial investment in hardware, ongoing electricity, cooling, maintenance, and support staff costs) and the revenue generated from customers paying to use these resources.

Customers pay CoreWeave for the computing power they use, typically on a per-hour basis. This payment model is attractive to customers because it allows for flexible scaling of resources based on demand, and they only pay for what they use. CoreWeave sets the rental price based on market demand, the specific GPU model (newer models with better performance command higher prices), and the operational costs to ensure a profitable margin.

CoreWeave, like AWS, has an expansion motion built around layering additional services on top of the basic product of GPU compute. So far, CoreWeave has added specialized solutions for data storage, networking, and CPU compute, each priced on a similar pay-as-you-go basis.

Expenses

CoreWeave incurs a significant upfront cost when purchasing GPUs and setting up data centers. However, these GPUs have a useful life of several years, during which CoreWeave can continually rent them out. The operational costs include electricity (GPUs are power-hungry), cooling (to prevent overheating), and staffing (for maintenance and customer support).

Improving the efficiency of data center operations (e.g., reducing electricity consumption, negotiating better rates for electricity, or improving cooling systems) can lower operational costs and thus improve margins.

Margin

The cost of a GPU for CoreWeave includes the purchase price and the operational costs over its lifespan. The revenue from a GPU is the cumulative amount paid by customers to rent the GPU over time. CoreWeave aims to maximize the utilization of each GPU to ensure that the revenue generated far exceeds the cost.

Margins are generally lowest on CoreWeave's higher-end GPUs. For example, a high-end H100 PCIe card might cost CoreWeave roughly $30,000. That GPU is then rented out at an average of $4.25 per hour. Assuming an 80% utilization rate, it would generate roughly $29,473 in revenue per year ($4.25/hour × 24 hours/day × 365 days/year × 80% utilization), roughly enough to recoup the purchase price within the first year even without hitting 100% utilization.

However, cheaper GPUs like the A40, which CoreWeave could have bought in bulk in 2021 before the generative AI boom, could generate much greater margins. At 80% utilization, an A40—which had a sticker price of $4,500 three years ago and is now rented out by CoreWeave at $1.278 per hour—could generate $8,877 in revenue every year.
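The payback math above can be sketched as a simple model. The sticker prices and hourly rates come from the text; the model deliberately ignores electricity, cooling, and staffing costs, which would lengthen actual payback periods:

```python
def annual_gpu_revenue(hourly_rate: float, utilization: float) -> float:
    """Revenue one GPU generates per year at a given utilization rate."""
    return hourly_rate * 24 * 365 * utilization

def payback_years(purchase_price: float, hourly_rate: float,
                  utilization: float = 0.8) -> float:
    """Years of rental revenue needed to recoup the purchase price
    (ignoring operational costs like electricity and cooling)."""
    return purchase_price / annual_gpu_revenue(hourly_rate, utilization)

# H100 PCIe: ~$30,000 sticker, rented at ~$4.25/hour
h100_payback = payback_years(30_000, 4.25)   # ~1 year at 80% utilization

# A40: ~$4,500 sticker (2021), rented at ~$1.278/hour
a40_payback = payback_years(4_500, 1.278)    # ~6 months at 80% utilization
```

The contrast is the point: the H100 roughly breaks even on hardware in year one, while the cheaper A40 pays for itself in about half a year and then compounds margin for the rest of its useful life.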

Competition


The market for GPU cloud services is highly competitive. Key players include the major cloud providers (Amazon Web Services, Google Cloud, and Microsoft Azure) as well as upstarts like Lambda Labs and Together AI, each offering distinct advantages and targeting different segments of the AI and machine learning industry.

Big Cloud

The biggest long-term competition for CoreWeave is likely to be the three major cloud providers: Google Cloud ($75B in revenue in 2023), Amazon Web Services ($80B in revenue in 2023), and Microsoft Azure ($26B in revenue in 2023). With far greater revenue scale—vs. CoreWeave's ~$465M in 2023—the big cloud platforms have the resources to invest both in acquiring GPUs and in developing their own silicon alternatives to Nvidia's GPUs.

So far, CoreWeave has been able to outmatch the biggest cloud providers on access to GPUs because they've enjoyed preferential treatment from Nvidia, which has allocated GPUs away from Amazon, Google and Microsoft and towards CoreWeave. Notably, CoreWeave is the only major cloud provider customer of Nvidia's that is not developing its own AI chips to try to compete with Nvidia, making it a good customer for Nvidia to support.

Lambda Labs

Like CoreWeave, Lambda Labs is a cloud provider that purchases GPUs from Nvidia and rents them out to AI companies and companies building AI features. Also like CoreWeave, Lambda Labs has received generous allocations of Nvidia GPUs; it was in talks with Nvidia about an investment in 2023, but as of February 2024, that deal had not happened.

Lambda Labs is generally positioning itself as a better option for smaller companies and developers working on less intensive computational tasks, offering Nvidia H100 PCIe GPUs at a price of roughly $2.49 per hour, compared to CoreWeave at $4.25 per hour. On the other hand, Lambda Labs does not offer access to the more powerful HGX H100—$27.92 per hour for a group of 8 at CoreWeave—which is designed for maximum efficiency in large-scale AI workloads.

Lambda Labs generated about $20M in revenue in 2020 and, as of July 2023, was projecting $250M in 2023 and $600M in 2024. Lambda Labs is backed by Thomas Tull’s US Innovative Technology fund, B Capital, SK Telecom, Crescent Cove, Mercato Partners, 1517 Fund, Bloomberg Beta, and Gradient Ventures.

Together

Together is fundamentally a GPU reseller: it rents GPUs from CoreWeave, from big cloud platforms like Google Cloud, and from other sources (academic institutions, crypto miners, other companies), then rents that compute out to startups and AI companies, bundling it with software for training and fine-tuning open source AI models like Meta's Llama 2, Stability AI's Stable Diffusion, and its own RedPajama.

Sacra estimates that Together hit $10M in annual revenue run rate at the end of 2023, with 90% of that revenue coming from Forge, their bundled compute-and-training product that launched in June 2023. Forge promises A100 and H100 Nvidia server clusters at 20% of the cost of AWS.

TAM Expansion

To date, CoreWeave's rapidly accelerating growth has been driven by high demand for GPUs and compute combined with low supply. CoreWeave's favored-partner status with Nvidia has allowed it to offer better availability than the major cloud platforms while also undercutting them on price.

Looking forward, the key dynamics in understanding CoreWeave's durable advantage hinge on (1) the long-term state of the GPU industry, (2) CoreWeave's ability to build a differentiated AI compute platform, and (3) its expanding power and infrastructure footprint.

GPUs

At the root of Nvidia's GPU shortage is a bottleneck at TSMC (Taiwan Semiconductor Manufacturing Company), specifically in chip-on-wafer-on-substrate (CoWoS) packaging capacity, which advanced GPUs require during manufacturing. TSMC expects the current shortage to last until about March 2026 and has announced plans to build a $2.9B packaging facility, operational in 2027, that will further alleviate constraints.

The major cloud providers, as well as companies like Tesla, Meta and OpenAI, wanting to escape the dynamics of this shortage, have all begun or accelerated work on their own AI processors. That said, they're also dependent on TSMC to actually make their chips—and with Nvidia being one of TSMC's biggest and longest-term customers, Nvidia could still have an advantage on manufacturing, at least until shortages are completely alleviated.

Tech

CoreWeave's infrastructure has been designed from the ground up to serve GPU compute at scale—going back to 2017, when the company was working on Ethereum mining as Atlantic Crypto. That architectural focus has produced meaningful performance advantages: CoreWeave reported record-breaking LLM benchmark results using Nvidia HGX H100 instances, with its platform coming in 29x faster than the next-fastest competitor. That result is a sign that CoreWeave could compete with the major cloud providers even if the present GPU shortages come to an end.

CoreWeave's acquisitions of Weights & Biases, OpenPipe, and Marimo represent a deliberate move up the value chain into developer tooling and MLOps, expanding the addressable market beyond raw compute toward the broader AI development workflow.

Infrastructure Scale

CoreWeave's infrastructure ambitions extend well beyond its current cluster footprint. Active power capacity stands at roughly 850MW as of Q4 2025—up ~260MW in that quarter alone—with total contracted power reaching ~3.1GW. That contracted base is itself a stepping stone: CoreWeave's collaboration with Nvidia targets more than 5GW of AI factories by 2030, incorporating multiple future Nvidia platform generations including Rubin GPUs, Vera CPUs, and Bluefield storage systems.

One concrete expression of this buildout is Project Horizon, a West Texas development CoreWeave is pursuing alongside Nvidia-backed Poolside, targeting up to 2GW of AI compute powered by on-site natural gas generation. CoreWeave is anchoring the first 250MW, expected to be online by end-2026, with a further 500MW reserved for subsequent phases. Geographic expansion is also underway, with CoreWeave committed to a new Lancaster, Pennsylvania data center (up to $6B investment; initial 100MW, expandable to 300MW) and a further £1.5B UK commitment (September 2025) that brings total UK investment to £2.5B.

CoreWeave's vertical integration ambitions hit a setback when its proposed all-stock acquisition of data center operator Core Scientific—valued at approximately $9.0B and structured to add roughly 1.3GW of gross power capacity plus 1GW+ of expansion potential—was terminated in October 2025 after Core Scientific shareholders did not approve it. CoreWeave is pursuing power capacity through organic development and partnerships instead.

Risks

Customer concentration: Microsoft accounted for approximately 67% of FY2025 revenue, and while the backlog is diversifying toward OpenAI (~$22.4B), Meta (~$35.2B implied), and Anthropic, contract renegotiations, cancellations, or customers developing in-house compute capacity could materially impair revenue.

Debt burden: CoreWeave has raised approximately $28B in combined equity and debt over the 12 months through March 2026, including high-yield notes, a $2.6B secured term loan, and an $8.5B DDTL facility, alongside a $2.5B revolving credit facility. FY2025 net losses of $1.167B combined with these fixed-rate obligations constrain financial flexibility if demand softens.

GPU supply dependency: CoreWeave's competitive position rests on preferential Nvidia GPU allocations, a relationship deepened by Nvidia's $2B equity investment and $6.3B take-or-pay capacity backstop. If Nvidia shifts allocation priorities or faces its own supply constraints, CoreWeave's ability to fulfill backlog commitments and win new customers would be at risk.


DISCLAIMERS

This report is for information purposes only and is not to be used or considered as an offer or the solicitation of an offer to sell or to buy or subscribe for securities or other financial instruments. Nothing in this report constitutes investment, legal, accounting or tax advice or a representation that any investment or strategy is suitable or appropriate to your individual circumstances or otherwise constitutes a personal trade recommendation to you.

This research report has been prepared solely by Sacra and should not be considered a product of any person or entity that makes such report available, if any.

Information and opinions presented in the sections of the report were obtained or derived from sources Sacra believes are reliable, but Sacra makes no representation as to their accuracy or completeness. Past performance should not be taken as an indication or guarantee of future performance, and no representation or warranty, express or implied, is made regarding future performance. Information, opinions and estimates contained in this report reflect a determination at its original date of publication by Sacra and are subject to change without notice.

Sacra accepts no liability for loss arising from the use of the material presented in this report, except that this exclusion of liability does not apply to the extent that liability arises under specific statutes or regulations applicable to Sacra. Sacra may have issued, and may in the future issue, other reports that are inconsistent with, and reach different conclusions from, the information presented in this report. Those reports reflect different assumptions, views and analytical methods of the analysts who prepared them and Sacra is under no obligation to ensure that such other reports are brought to the attention of any recipient of this report.

All rights reserved. All material presented in this report, unless specifically indicated otherwise is under copyright to Sacra. Sacra reserves any and all intellectual property rights in the report. All trademarks, service marks and logos used in this report are trademarks or service marks or registered trademarks or service marks of Sacra. Any modification, copying, displaying, distributing, transmitting, publishing, licensing, creating derivative works from, or selling any report is strictly prohibited. None of the material, nor its content, nor any copy of it, may be altered in any way, transmitted to, copied or distributed to any other party, without the prior express written permission of Sacra. Any unauthorized duplication, redistribution or disclosure of this report will result in prosecution.