SpaceX as Orbital CoreWeave
SpaceX is trying to turn AI compute from a customer of launch services into a second platform business layered on top of launch and Starlink. The core idea is simple: if Starship makes payload to orbit cheap enough, SpaceX can sell the rocket ride, the in-orbit network connection through Starlink, and eventually the compute itself, with xAI acting as the first guaranteed tenant and proving demand before outside cloud buyers arrive.
This looks like CoreWeave in one key sense. CoreWeave rents scarce GPU capacity to AI builders and wraps that hardware in production tooling, reaching $229M in revenue in 2023 and $1.9B in 2024. SpaceX is applying the same infrastructure logic to orbit, but with vertical control over launch and network transport that CoreWeave does not have.
The economic unlock is launch cost. SpaceX frames Starship V3 as the vehicle that could push launch toward $10 to $100 per kg, versus far higher historical costs, and that collapse is what would make putting heavy power and compute payloads in orbit worth modeling seriously. Without it, space data centers stay a science project.
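The cost sensitivity can be made concrete with a toy calculation. The only figure below taken from the text is the $10 to $100 per kg Starship V3 target range; the rack mass, hardware cost, and the ~$2,700/kg "roughly Falcon 9-class" comparison point are illustrative assumptions, not SpaceX numbers.

```python
# Back-of-envelope: what share of an orbital compute rack's cost is the launch?
# RACK_MASS_KG and RACK_HARDWARE_COST are illustrative assumptions.
RACK_MASS_KG = 2_000            # assumed: GPU rack + solar + thermal hardware
RACK_HARDWARE_COST = 3_000_000  # assumed hardware cost per rack, USD

# Starship V3 target range ($10 and $100/kg) vs an assumed ~Falcon 9-class price.
for cost_per_kg in (10, 100, 2_700):
    launch = RACK_MASS_KG * cost_per_kg
    share = launch / (launch + RACK_HARDWARE_COST)
    print(f"${cost_per_kg:>5}/kg -> launch cost ${launch:>9,} ({share:.1%} of total)")
```

Under these assumptions, launch drops from roughly two thirds of the total at legacy prices to a rounding error at the low end of the target range, which is the whole argument for why the cost collapse changes what is worth flying.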
Among GPU infrastructure companies, the closest analog is Crusoe, which used an energy edge to build AI compute economics that incumbents could not easily match. Crusoe sited compute next to stranded gas; SpaceX is aiming for an even bigger step change, moving compute to near-constant solar power in orbit with Starlink as the data pipe.
If Starship reliability improves and xAI keeps scaling, SpaceX can be valued less like a rocket contractor and more like an AI infrastructure stack. That would make orbital compute less a side bet and more a new revenue layer, with Starlink becoming the default network fabric for moving AI workloads between Earth and orbit.