SpaceX building orbital data centers
This shows SpaceX trying to turn its biggest internal strength, cheap launch, into a new compute business. The basic idea is simple. Earth-based data centers are increasingly limited by grid hookups, cooling water, and long power-approval queues. In orbit, satellites can get near-constant solar power, but the economics only work if Starship cuts launch costs enough that flying up heavy chips, power systems, and radiators becomes affordable.
-
The bottleneck this is meant to solve is concrete. Large AI clusters need huge amounts of electricity and cooling, and operators can wait 3 to 5 years for utility interconnection on Earth. The orbital version swaps grid access for solar collection but adds a new thermal problem: in vacuum there is no air or water to carry heat away, so every watt of waste heat must be radiated out through large radiator panels.
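To see why radiators dominate the design, a back-of-envelope sizing using the Stefan-Boltzmann law is useful. Every number here (1 MW of waste heat, a 300 K panel temperature, 0.9 emissivity) is an illustrative assumption, not a SpaceX figure:

```python
# Back-of-envelope radiator sizing for an orbital compute module.
# All input numbers are illustrative assumptions, not SpaceX figures.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area_m2(heat_w: float, temp_k: float,
                     emissivity: float = 0.9, sides: int = 2) -> float:
    """Panel area needed to radiate `heat_w` watts at temperature `temp_k`.

    Ignores absorbed sunlight and Earth infrared, so this is a lower
    bound. A flat panel radiates from both faces (sides=2).
    """
    return heat_w / (sides * emissivity * SIGMA * temp_k ** 4)

# Assume 1 MW of waste heat rejected at 300 K (~27 C coolant loop).
area = radiator_area_m2(1_000_000, 300.0)
print(f"{area:.0f} m^2 of radiator panel")  # prints "1210 m^2 of radiator panel"
```

A megawatt-class cluster needs on the order of a thousand square meters of radiator under these assumptions, which is why radiator mass and deployment, not solar collection, is often the harder part of the thermal budget.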
-
SpaceX is unusually well positioned because it can bundle three layers at once. It can launch the hardware with Starship, move data through the existing Starlink network, and fill the first racks with demand from xAI and Tesla workloads before selling spare capacity to outside cloud customers.
-
The closest comparison is not AWS or Google Cloud directly, but a GPU cloud like CoreWeave with its own captive demand. The difference is that SpaceX would own the rocket, the satellite platform, and the network, which means every improvement in Starship cost lowers not just launch expense, but the delivered cost of compute.
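The claim that cheaper Starship launch lowers the delivered cost of compute can be made concrete with a toy capex model. Every number below (hardware cost per GPU slot, kilograms of bus, solar, and radiator mass attributed to each GPU, and both launch prices) is a hypothetical assumption for illustration only:

```python
# Toy model: how launch cost per kg feeds into the delivered cost of
# one orbital GPU slot. All numbers are assumptions, not SpaceX figures.

def orbital_capex_per_gpu(launch_cost_per_kg: float,
                          kg_per_gpu: float = 50.0,
                          gpu_hw_cost: float = 30_000.0) -> float:
    """Upfront cost of one delivered GPU slot.

    kg_per_gpu folds in the GPU's share of satellite bus, solar
    arrays, and radiators; gpu_hw_cost is the chip and server hardware.
    """
    return gpu_hw_cost + launch_cost_per_kg * kg_per_gpu

# Compare a Falcon-class price (~$3,000/kg to LEO) with an
# aspirational Starship target (~$200/kg).
for name, per_kg in [("Falcon-class", 3_000), ("Starship target", 200)]:
    print(f"{name}: ${orbital_capex_per_gpu(per_kg):,.0f} per GPU slot")
# prints:
# Falcon-class: $180,000 per GPU slot
# Starship target: $40,000 per GPU slot
```

Under these assumptions, launch goes from roughly 5x the hardware cost to a modest surcharge, which is the sense in which every step down the Starship cost curve flows directly into cheaper delivered compute.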
If Starship reaches the flight rate and cost curve needed for frequent heavy launches, space-based compute could become a fourth pillar after launch, Starlink, and Starshield. That would push SpaceX further up the stack, from moving payloads for others to selling power, bandwidth, and AI compute as an integrated orbital utility.