Revenue
Starcloud has not publicly disclosed revenue or ARR figures as of May 2026. The company describes Starcloud-2 as its first commercial mission, with full operations targeted for 2027, and Crusoe has indicated it plans to offer limited GPU capacity from space via a Starcloud satellite in early 2027.
Given that Starcloud launched its first on-orbit demonstrator only in November 2025 and remains pre-commercial, meaningful recurring compute revenue is unlikely before 2027. The clearest near-term commercial signal is a set of committed contracts with Earth-observation spacecraft operators, alongside strategic relationships with Nvidia, AWS, and Google Cloud. Crusoe is the most prominent publicly named platform customer, with deployments planned for late 2026.
The revenue model implied by the roadmap has three layers: hosted compute sold to other spacecraft and satellite operators, sovereign cloud and storage services for terrestrial buyers, and eventually large-scale capacity reservations from hyperscalers structured more like energy offtake agreements than traditional cloud contracts. The first layer appears closest to commercialization, while the third represents the longer-term revenue opportunity.
Valuation & Funding
Starcloud raised a $170 million Series A in March 2026 at a $1.1 billion post-money valuation, reaching unicorn status 17 months after demo day. Benchmark led the initial tranche of the round, with EQT Ventures co-leading an extension. Other participants included Macquarie Capital, NFX, Nebular, Adjacent, 776 Ventures, Fuse Ventures, Manhattan West, and Monolith Power Systems.
Before the Series A, Starcloud had raised approximately $34 million across earlier rounds. Earlier backers include In-Q-Tel, Y Combinator, Soma Capital, and FUSE, alongside scout funds affiliated with Andreessen Horowitz and Sequoia. Total disclosed funding across all rounds stands at roughly $204 million.
Product
Starcloud builds and operates data centers in orbit. Instead of placing GPU servers in a terrestrial warehouse that requires grid power, cooling water, and years of permitting, it places compute hardware on a satellite, where solar energy is abundant and heat can be radiated into deep space.
The first spacecraft, Starcloud-1, launched in November 2025 carrying the first NVIDIA H100 GPU ever operated in orbit, roughly 100 times more GPU power than had previously been flown in space. In December 2025, it became the first satellite to run inference on a version of Gemini and the first spacecraft to train a language model, using nanoGPT. At about 60 kilograms, roughly the size of a small fridge, it served as an early proof point that data-center-class AI hardware can survive and operate under the radiation, thermal, and power conditions of orbit.
Starcloud-2 is the first commercial mission, expected to be fully operational in sun-synchronous orbit by 2027. It combines a GPU cluster, persistent storage, and 24/7 access in a smallsat form factor with proprietary thermal and power systems. The orbit choice is material: a dawn-dusk sun-synchronous orbit keeps the spacecraft in near-continuous sunlight, giving the solar arrays a capacity factor above 95% and reducing the need for large battery banks.
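A back-of-envelope sketch shows why the dawn-dusk orbit matters for battery sizing. The 100 kW payload, orbit period, and eclipse duration below are illustrative assumptions for a generic low Earth orbit, not Starcloud figures; only the >95% sun-synchronous capacity factor comes from the text above.

```python
# Back-of-envelope: battery energy needed to ride through eclipse in a
# conventional low Earth orbit vs. a dawn-dusk sun-synchronous orbit (SSO).
# Payload and orbit figures are illustrative assumptions, not Starcloud data.

payload_kw = 100.0   # hypothetical compute payload power draw
orbit_min = 92.0     # typical LEO orbital period, minutes
eclipse_min = 35.0   # typical eclipse duration per orbit in a non-SSO LEO

# Energy the battery must supply every orbit in a conventional LEO.
ride_through_kwh = payload_kw * eclipse_min / 60.0

# Fraction of each orbit the solar arrays can generate power.
leo_capacity_factor = 1 - eclipse_min / orbit_min  # ~62% in this example
sso_capacity_factor = 0.95                         # lower bound cited for dawn-dusk SSO

print(f"LEO ride-through energy per orbit: {ride_through_kwh:.1f} kWh")
print(f"LEO solar capacity factor: {leo_capacity_factor:.0%}")
print(f"Dawn-dusk SSO capacity factor: >= {sso_capacity_factor:.0%}")
```

Under these assumptions, a conventional LEO spacecraft would need roughly 58 kWh of battery ride-through per orbit for a 100 kW payload, mass that a near-continuously sunlit dawn-dusk orbit largely avoids.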
Cooling uses liquid loops to pull heat away from the chips and move it to large deployable radiators, which shed it as infrared radiation into deep space. The architecture is modular: compute containers dock into a larger structure through a unified port that combines power, networking, and cooling, allowing capacity to be added incrementally rather than through a single monolithic spacecraft.
Starcloud-3, the next planned spacecraft, is designed at 200 kilowatts and three tons, built around the assumption that Starship-era launch economics will make it cost-competitive with terrestrial data centers.
Business Model
Starcloud is a vertically integrated orbital infrastructure platform, owning spacecraft design, power generation, thermal systems, and compute payload integration rather than outsourcing those layers. It sells compute and storage capacity produced in orbit, making it structurally closer to an infrastructure provider than to a traditional satellite manufacturer or a pure software cloud.
Its go-to-market is B2B, with a B2B2C layer emerging through platform partners. Crusoe's agreement to deploy its cloud software on a Starcloud satellite is the clearest example: Starcloud provides the orbital infrastructure while Crusoe handles the customer-facing cloud interface, billing, and distribution. That structure lets Starcloud concentrate on engineering while the partner contributes customer relationships and developer tooling.
The model depends more on cost structure than on software margins. Starcloud's thesis is that the total cost of AI compute is increasingly driven by energy, cooling, and siting constraints, and that orbital infrastructure can change those inputs. The cited advantages are near-continuous solar power at high capacity factors, passive radiative cooling with no water consumption, and modular deployment without terrestrial permitting cycles. The CEO has said orbital compute becomes cost-competitive with terrestrial data centers if launch costs reach around $500 per kilogram, a threshold tied to frequent Starship operations that the company targets around 2028-2029.
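The $500/kg threshold can be sanity-checked against the Starcloud-3 design point (200 kilowatts, three tons) described earlier. The 5-year amortization period below is an illustrative assumption; the mass, power, and launch-cost figures come from the text.

```python
# Back-of-envelope on the $500/kg launch-cost threshold, using the
# Starcloud-3 design point (200 kW, 3 metric tons) from the roadmap.
# The 5-year life and 95% capacity factor used for amortization are
# illustrative assumptions, not company guidance.

launch_cost_per_kg = 500.0   # threshold cited by the CEO
spacecraft_kg = 3000.0       # Starcloud-3 design mass
spacecraft_kw = 200.0        # Starcloud-3 design power

mass_per_kw = spacecraft_kg / spacecraft_kw           # kg of spacecraft per kW
launch_cost_per_kw = mass_per_kw * launch_cost_per_kg

# Amortize launch cost over an assumed 5-year life at a 95% capacity factor.
operating_hours = 5 * 8760 * 0.95
launch_cost_per_kwh = launch_cost_per_kw / operating_hours

print(f"Mass per kW: {mass_per_kw:.0f} kg")
print(f"Launch cost per kW: ${launch_cost_per_kw:,.0f}")
print(f"Amortized launch cost: ${launch_cost_per_kwh:.3f}/kWh")
```

At these assumptions, launch contributes about $7,500 per kilowatt, or roughly $0.18 per kilowatt-hour amortized, which frames why the thesis hinges on launch prices falling to that level rather than on hardware costs alone.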
The modular container architecture also supports recurring revenue: compute modules can be swapped, refreshed, and incrementally scaled, creating hardware refresh cycles and managed service revenue rather than one-off spacecraft sales. Over time, the model could expand from hosted compute into managed orbital clusters, resilient storage, and capacity reservation agreements with hyperscalers structured more like energy offtake contracts.
Competition
Starcloud competes in a category that barely existed two years ago and is now attracting capital from startups, aerospace primes, and large technology companies. The core competitive tension is between pure-play orbital compute specialists and vertically integrated platforms that can bundle launch, networking, and compute in a single procurement process.
Vertically integrated platforms
SpaceX is the most consequential long-term competitor. The FCC has accepted for filing a SpaceX application for an orbital data center system of up to one million satellites, explicitly linked to the existing Starlink optical inter-satellite link network. That would let SpaceX offer orbital compute bundled with launch, satellite manufacturing, and an operational communications backbone, a combination no pure-play startup can match. The risk is not just price competition, but category capture if SpaceX packages the full stack before Starcloud proves its economics.
Blue Origin presents a similar dynamic at an earlier stage. Its Blue Ring hosted-payload platform combines communications, power, data storage, and edge computing on a single in-space vehicle, while TeraWave targets high-throughput enterprise connectivity beginning in late 2027. Blue Origin does not need to outperform Starcloud on compute design. It only needs to make orbital hosting good enough while competing on bundled economics and government trust.
Connectivity-first orbital players
Kepler Communications is the most operationally advanced direct competitor in the near term. In March 2026, Kepler commissioned distributed on-orbit computing across its optical relay constellation, with 40 NVIDIA Jetson Orin modules across 10 satellites, making it the first commercially operational optical data relay network with cloud-like processing in space. Kepler's advantage is not raw GPU density, but compute built into a network-native, multi-node orbital mesh with workload isolation and failover.
Axiom Space is moving toward the role of orbital infrastructure landlord, targeting national security, commercial, and international customers with data center nodes riding inside the Kepler optical relay network. Together, Kepler and Axiom could normalize orbital data-center procurement before Starcloud's larger compute vision is proven, potentially locking in early customer relationships and compliance pathways.
Terrestrial substitutes and specialist startups
The largest commercial substitute for Starcloud in 2026 remains terrestrial GPU cloud. CoreWeave operates more than 250,000 GPUs across 43 data centers with over 3 gigawatts of contracted power capacity. Crusoe, which is simultaneously a Starcloud partner and a terrestrial competitor, markets an energy-first AI cloud and has 20 gigawatts of energy projects in development. Lambda Labs competes for the same AI infrastructure spend. These providers are adapting along the same dimensions Starcloud cites for orbit: vertical energy procurement, liquid cooling, and rapid modular deployment.
Among space-native specialists, Aetherflux is building an orbital data center constellation called Galactic Brain with a first commercial node targeted for early 2027, framed around national-security and power-resilience narratives that may provide a stronger path into defense budgets. Sophia Space is developing modular, passively cooled orbital compute under its TILE architecture and announced a commercial agreement with Kepler in April 2026 to deploy its software on Kepler satellites starting in late 2026. That is a sign that the thermal and cooling thesis Starcloud uses for differentiation is becoming common design language across the category. NVIDIA's March 2026 launch of standardized space-computing modules available to all of these players at the same time validates the market while compressing any moat based purely on access to cutting-edge hardware.
TAM Expansion
Starcloud's expansion logic runs in two directions: deeper into the space-native data processing market, where being in orbit has a clear advantage, and outward toward the larger terrestrial AI infrastructure market, where orbital economics still need to improve before they compete directly.
Space-native data processing
The most immediate TAM expansion is from general orbital compute into a dedicated infrastructure layer for the growing volume of data generated in space. Earth-observation satellites, SAR constellations like Capella Space, space stations, and autonomous spacecraft collectively generate terabytes of raw data daily that is expensive and slow to downlink. Starcloud's pitch is to process data at the source, downlinking only the insight rather than the raw file. NVIDIA's 2026 space-computing launch explicitly ties orbital AI to geospatial intelligence and autonomous space operations, which supports the case that compute should migrate toward where space-borne data originates.
This wedge creates nearer-term revenue that does not depend on orbital economics surpassing terrestrial cloud. The value proposition is architectural (compute-at-source for space data) rather than purely cost-based, so customers are paying for capability instead of waiting for launch costs to fall.
Sovereign and resilience infrastructure
Starcloud's positioning around secure, Earth-independent storage and sovereign cloud computing opens a second expansion path into government, defense, and regulated enterprise buyers. In-Q-Tel's backing indicates alignment with U.S. and allied public-sector demand for strategic compute autonomy. For these buyers, the value is not cheaper GPU-hours but jurisdictional separation, resilience against terrestrial infrastructure disruption, and off-world disaster recovery: use cases that can support premium pricing.
This customer segment also changes the mechanics of geographic expansion relative to terrestrial data center buildout. An orbital platform is inherently global, so serving a European government or an allied defense agency does not require building a local campus; it requires securing the right regulatory and telecom partnerships. As terrestrial AI infrastructure runs into local grid constraints and community resistance, the globally addressable nature of orbital compute becomes a structural advantage.
Hyperscaler capacity agreements
The longest-range TAM expansion is from niche orbital compute into large-scale capacity reservations with hyperscalers, structured more like energy offtake agreements than traditional cloud contracts. Reuters reported in March 2026 that Starcloud is working on binding energy offtake agreements with hyperscalers, which implies that future monetization may partially resemble infrastructure capacity reservation at scale. Global data center electricity consumption reached 415 TWh in 2024 and grew 17% in 2025; if terrestrial AI capacity remains grid-constrained, even a small share shift toward orbital infrastructure could support a very large business.
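The scale of even a small share shift can be made concrete. The 2024 consumption figure and 2025 growth rate are from the text above; the 1% shift toward orbit is an illustrative assumption.

```python
# Back-of-envelope on what a small share shift toward orbital compute
# would represent. The 1% shift is an illustrative assumption; the 2024
# baseline and 2025 growth rate are from the report.

twh_2024 = 415.0                          # global data center demand, 2024
growth_2025 = 0.17                        # reported 2025 growth
twh_2025 = twh_2024 * (1 + growth_2025)   # ~486 TWh

shift_share = 0.01                        # hypothetical 1% shift to orbit
shifted_twh = twh_2025 * shift_share

# Express as average continuous power: TWh/yr -> MWh/yr, divided by
# hours per year (8,760).
avg_mw = shifted_twh * 1e6 / 8760

print(f"2025 data center demand: {twh_2025:.0f} TWh")
print(f"1% shift: {shifted_twh:.1f} TWh/yr, or ~{avg_mw:.0f} MW continuous")
```

Even a 1% shift would correspond to roughly 550 MW of continuous orbital generating capacity, orders of magnitude beyond the 200 kW Starcloud-3 design point, which illustrates both the size of the prize and the buildout required to reach it.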
This expansion is contingent on launch economics. The CEO has pointed to frequent Starship operations, likely 2028-2029, as the threshold at which orbital power costs become competitive with terrestrial alternatives. The Crusoe partnership previews the distribution model for this phase: Starcloud provides the orbital infrastructure layer while established cloud operators package access for enterprise and hyperscaler customers through familiar interfaces and procurement workflows.
Risks
Launch cost dependency: Starcloud's economic thesis of cost-competitive orbital compute at scale depends on heavy-lift reusable launch reaching roughly $500 per kilogram, a threshold tied to frequent Starship operations that the company's CEO does not expect before 2028-2029. In the interim, the company must sustain itself on niche in-orbit processing revenue until its core cost advantage materializes.
Vertical integration threat: If orbital compute becomes a real market at scale, SpaceX could bundle launch, Starlink optical networking, satellite manufacturing, and compute capacity into a single procurement motion that no pure-play orbital infrastructure startup can replicate, potentially capturing the category before Starcloud proves its economics across multiple missions.
Orbital hardware reliability: A single H100 demonstration on Starcloud-1 shows technical feasibility at small scale but does not validate that large GPU clusters can deliver multi-year commercial uptime under radiation, thermal cycling, and maintenance constraints that are harder to manage than swapping servers in a terrestrial rack.
DISCLAIMERS
This report is for information purposes only and is not to be used or considered as an offer or the solicitation of an offer to sell or to buy or subscribe for securities or other financial instruments. Nothing in this report constitutes investment, legal, accounting or tax advice or a representation that any investment or strategy is suitable or appropriate to your individual circumstances or otherwise constitutes a personal trade recommendation to you.
This research report has been prepared solely by Sacra and should not be considered a product of any person or entity that makes such report available, if any.
Information and opinions presented in the sections of the report were obtained or derived from sources Sacra believes are reliable, but Sacra makes no representation as to their accuracy or completeness. Past performance should not be taken as an indication or guarantee of future performance, and no representation or warranty, express or implied, is made regarding future performance. Information, opinions and estimates contained in this report reflect a determination at its original date of publication by Sacra and are subject to change without notice.
Sacra accepts no liability for loss arising from the use of the material presented in this report, except that this exclusion of liability does not apply to the extent that liability arises under specific statutes or regulations applicable to Sacra. Sacra may have issued, and may in the future issue, other reports that are inconsistent with, and reach different conclusions from, the information presented in this report. Those reports reflect different assumptions, views and analytical methods of the analysts who prepared them and Sacra is under no obligation to ensure that such other reports are brought to the attention of any recipient of this report.
All rights reserved. All material presented in this report, unless specifically indicated otherwise is under copyright to Sacra. Sacra reserves any and all intellectual property rights in the report. All trademarks, service marks and logos used in this report are trademarks or service marks or registered trademarks or service marks of Sacra. Any modification, copying, displaying, distributing, transmitting, publishing, licensing, creating derivative works from, or selling any report is strictly prohibited. None of the material, nor its content, nor any copy of it, may be altered in any way, transmitted to, copied or distributed to any other party, without the prior express written permission of Sacra. Any unauthorized duplication, redistribution or disclosure of this report will result in prosecution.