Custom Chips Erode Nvidia Cloud Demand
Fluidstack
The real risk is not that demand for AI compute disappears, but that the most valuable workloads get pulled into chip-specific stacks that bypass generic Nvidia rental clouds. Fluidstack wins today by packaging Nvidia clusters fast, with bare-metal access, high-performance networking, and large private cloud contracts. If top customers can train or serve models more cheaply on TPUs or Trainium, some of that spend shifts from renting standard GPU clusters to buying into Google- or AWS-specific software and hardware rails.
-
Fluidstack is already exposed to this shift from both sides. Its core business was built around renting Nvidia GPUs from partner data centers, but it has also partnered with Google to host TPUs in third-party facilities. That shows custom chips are no longer confined to hyperscaler-owned campuses; they are starting to leak into the same external infrastructure footprint where GPU clouds compete.
-
The economic wedge is simple. Fluidstack's private cloud model works because customers sign two- to three-year contracts for dedicated GPU clusters, often with 25% to 50% paid up front. If Trainium2 or TPU pods deliver meaningfully lower cost per unit of model training or inference, the first thing that comes under pressure is the premium customers will pay for dedicated Nvidia capacity.
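To make the wedge concrete, here is a minimal sketch of the arithmetic. All figures are hypothetical assumptions for illustration, not numbers from this analysis: the list rate, cost of capital, and the 30% custom-silicon discount are placeholders, and the opportunity cost of the prepayment is approximated as simple interest.

```python
def effective_hourly_cost(list_rate, upfront_fraction, cost_of_capital, term_years):
    """Effective hourly cost of a prepaid dedicated-cluster contract.

    Prepaying a fraction of the contract ties up cash that could
    otherwise earn the customer's cost of capital; we fold that
    carry cost back into the hourly rate.
    """
    hours = term_years * 8760
    total = list_rate * hours
    # Simple-interest approximation of the carry on the upfront payment.
    carry = total * upfront_fraction * cost_of_capital * term_years
    return (total + carry) / hours

# Assumed figures (illustrative only): $2.50/GPU-hour list rate,
# 40% upfront, 8% cost of capital, 3-year term.
nvidia_cost = effective_hourly_cost(2.50, 0.40, 0.08, 3)

# Hypothetical: a TPU/Trainium pod with equal training throughput
# at 30% lower cost per unit of work.
custom_cost = nvidia_cost * 0.70

premium = (nvidia_cost - custom_cost) / custom_cost
print(f"effective Nvidia cost:     ${nvidia_cost:.2f}/GPU-hr")
print(f"custom-silicon equivalent: ${custom_cost:.2f}/GPU-hr")
print(f"implied premium to defend: {premium:.0%}")
```

Under these assumptions the prepayment alone adds roughly 10% to the effective rate, and the customer is being asked to defend a premium of over 40% against the cheaper stack, which is the gap the paragraph above describes.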
-
This is already how the market is segmenting. CoreWeave and Crusoe are scaling around huge Nvidia-backed data center buildouts, while hyperscalers are pairing custom chips with managed software, model tooling, and enterprise distribution. The more AI buyers want a full stack that bundles chips, training frameworks, and inference software, the less room there is for standalone GPU resellers that mainly differentiate on speed of provisioning.
-
The next phase of the market is likely to split into two lanes. One lane is very large, chip-specific platforms run by hyperscalers. The other is independent infrastructure providers that stay relevant by being multi-architecture, sovereign, or operationally faster than the big clouds. Fluidstack's path is to evolve from Nvidia rental broker into a broader AI infrastructure operator that can host whatever accelerator customers want.