DayOne Enables 130kW GPU Cabinets

DayOne's 130kW cabinet design enables customers to deploy dense GPU clusters for AI workloads that conventional air-cooled data halls cannot accommodate.

High-density liquid cooling turns the data hall itself into part of the AI product, not just the real estate around it. Modern GPU racks can push past 100kW, beyond what conventional air systems can reliably remove from a room without extreme airflow and wasted power. DayOne's direct-to-chip and rear-door setup lets customers pack more training and inference hardware into each cabinet, so tenants buy megawatts and get usable AI capacity faster.
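The footprint effect of rack density can be sketched with simple arithmetic. The figures below are illustrative assumptions, not DayOne specifications: ~15kW/rack stands in for a typical air-cooled hall, and 130kW/rack is the design point cited in this report.

```python
import math

def racks_needed(it_load_kw: float, rack_density_kw: float) -> int:
    """Number of cabinets required to house a given IT load."""
    return math.ceil(it_load_kw / rack_density_kw)

IT_LOAD_KW = 10_000  # a hypothetical 10MW GPU deployment

air_racks = racks_needed(IT_LOAD_KW, 15)      # assumed air-cooled density
liquid_racks = racks_needed(IT_LOAD_KW, 130)  # 130kW design point

print(f"Air-cooled (~15kW/rack):    {air_racks} cabinets")   # 667 cabinets
print(f"Liquid-cooled (130kW/rack): {liquid_racks} cabinets")  # 77 cabinets
```

Under these assumptions, the same 10MW of IT load fits in roughly a tenth as many cabinets, which is what keeps clusters physically tight.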

  • Air cooling starts to break down when too much heat is concentrated in one rack. NVIDIA notes that Blackwell racks reach about 120kW, and cooling that with air alone becomes impractical. Vertiv maps rear-door and liquid-to-chip designs to roughly 140kW and beyond, which lines up with DayOne's 130kW-per-cabinet design point.
  • This changes the customer workflow. Instead of spreading GPU servers across more floor space to stay within thermal limits, a tenant can keep a cluster tighter, with shorter network links and more compute per megawatt leased. That matters for large training jobs where thousands of GPUs need to behave like one system.
  • The closest comparable in the research set is Nscale, which also builds liquid-cooled AI factories above 100kW per rack. The difference is that Nscale sells compute services on top of owned infrastructure, while DayOne mainly sells wholesale powered capacity, which makes cooling density a core part of its pricing power with hyperscalers.
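The "more compute per megawatt leased" point above can be sketched through power usage effectiveness (PUE). The PUE values here are generic illustrative assumptions, not figures from the report: liquid cooling typically spends less facility power on heat removal, leaving more of a fixed utility envelope for IT load.

```python
def it_capacity_mw(facility_power_mw: float, pue: float) -> float:
    """IT load supportable within a facility power budget at a given PUE
    (PUE = total facility power / IT power)."""
    return facility_power_mw / pue

FACILITY_MW = 10.0  # hypothetical grid allocation for one hall

air = it_capacity_mw(FACILITY_MW, 1.5)     # assumed PUE for an air-cooled hall
liquid = it_capacity_mw(FACILITY_MW, 1.2)  # assumed PUE for a liquid-cooled hall

print(f"Air-cooled:    {air:.2f} MW of IT load")     # 6.67 MW
print(f"Liquid-cooled: {liquid:.2f} MW of IT load")  # 8.33 MW
```

Under these assumed PUEs, the same grid connection yields about 25% more usable GPU power, which is why cooling architecture feeds directly into pricing power.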

The next step is a split market. Legacy halls will keep serving lower-density enterprise workloads, while new AI campuses win the biggest deployments by proving they can deliver 100kW-plus racks quickly and at scale. That makes cooling architecture, prefab delivery, and power access the main determinants of who captures the next wave of GPU cluster demand.