VAST Default Storage at Scale
Renen Hallak, CEO of VAST Data, on AI agents creating infinite storage demand
This signals that VAST has become the default storage layer once an AI cloud gets big enough that legacy enterprise storage can no longer feed GPUs fast enough. At small scale, teams can still piece together disks, file systems, and separate data tools. At neocloud scale, they need one system that can serve files, objects, and database queries from the same flash cluster, which is exactly where VAST is positioned.
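To make the convergence concrete, here is a minimal sketch of one dataset reached through three interfaces at once, assuming a cluster that mounts over NFS, exposes an S3-compatible object endpoint, and answers SQL through a Trino-style interface. The endpoint, mount path, bucket, and table names are all hypothetical.

```python
# Sketch: one flash cluster, three access paths to the same data.
# Endpoints, mount paths, bucket and table names are hypothetical.
import boto3
import trino

# 1. File access: the cluster is mounted over NFS like any file system.
with open("/mnt/datasets/shards/000042.tar", "rb") as f:
    shard_via_nfs = f.read()

# 2. Object access: the same shard fetched over the S3 protocol.
s3 = boto3.client("s3", endpoint_url="https://data.cluster.internal")
shard_via_s3 = s3.get_object(Bucket="datasets", Key="shards/000042.tar")["Body"].read()

# 3. SQL access: query shard metadata in place, with no ETL copy to a warehouse.
conn = trino.dbapi.connect(
    host="data.cluster.internal", port=443, http_scheme="https",
    user="ml-pipeline", catalog="datasets", schema="training",
)
cur = conn.cursor()
cur.execute("SELECT shard_key, num_samples FROM shards WHERE split = 'train'")
for shard_key, num_samples in cur.fetchall():
    print(shard_key, num_samples)
```

The point is not the specific APIs but that nothing gets copied between systems: the file path, the object key, and the SQL table are all views onto the same flash-resident data.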
-
The real threshold is operational pain, not customer count. When a provider starts running large shared GPU fleets, many users hit the same data at once, and the bottleneck shifts from compute to storage throughput, metadata lookups, and data movement between systems. VAST sells into that breaking point, with new-customer commitments averaging above $1M.
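A back-of-the-envelope calculation shows how quickly the bottleneck moves once the fleet is shared; the fleet size and per-GPU read rate below are illustrative assumptions, not vendor figures.

```python
# Illustrative arithmetic: aggregate read demand of a shared GPU fleet.
# Both inputs are assumptions chosen for the sake of the example.
gpus = 8192                # shared fleet size
gb_s_per_gpu = 1.5         # sustained training-data read per GPU, in GB/s
aggregate_tb_s = gpus * gb_s_per_gpu / 1000
print(f"Aggregate read demand: {aggregate_tb_s:.1f} TB/s")  # ~12.3 TB/s
```

Even with conservative inputs, the total lands far beyond what a traditional dual-controller array can serve, so the provider either adopts a scale-out flash data plane or leaves expensive GPUs idle waiting on I/O.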
-
This fits the shape of the GPU cloud market. Larger providers like CoreWeave serve customers reserving thousands of GPUs on long-term contracts, while smaller clouds and developer-focused platforms operate with lighter, more flexible stacks. The biggest operators are the ones most likely to standardize on infrastructure components that remove bottlenecks across many tenants and workloads.
-
Standardization matters because VAST is not just selling a faster storage box. It is trying to replace multiple layers at once (storage, catalog, SQL access, and data processing), so once a cloud adopts it as the shared data plane, customer workloads can move between providers or on-prem deployments without changing the underlying data system.
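A sketch of what that portability looks like in practice, assuming each deployment exposes the same S3-compatible interface; the environment variable and endpoint URLs are hypothetical.

```python
# Sketch: with a standardized data plane, moving a workload between
# providers is a configuration change, not a code change.
# The variable name and endpoint URLs are hypothetical.
import os
import boto3

# Provider A:  DATA_ENDPOINT=https://objects.neocloud-a.example
# On-prem:     DATA_ENDPOINT=https://objects.corp.internal
endpoint = os.environ["DATA_ENDPOINT"]

s3 = boto3.client("s3", endpoint_url=endpoint)
manifest = s3.get_object(Bucket="training-data", Key="manifest.json")["Body"].read()
```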
As AI clouds consolidate, the winners are likely to keep converging on a small number of infrastructure standards, and VAST is positioned to be one of them. That would turn neocloud growth into distribution, with each large cloud deployment pulling more enterprise AI training, inference, and agent workloads onto the same underlying data platform.