VAST pivots from storage to AI stack


Renen Hallak, CEO of VAST Data, on AI agents creating infinite storage demand

Interview
Two years ago, VAST DataStore only was probably 70%. Today it has flipped.

The mix shift shows VAST evolving from a storage vendor into a broader AI infrastructure control point. Two years ago most customers bought only the file and object storage layer; today roughly 70 percent use a bundle of three or more products. That change tracks VAST's additions of a database for vector embeddings and a data engine that triggers inference and other jobs, with the broader stack now sold into fast-growing AI labs, neoclouds, and enterprises building internal AI apps.

  • The product ladder became a bundle because the workflow became one system. In VAST InsightEngine, a document comes in through DataEngine, embeddings are stored in DataBase, and users query it through a chatbot with role-based access. That is not storage plus extras; it is an application stack sold as one solution.
  • The customers pushing this shift are the newest and fastest growing ones. Early adopters often started with DataStore only because that was the only product available. AI labs and AI clouds more often adopted the full stack from day one, and enterprise AI deployments also tend to consume multiple layers together.
  • This also changes who VAST competes with. At the storage layer it is compared with Weka, DDN, Pure, Dell, and NetApp. Once customers buy the bundle, VAST is also replacing pieces of the database and orchestration stack, including workloads that might otherwise sit on older data warehouse architectures.
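The ingest-embed-query workflow behind the first bullet can be sketched in miniature. This is an illustrative toy, not VAST's API: the class and function names are hypothetical stand-ins for the DataEngine and DataBase roles, the embedding is a fake hash-based vector, and "role-based access" is a simple allow-list filter applied at query time.

```python
# Hypothetical sketch of the ingest -> embed -> query flow described above.
# Nothing here is VAST code: toy_embed, VectorDB, and ingest are invented
# stand-ins for the embedding model, DataBase, and DataEngine roles.

from dataclasses import dataclass, field

def toy_embed(text: str, dim: int = 8) -> list[float]:
    """Stand-in for a real embedding model: deterministic hash-based vector."""
    return [((hash(text) >> (i * 4)) & 0xF) / 15.0 for i in range(dim)]

@dataclass
class VectorDB:
    """Plays the role of the vector database layer."""
    rows: list = field(default_factory=list)

    def insert(self, text: str, roles: set[str]) -> None:
        self.rows.append((toy_embed(text), text, roles))

    def query(self, question: str, role: str, k: int = 1) -> list[str]:
        """Nearest-neighbour search restricted to rows the role may see."""
        q = toy_embed(question)
        visible = [(sum((a - b) ** 2 for a, b in zip(q, v)), t)
                   for v, t, roles in self.rows if role in roles]
        return [t for _, t in sorted(visible)[:k]]

def ingest(db: VectorDB, document: str, roles: set[str]) -> None:
    """Plays the role of the event-driven ingest layer: split and store."""
    for chunk in document.split(". "):
        if chunk:
            db.insert(chunk, roles)

db = VectorDB()
ingest(db, "Q3 revenue grew 40%. Headcount is flat", roles={"finance"})
ingest(db, "The office moves in May", roles={"finance", "staff"})

# A staff user can only retrieve chunks tagged for staff, so the
# finance-only document is invisible to this query.
assert db.query("when is the office move", role="staff") == ["The office moves in May"]
```

The point of the sketch is the last line: the access check sits inside the query path itself, which is what makes the bundle an application stack rather than storage with extras.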

Going forward, more of VAST's growth should come from share of stack, not just share of storage. As AI workloads spread from model builders into enterprises and cloud providers, the winning vendors will be the ones that sit in the middle of every read, write, query, and policy check. The bundle mix suggests VAST is already moving into that role.