From Infrastructure to Intelligence Sovereignty

The $100B sovereign AI boom: how dependence deepened from infrastructure to intelligence itself

The shift marks the moment countries stopped relying on American companies just to store and run software, and started relying on them to think. Cloud dependence meant renting servers from AWS, Azure, or Google. Intelligence dependence means routing government and enterprise workflows through a tiny set of mostly U.S. model labs, with ChatGPT making frontier AI a mainstream utility and concentrating power even further up the stack.

  • The shift is visible in India. Even while backing a $1.1B IndiaAI Mission and local model efforts like Sarvam, key public systems such as CoWIN, DigiYatra, and DigiLocker still run on AWS. Many countries are trying to build national AI on top of foreign cloud foundations.
  • Europe has pushed furthest to break that pattern by backing vendors like Mistral, which sells private and on-premise deployments to governments and industrials and reached $400M ARR by February 2026. That is what sovereign AI looks like in practice: local models plus local deployment and support.
  • The real strategic tension is that even sovereignty spending can reinforce U.S. control. American clouds now offer localized sovereign products, so governments can satisfy residency rules while still depending on U.S. software, operations, and upgrade cycles underneath.

The next phase is a race to replace borrowed intelligence with domestic stacks spanning models, deployment, and compute. The winners will be labs that are not just politically local, but good enough and cheap enough to displace OpenAI and the hyperscalers in real government and enterprise workloads.