Few Providers Will Dominate AI Search

Will Bryk, CEO of Exa, on building search for AI agents

"Assuming that data is accessible, I expect there to be a couple of winners, not a huge number."

AI search looks structurally more like cloud infrastructure than like consumer apps: scale, indexing depth, and model-training costs push the market toward a few providers that serve everyone else. In practice, customers are not buying a chatbot; they are buying a web retrieval engine that can return huge result sets, extract page content, and stay fresh enough for agents and data pipelines that run all day.

  • The clearest scale advantage is on raw retrieval. One Exa power user runs 5,000 searches a day, pulls 50,000 to 100,000 results, and values Exa mainly for returning up to 10,000 results per query with full text, which is hard to replicate with lighter search APIs.
  • That creates a split between infrastructure winners and feature winners. Parallel can do better agent-style synthesis in some cases, but buyers still choose Exa when they need deep coverage, freshness, and machine-readable content rather than just a written answer.
  • The distribution side is likely to stay concentrated too. Ecosia routes roughly 500,000 daily AI overview queries through Exa, spends about $300,000 per month, and sees search quality across vendors converging, which makes scale economics, pricing, and integration speed more decisive than small quality gaps.
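The unit economics in the bullets above can be sanity-checked directly. A minimal back-of-the-envelope sketch, using only the figures cited in the text and assuming a 30-day month (the month length is an assumption, not from the interview):

```python
# Back-of-the-envelope unit economics from the figures quoted above.
# All inputs are numbers cited in the text; DAYS_PER_MONTH is an assumption.

ECOSIA_QUERIES_PER_DAY = 500_000
ECOSIA_SPEND_PER_MONTH = 300_000  # USD
DAYS_PER_MONTH = 30               # assumption

def cost_per_query(queries_per_day: int, spend_per_month: float,
                   days: int = DAYS_PER_MONTH) -> float:
    """Implied price per query, given daily volume and monthly spend."""
    return spend_per_month / (queries_per_day * days)

def results_per_search(searches: int, results_low: int, results_high: int):
    """Average results consumed per search, as a (low, high) range."""
    return results_low / searches, results_high / searches

# Ecosia: ~$0.02 per AI overview query at 500k queries/day.
print(f"${cost_per_query(ECOSIA_QUERIES_PER_DAY, ECOSIA_SPEND_PER_MONTH):.3f} per query")

# Power user: 5,000 searches/day pulling 50k-100k results,
# i.e. 10-20 results consumed per search on average.
print(results_per_search(5_000, 50_000, 100_000))
```

At roughly two cents per query, small per-query quality gaps matter less than pricing and volume capacity, which is the convergence point the Ecosia example illustrates.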

The next step is a search stack in which a few large providers become the default back end for agents, copilots, and answer engines, while publishers optimize less for blue links and more for being crawlable, extractable, and citable inside machine-generated answers. The winner will be the provider that best combines index scale, low latency, and reliable content access.