Search as infrastructure for AI agents
Will Bryk, CEO of Exa, on building search for AI agents
The turning point came when builders started asking for an API right after ChatGPT launched: Exa stopped being a better search box and became infrastructure for the agent economy. That demand showed LLM apps were not just generating text; they were missing a live fact layer. Exa fit that gap by giving apps a way to fetch current web pages, pull full text, and ground answers in fresher, higher quality sources than a model could hold on its own.
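The "live fact layer" pattern described above can be sketched as a small pipeline: query a search API, pull full page text, and pack those sources into the model's prompt. This is a minimal illustration only; the function names and the response shape (a list of results with `url` and `text` fields) are assumptions, not Exa's actual client API.

```python
# Sketch of grounding an LLM answer in fresh web sources.
# search_web() is a stub standing in for a real search API call;
# its response shape (url + full text) is a hypothetical example.

def search_web(query: str) -> list[dict]:
    """Stand-in for a search API; a real client would hit an HTTP endpoint."""
    return [
        {"url": "https://example.com/a", "text": "Page A full text..."},
        {"url": "https://example.com/b", "text": "Page B full text..."},
    ]

def build_grounded_prompt(question: str, results: list[dict], max_chars: int = 4000) -> str:
    """Pack fetched page text into the prompt so the model answers from live sources."""
    context, used = [], 0
    for r in results:
        snippet = r["text"][: max_chars - used]  # respect a total context budget
        if not snippet:
            break
        context.append(f"Source: {r['url']}\n{snippet}")
        used += len(snippet)
    return (
        "Answer using only these sources:\n\n"
        + "\n\n".join(context)
        + f"\n\nQuestion: {question}"
    )

prompt = build_grounded_prompt("What changed this week?", search_web("latest news"))
```

The resulting `prompt` string would be sent to the model in place of relying on its frozen training data.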
The early users and the later AI customers were doing almost the same job. Both wanted niche, high quality information that keyword search missed. Exa’s semantic search was especially useful for vague or multi-modifier queries where the goal was to find the right set of pages, not just pages with matching words.
In practice, this demand turned into machine-scale usage, not human-scale usage. One Exa-based pipeline runs 5,000 queries a day and pulls 50,000 to 100,000 results for data creation. Another customer uses Exa on roughly 500,000 daily queries to generate AI overviews inside a consumer search engine.
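At machine scale, the workload looks less like single searches and more like a batch job: many queries fanning out to many results each, deduplicated by URL before feeding a data-creation pipeline. The sketch below illustrates only the fan-out-and-dedupe shape; `search()` is a stub and every name in it is hypothetical.

```python
# Batch query pattern behind machine-scale usage: thousands of queries,
# each returning many results, merged into one deduplicated corpus.
# search() is a stub; a real pipeline would call a search API here.

def search(query: str, num_results: int = 20) -> list[dict]:
    """Stand-in for an API call returning num_results pages per query."""
    return [
        {"url": f"https://example.com/{query}/{i}", "text": "..."}
        for i in range(num_results)
    ]

def run_batch(queries: list[str]) -> list[dict]:
    """Fan out over all queries, deduplicating results by URL."""
    seen, corpus = set(), []
    for q in queries:
        for r in search(q):
            if r["url"] not in seen:
                seen.add(r["url"])
                corpus.append(r)
    return corpus

corpus = run_batch([f"topic-{i}" for i in range(100)])  # 100 queries fan out to 2,000 pages
```

At 5,000 queries a day with 10 to 20 results each, this same loop lands in the 50,000 to 100,000 results-per-day range the pipeline above describes.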
The product split in this market is becoming clearer. Exa is strongest when a customer wants large volumes of raw results and full page content to feed its own agents or ranking logic. Parallel and similar tools are stronger when the buyer wants a slower, more packaged research workflow with synthesis built in.
This points toward a stack where more AI products treat search like a core dependency, similar to payments or cloud storage. The winners will be the providers that can supply fresh coverage, reliable extraction, and specialized data streams at API scale, because every useful agent eventually needs to look outside its own model weights and verify against the live world.