Quality-first search for AI agents
Will Bryk, CEO of Exa, on building search for AI agents
Exa is building a better filter before it builds a bigger index. The near-term goal is not to crawl every possible page, but to cover the parts of the web that matter most for AI workflows, like company pages, research papers, blogs, and news, where retrieval quality matters more than raw page count. That matters especially for agent use cases, where one bad source can poison the whole answer.
-
Exa describes the useful web as a few billion high-quality pages, not the full long tail of junk pages, spam, and thin SEO content. Its crawl strategy starts with that useful subset and expands over time, because retrieval quality is the product, not comprehensive coverage on day one.
-
Exa's quality criteria go beyond social sharing. Shares can be one signal that a document is worth reading, but the company is building learned quality models that score pages and filter bad ones before retrieval, rather than relying on manual rules or human labeling.
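The filter-before-retrieval idea can be sketched in a few lines. This is a toy stand-in, not Exa's model: the feature names (`word_count`, `outbound_links`, `is_doorway`), weights, and threshold are all illustrative assumptions, where a real system would use a trained classifier over learned features.

```python
# Sketch of a pre-retrieval quality filter. A learned model would replace
# quality_score(); the signals and weights below are hypothetical.

def quality_score(page: dict) -> float:
    """Toy stand-in for a learned quality model: combine a few
    hypothetical page signals into a score in [0, 1]."""
    score = 0.0
    if page.get("word_count", 0) >= 300:
        score += 0.4  # substantive length, not a thin page
    if page.get("outbound_links", 0) <= 50:
        score += 0.3  # not a link farm
    if not page.get("is_doorway", False):
        score += 0.3  # no doorway/SEO-spam pattern detected
    return score

def filter_index(pages: list[dict], threshold: float = 0.6) -> list[dict]:
    """Keep only pages scoring above the threshold, so low-quality
    documents never reach the retrieval stage at all."""
    return [p for p in pages if quality_score(p) >= threshold]

pages = [
    {"url": "https://example.com/paper", "word_count": 2400, "outbound_links": 12},
    {"url": "https://example.com/spam", "word_count": 80, "outbound_links": 400,
     "is_doorway": True},
]
kept = filter_index(pages)  # only the substantive page survives
```

The point of the pattern is where the filter sits: pages are scored once at index time, so bad documents are excluded before any query touches them, instead of being demoted at ranking time.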
-
That approach lines up with how customers use the product in practice. Users value Exa for niche, semantically matched results and large result sets. Customers like Ecosia route only selected complex queries to Exa, which shows that high precision on high-value searches matters more than trying to answer every query type at once.
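The Ecosia-style routing pattern amounts to a small dispatch step in front of two backends. The heuristic and backend names below are assumptions for illustration; a production router would likely use a trained classifier rather than surface features.

```python
# Sketch of selective query routing: only queries judged complex are sent
# to the semantic search backend, everything else stays on the default
# engine. Backend names and the heuristic are hypothetical.

def is_complex(query: str) -> bool:
    """Crude heuristic: long, multi-word or question-form queries tend to
    benefit most from semantic retrieval."""
    words = query.split()
    return len(words) >= 6 or query.strip().endswith("?")

def route(query: str) -> str:
    """Return which backend should handle this query."""
    return "semantic_backend" if is_complex(query) else "default_backend"
```

The design choice this illustrates: the semantic engine does not need to win on every query type, only on the complex, high-value slice the router sends it.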
From here, the likely path is widening coverage without giving up the quality filter. As AI agents take on more research and workflow tasks, the winning search infrastructure will be the system that can add more of the web while still suppressing bad pages, ranking by meaning, and keeping retrieval clean enough for downstream models to trust.