Deep Research Agents Prioritize Planning
Product manager at Cohere on enterprise AI search infrastructure and deep research agents
The real product advantage in deep research is not faster search; it is better decomposition of a messy question into the right subquestions in the right order. In practice, Parallel inside Manus starts by building a frame for the job, then fills it in segment by segment, which is why it can produce detailed tables, tradeoff breakdowns, and broad context instead of a thin answer. That extra planning step also explains why a good report can take 10 to 15 minutes rather than seconds.
-
This is the difference between search and research. A search API mostly returns relevant pages. A deep research agent first decides what has to be investigated (in a relocation case: rent, schools, taxes, and neighborhood tradeoffs), then runs those threads and stitches them into one report.
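The plan-then-fill loop described above can be sketched in a few lines. This is a toy illustration, not any vendor's actual implementation: `decompose` stands in for an LLM planner proposing the frame, and `run_thread` stands in for a retrieval-plus-synthesis step, both hypothetical names chosen for this example.

```python
from dataclasses import dataclass, field

@dataclass
class ResearchPlan:
    question: str
    subquestions: list[str]
    findings: dict[str, str] = field(default_factory=dict)

def decompose(question: str) -> ResearchPlan:
    # Hypothetical planner: a real agent would ask an LLM to propose the
    # frame. Here we hard-code the relocation example from the text.
    topics = ["rent", "schools", "taxes", "neighborhood tradeoffs"]
    return ResearchPlan(question, [f"{question}: {t}" for t in topics])

def run_thread(subquestion: str) -> str:
    # Stand-in for one research thread (search, read, summarize).
    return f"findings for '{subquestion}'"

def research(question: str) -> str:
    plan = decompose(question)            # 1. build the frame first
    for sq in plan.subquestions:          # 2. fill it in segment by segment
        plan.findings[sq] = run_thread(sq)
    # 3. stitch the threads into one report
    return "\n".join(f"- {sq}: {ans}" for sq, ans in plan.findings.items())

report = research("Should I relocate to Austin?")
print(report)
```

The point of the structure is that the report's shape exists before any retrieval happens, which is what makes the output a table of tradeoffs rather than a single summarized page.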
-
The same pattern is showing up across the market. OpenAI describes deep research as a system for multi-step exploration across many sources, and Manus later pushed this further with Wide Research, which fans the work out across up to 100 parallel sub-agents. The category is moving toward research planners, not just better retrieval.
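The fan-out idea is mechanically simple: once a plan exists, its subquestions are independent and can run concurrently. A minimal sketch, assuming a hypothetical `sub_agent` function as the per-thread worker and a cap mirroring the up-to-100 sub-agent limit mentioned above:

```python
from concurrent.futures import ThreadPoolExecutor

def sub_agent(task: str) -> str:
    # Stand-in for one independent research thread (search + summarize).
    return f"result for {task}"

def wide_research(tasks: list[str], max_agents: int = 100) -> list[str]:
    # Fan tasks out across parallel workers, capped at max_agents,
    # then gather results back in the original task order.
    with ThreadPoolExecutor(max_workers=min(max_agents, len(tasks))) as pool:
        return list(pool.map(sub_agent, tasks))

results = wide_research([f"city {i} cost of living" for i in range(8)])
print(len(results))  # 8
```

Because each sub-agent is I/O-bound (waiting on searches), wall-clock time scales with the slowest thread rather than the sum, which is why breadth gets cheap once the planning layer exists.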
-
The bottleneck is shifting from finding web pages to finding trustworthy corpora. The interview points to medicine, finance, and law because open web search increasingly surfaces recycled SEO content. Products with direct access to journals, filings, and proprietary databases can turn that same planning loop into more reliable outputs.
Going forward, the winning deep research systems will look less like generic web search and more like an analyst that knows which database to open first. As model labs and agent products copy the planning layer, differentiation will move to domain specific data access, workflow integration, and the ability to turn long running research into dependable work product inside real enterprise tasks.