Manus and Parallel prioritize depth

Product manager at Cohere on enterprise AI search infrastructure and deep research agents

Interview
Manus, and by extension Parallel, take much longer. It takes a long time to get an output. However, the level of depth is also unparalleled.

Long wait times are the visible cost of a research system that is doing real planning before it starts writing. In practice, Manus with Parallel is not just fetching a few pages and summarizing them. It first maps the problem, breaks it into subquestions like rent, schools, taxes, or medical evidence, then gathers sources and assembles tables and tradeoffs. That extra orchestration is why a meaty report can take 10 to 15 minutes, and why the output feels closer to analyst work than chat search.
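The orchestration described above can be sketched as a plan-then-gather loop. This is a minimal illustration, not Manus's or Parallel's actual API; all names (`ResearchTask`, `plan`, `gather`, `synthesize`) are hypothetical, and the hard-coded subquestions stand in for what an LLM planner would produce.

```python
# Hypothetical sketch of a plan-then-gather research loop: decompose the
# question into subquestions, gather evidence per subquestion, then
# synthesize -- rather than answering from a single retrieval.
from dataclasses import dataclass, field


@dataclass
class ResearchTask:
    question: str
    subquestions: list[str] = field(default_factory=list)
    evidence: dict[str, list[str]] = field(default_factory=dict)


def plan(task: ResearchTask) -> None:
    # A real planner would call an LLM; here we hard-code the decomposition
    # from the relocation example in the text.
    task.subquestions = ["rent", "schools", "taxes"]


def gather(task: ResearchTask, search) -> None:
    # One or more search/browse calls per subquestion -- this loop is where
    # most of the 10-to-15-minute wall-clock time goes.
    for sq in task.subquestions:
        task.evidence[sq] = search(f"{task.question}: {sq}")


def synthesize(task: ResearchTask) -> str:
    # Assemble per-subquestion findings into one comparative report.
    lines = [f"Report: {task.question}"]
    for sq in task.subquestions:
        lines.append(f"- {sq}: {len(task.evidence[sq])} sources")
    return "\n".join(lines)


def fake_search(query: str) -> list[str]:
    # Stand-in for a real search/browse backend.
    return [f"source for {query!r}"]


task = ResearchTask("Where should I relocate?")
plan(task)
gather(task, fake_search)
report = synthesize(task)
```

The latency cost falls out of the structure: each subquestion multiplies the number of retrieval and synthesis steps before any final draft exists.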

  • The product behavior described here is multi-hop research, not fast answer retrieval. The Cohere product manager characterizes Tavily as a narrower search API, while positioning Parallel as broader research infrastructure for AI agents. That means more steps, more browsing, and more synthesis before the final draft appears.
  • This also matches how users are shifting from one query at a time to running many research threads that cover different angles. In finance, teams increasingly launch separate deep research tasks on supply chain risk, regulation, and competitors, then combine them into one view. Depth matters because the job is comparing interacting variables, not finding one fact.
  • There is a direct economic tradeoff underneath the latency. Manus prices tasks through credits that map to model tokens, VM compute, and third-party API calls, so longer, deeper runs are literally consuming more infrastructure. The system is spending more time because it is doing more work, not because the interface is inefficient.

The next step is faster depth rather than shallow speed. Manus has already added Wide Research with up to 100 parallel sub-agents, and the strongest products in this category are moving toward combining deeper planning with more parallel execution and domain-specific data sources. That points to a market where the winning agent is the one that feels thorough enough for real work without making users wait a quarter of an hour every time.
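The fan-out idea behind parallel sub-agents can be shown with a small concurrency sketch, again with hypothetical names (`research_thread`, the topic list from the finance example above); nothing here reflects Wide Research's real implementation. The point is that wall-clock latency approaches the slowest single thread rather than the sum of all of them.

```python
# Hypothetical fan-out sketch: run independent research threads concurrently
# instead of sequentially, preserving per-thread depth while cutting
# end-to-end wait time.
import time
from concurrent.futures import ThreadPoolExecutor


def research_thread(topic: str) -> str:
    # Stand-in for browsing + synthesis latency on one angle of the problem.
    time.sleep(0.1)
    return f"findings on {topic}"


topics = ["supply chain risk", "regulation", "competitors"]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(topics)) as pool:
    # pool.map preserves input order, so results line up with topics.
    results = list(pool.map(research_thread, topics))
elapsed = time.perf_counter() - start
# elapsed is roughly one sleep (~0.1s), not three: depth per thread is
# preserved while latency is not summed.
```

This is why "faster depth" is plausible: the extra work of deep research does not have to be serialized, only the final synthesis step does.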