Competing Agent Platforms Squeezing Margins
Manus
The core risk is that Manus does not fully control the most expensive and most substitutable layer of its product. Manus sells long-running agent jobs that bundle model tokens, browser actions, code execution, and third-party tools into one credit-priced workflow. If Anthropic or OpenAI ship similar agents directly, Manus gets squeezed from both sides: it pays them for inference while competing against their bundled end product.
- Manus pricing is tightly linked to underlying usage. A task can burn 200 to 900 credits, with costs driven by LLM tokens, VM compute time, and external API calls. Higher model prices or worse commercial terms therefore flow quickly into gross margin unless Manus raises prices or caps usage.
- The model labs are already moving up the stack. OpenAI added deep research, browser control, and code execution to ChatGPT Agent, while Anthropic expanded Claude Code with subagents and web search. That narrows the product gap between model provider and agent app.
- This pattern has already shown up elsewhere. LangChain faces the same pressure as OpenAI, Microsoft, AWS, and Google fold agent frameworks into their own APIs and clouds. In AI, infrastructure providers often turn popular downstream features into native products, which compresses the margins of wrappers built on top.
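The margin sensitivity in the pricing point above can be sketched with toy numbers. Every figure here (credit burn, credit price, cost breakdown) is a hypothetical assumption chosen for illustration, not Manus's actual unit economics:

```python
# Illustrative unit-economics sketch. All numbers are hypothetical
# assumptions, not Manus's real pricing or cost structure.

def gross_margin(credits: float, price_per_credit: float,
                 token_cost: float, vm_cost: float, api_cost: float) -> float:
    """Gross margin fraction for a single agent task."""
    revenue = credits * price_per_credit
    cogs = token_cost + vm_cost + api_cost  # inference + compute + tools
    return (revenue - cogs) / revenue

# Hypothetical task: 500 credits at $0.01/credit = $5.00 of revenue.
base = gross_margin(500, 0.01, token_cost=2.00, vm_cost=0.60, api_cost=0.40)

# Same task after a 50% increase in model token prices, with the
# credit price (what the customer pays) held fixed.
squeezed = gross_margin(500, 0.01, token_cost=3.00, vm_cost=0.60, api_cost=0.40)

print(f"base margin:     {base:.0%}")      # 40%
print(f"after repricing: {squeezed:.0%}")  # 20%
```

The point of the sketch: because inference is the dominant cost line, a moderate change in model pricing halves the margin in this example unless the credit price moves with it.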
The likely next step is a split between thin wrappers and durable workflow products. Agent companies that keep relying on frontier labs for core reasoning will need proprietary integrations, better execution infrastructure, or vertical data loops to defend pricing. Otherwise the market will look more like search and coding, where core capabilities keep getting absorbed into the foundation layer.