OpenAI and Anthropic productizing research automation
Core Automation
The real threat is not better models alone; it is faster productization on top of those models. OpenAI already ships analyst-style research inside ChatGPT, where deep research searches, analyzes, and synthesizes hundreds of sources into a cited report, and Anthropic has built a Research product in which a lead agent spins up parallel subagents to investigate different parts of a question. That means the frontier labs can turn model gains into end-user workflow gains immediately, and at massive scale.
- OpenAI has the strongest built-in distribution. Deep research launched in ChatGPT in February 2025, later expanded across paid tiers, and by February 2026 added app and MCP connections plus trusted-site restrictions. That gives OpenAI a ready-made surface for dropping research automation into existing consumer and enterprise seats.
- Anthropic is proving the same pattern from a different angle. Its engineering writeup says Research uses a planner agent that creates parallel subagents with separate context windows, and later product docs show Claude Code delegating work to specialized subagents automatically. The capability is not just a demo; it is becoming a reusable product primitive.
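The planner-plus-parallel-subagents pattern described above can be sketched in a few lines. This is a hypothetical illustration, not Anthropic's implementation: the model and search calls are stubbed out, and all names (`lead_agent`, `subagent`, the fixed three-way split) are invented for the sketch. The point it shows is the structure: each subagent keeps its own context so retrieved material never crowds the lead agent's window, and the lead agent only sees the synthesized findings.

```python
import asyncio

async def subagent(sub_question: str) -> str:
    # Each subagent holds a separate context (its own message list),
    # isolating its sources from the lead agent's window.
    context = [{"role": "user", "content": sub_question}]
    await asyncio.sleep(0)  # stand-in for search + model calls
    return f"findings for: {sub_question}"

async def lead_agent(question: str) -> str:
    # The planner decomposes the question (stubbed as a fixed split)...
    sub_questions = [f"{question} (angle {i})" for i in range(3)]
    # ...fans out to subagents running concurrently...
    findings = await asyncio.gather(*(subagent(q) for q in sub_questions))
    # ...then synthesizes one report from their condensed findings.
    return "\n".join(findings)

report = asyncio.run(lead_agent("market impact of research agents"))
print(report)
```

The design choice that matters for the product argument is the fan-out: parallel subagents make breadth cheap, which is exactly what lets a lab turn one model into a whole research workflow.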
- The closest analog is what happened in coding. Anthropic turned Claude into the backbone of AI coding tools, and OpenAI has been moving toward vertically integrated, full-stack products. Once a frontier lab finds a high-value workflow, it can bundle model, interface, and distribution before startups in that workflow have time to build a moat.
Going forward, research automation is likely to look less like a standalone app category and more like a feature that frontier labs keep absorbing into broader work surfaces. The winners will be companies with proprietary data, regulated workflows, or deep system integration that a general-purpose lab cannot copy by shipping one more tab in its core product.