Core Automation's narrow timing risk
This risk is fundamentally about speed asymmetry. Core Automation has roughly $100M of confirmed funding and no shipped product, while the labs it needs to beat already have giant compute budgets, distribution, and live products for research and coding workflows. That means Core has to prove a post-transformer learning advantage fast enough to matter before larger labs fold similar ideas into systems they are already selling at scale.
- Core is still a lab-first company. It is using AI agents to automate paper reading, experiment setup, evaluation, and debugging inside its own research loop, but as of May 2026 it has no API, pricing page, or commercial product. Capital is therefore being spent on research and compute before customer revenue starts to offset burn.
- The incumbents are not standing still. OpenAI and Anthropic are already large commercial model companies with estimated annualized revenue of $25B and $30B respectively by early 2026, and Core's own market map points to both already productizing research-style and coding-style agents. If continual learning works, they have the cash and deployment surface to absorb it quickly.
- There is a middle case where the research is directionally right but the value is captured elsewhere. Sakana shows there is investor appetite for small labs pursuing alternatives to transformer scaling, but enterprise buyers often end up purchasing from productized vendors like Hebbia, which sell reliability, security, and workflow integration rather than raw research novelty.
The path forward is a race to turn internal compounding into an external moat. If Core can show that its systems learn faster from deployment and let a small team produce frontier-quality work with far fewer people and GPUs, it can become more than a research experiment. If not, the likely outcome is that larger labs and enterprise agent platforms capture the commercial layer first.