Automating research feedback loops

Core Automation

Company Report
DeepMind is proving a version of the same meta-loop Core Automation is pursuing: using AI to improve algorithms and research velocity.

The real threat from DeepMind is not just a better research agent; it is a closed feedback loop in which algorithm ideas are generated, tested inside Google's own systems, and then fed back into the next round of model and infrastructure improvement. AlphaEvolve has already been used to improve Google's data centers, chip design, AI training, and external scientific and business workloads, which shows this loop can move from lab demo to production quickly.

  • Core is trying to automate the repetitive parts of research, such as literature review, experiment setup, evaluation, and debugging, so that the lab itself becomes the first customer for its own tools. That advantage compounds only if the system keeps improving from each research cycle.
  • DeepMind has already shown the practical version of that idea. Its 2025 AlphaEvolve announcement reported that the system had improved Google's data centers, chip design, and AI training, including the training of the models behind AlphaEvolve itself. The 2026 update extends that proof into genomics, power grids, and other external use cases.
  • OpenAI and Anthropic are proving adjacent pieces of the same race. OpenAI turned deep research into a mass-distribution product inside ChatGPT, and Anthropic reported that a lead agent orchestrating subagents outperformed a single-agent baseline by 90.2% on its internal research eval. Core is therefore competing against companies that can both invent and ship faster.

This market is heading toward labs that can turn AI into a machine for making better AI. The winners will be the ones with the fastest loop from idea to test to deployment to learning. That favors companies with live production environments, large compute budgets, and direct product distribution, which is why Core's path depends on making its internal loop unusually efficient very early.