Continual Learning to Replace Pretraining
Core Automation
This is a bet that the current frontier AI recipe is nearing a ceiling, and that the next big advantage will come from systems that improve while they are being used, not just from training bigger static models. In concrete terms, Core Automation is trying to replace the standard loop (gather a giant dataset, train a huge model, freeze the checkpoint, patch it later) with systems that keep absorbing experience from real research work and use it to automate more of the lab itself.
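To make the contrast concrete, here is a minimal toy sketch of the two recipes: a one-shot offline run that freezes a checkpoint, versus a deployed system that keeps updating from its own experience. Everything here is illustrative; the names (`Model`, `act_in_environment`, the scalar "skill") are invented for this sketch, and Core Automation has not published its training code.

```python
"""Toy contrast: pretrain-then-freeze vs. a continual learning loop.
All names are hypothetical, invented for illustration only."""

import random
from collections import deque


class Model:
    """Stand-in model: a single scalar 'skill' that updates nudge upward."""

    def __init__(self):
        self.skill = 0.0
        self.frozen = False

    def update(self, batch):
        if self.frozen:
            return
        # Pretend each training example nudges the model toward its target.
        for target in batch:
            self.skill += 0.1 * (target - self.skill)

    def act_in_environment(self):
        # Acting on real work generates a fresh training signal,
        # simulated here as a noisy target near 1.0.
        return 1.0 + random.uniform(-0.1, 0.1)


def pretrain_then_freeze(model, giant_dataset):
    """The standard recipe: one big offline run, then a frozen checkpoint."""
    model.update(giant_dataset)
    model.frozen = True  # no further learning after deployment
    return model


def continual_loop(model, steps=500, batch_size=8):
    """The alternative recipe: the deployed system keeps absorbing experience."""
    buffer = deque(maxlen=10_000)
    for _ in range(steps):
        buffer.append(model.act_in_environment())  # do real work, log outcome
        sample = random.sample(list(buffer), min(batch_size, len(buffer)))
        model.update(sample)  # small incremental update after each episode
    return model


if __name__ == "__main__":
    static = pretrain_then_freeze(Model(), giant_dataset=[1.0] * 100)
    adaptive = continual_loop(Model())
    print(f"frozen model skill:   {static.skill:.3f}")
    print(f"adaptive model skill: {adaptive.skill:.3f}")
```

The point of the sketch is structural, not numerical: in the first recipe learning stops at deployment, while in the second every unit of real work feeds the next update.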
The practical upside is research speed. Core Automation already frames its own lab as the first deployment environment: agents handle literature review, experiment setup, evaluation runs, and debugging, and human feedback on that work is folded into the next system version. If that loop works, the product is not just a model; it is a machine for running more experiments per researcher.
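A rough sketch of that internal loop, under stated assumptions: agents take on lab tasks, researchers review the outputs, and the reviewed episodes become supervision for the next system version. The task names, classes, and functions below are assumptions inferred from the description above, not Core Automation's published pipeline.

```python
"""Hypothetical lab-as-deployment-environment loop.
Structure inferred from the description; names are invented."""

from dataclasses import dataclass, field

# The four task types named above.
TASKS = ["literature_review", "experiment_setup", "evaluation_run", "debugging"]


@dataclass
class TaskRecord:
    task: str
    agent_output: str
    human_feedback: str  # a researcher's correction or approval


@dataclass
class LabLoop:
    records: list[TaskRecord] = field(default_factory=list)

    def run_agent(self, task: str) -> str:
        # Placeholder for the deployed agent doing real lab work.
        return f"draft result for {task}"

    def collect(self, task: str, feedback: str) -> None:
        self.records.append(TaskRecord(task, self.run_agent(task), feedback))

    def next_version_dataset(self) -> list[tuple[str, str]]:
        # Human-reviewed episodes become training data for the next system,
        # so each deployment cycle feeds the next training cycle.
        return [(r.agent_output, r.human_feedback) for r in self.records]


if __name__ == "__main__":
    loop = LabLoop()
    for task in TASKS:
        loop.collect(task, feedback=f"researcher notes on {task}")
    print(len(loop.next_version_dataset()), "examples for the next version")
```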
The closest small-lab comparable is Sakana AI. It is also pushing beyond the standard transformer-plus-RL stack, using evolutionary search, automated research pipelines, and alternative memory systems. That makes the market context clear: there is a real cohort of labs looking for post-transformer learning methods, not just one isolated contrarian thesis.
The competitive pressure comes from giants that can absorb any successful idea fast. Anthropic and the other frontier labs still monetize the current large-scale pretraining stack through APIs and products with distribution already in place. So for Core Automation, a new learning algorithm only matters if it compounds into a faster internal research loop before incumbents fold similar methods into much larger compute budgets.
Where this heads next is toward adaptive models as products. If continual learning and more efficient post-transformer architectures work in the lab, they can expand from internal research automation into enterprise research tools, knowledge-work software, and eventually model-layer infrastructure that is cheaper, lighter, and better at improving after deployment.