Open-source models commoditize core technology
OpenAI
Open models shift value away from the model itself and toward distribution, product workflow, and proprietary data. Once a company can take a strong frontier model, distill its behavior into a smaller Llama or Mistral variant, and serve it behind an OpenAI-compatible endpoint, the hard part is no longer getting raw intelligence. The hard part is owning users, context, evals, and the application layer where the model actually gets used every day.
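Why the endpoint compatibility matters: because the wire format is the same, swapping a frontier model for a distilled open one is a config change, not a rewrite. A minimal sketch below; the URLs, provider names, and model IDs are illustrative placeholders, not specific products.

```python
# Hypothetical endpoints -- any server that speaks the OpenAI chat-completions
# request shape (e.g. a self-hosted open model behind a compatible gateway).
PROVIDERS = {
    "frontier_api": "https://api.example-frontier.com/v1/chat/completions",
    "local_llama": "http://localhost:8000/v1/chat/completions",
}

def build_request(provider: str, model: str, prompt: str) -> dict:
    """Build the same wire-format request regardless of provider.

    Because the body schema is identical across providers, the application
    layer stays untouched when the model underneath changes.
    """
    return {
        "url": PROVIDERS[provider],
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

frontier = build_request("frontier_api", "frontier-large", "Summarize this ticket.")
distilled = build_request("local_llama", "llama-3.1-8b-finetune", "Summarize this ticket.")

# Only the URL and model name differ; the payload schema is unchanged.
assert frontier["body"].keys() == distilled["body"].keys()
```

The one-line switch is exactly what makes raw model access hard to defend as a moat.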
-
The clearest pressure point is price. Research on fine-tuning shows that teams like Ramp, Notion, and Databricks can use frontier-model outputs to train smaller open models for narrow tasks, cutting inference cost by roughly 90% while improving speed and consistency on repetitive workflows.
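The economics of that trade can be sketched as simple break-even arithmetic. All numbers below are illustrative assumptions, not actual provider prices:

```python
# Illustrative costs only -- real prices vary widely by provider and model.
frontier_cost_per_1k_calls = 30.00   # hypothetical large frontier model
distilled_cost_per_1k_calls = 3.00   # ~90% cheaper after distilling to a small open model
one_time_training_cost = 5_000.00    # hypothetical fine-tuning + eval spend

# Each thousand calls served by the distilled model saves the difference.
savings_per_1k_calls = frontier_cost_per_1k_calls - distilled_cost_per_1k_calls

# Calls needed before the distillation effort pays for itself.
break_even_calls = one_time_training_cost / savings_per_1k_calls * 1000
```

Under these assumed numbers the break-even is on the order of a couple hundred thousand calls, which a high-volume internal workflow can cross in days.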
-
This already changes buyer behavior. Developers increasingly mix providers by task, using one model for hard reasoning, another for fast cheap calls, and open models where privacy or cost matters. That makes it harder for any one API vendor to charge a permanent foundation-model tax.
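The mix-and-match pattern above is often just a routing table. A minimal sketch, with hypothetical task types and model names:

```python
# Hypothetical per-task routing policy; providers and model names are illustrative.
ROUTES = {
    "hard_reasoning": {"provider": "frontier_api", "model": "frontier-large"},
    "bulk_extraction": {"provider": "frontier_api", "model": "frontier-mini"},
    "pii_processing": {"provider": "self_hosted", "model": "open-weights-8b"},  # data stays in-house
}

def route(task_type: str) -> dict:
    """Pick a provider/model for a task; default to the cheap tier."""
    return ROUTES.get(task_type, ROUTES["bulk_extraction"])

assert route("pii_processing")["provider"] == "self_hosted"
assert route("unknown_task")["model"] == "frontier-mini"
```

Once this table exists, any single vendor is one row among several, which is the mechanism behind the eroding "tax."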
-
Meta pushed this dynamic forward by releasing Llama 3.1 405B as an open model in July 2024, while OpenAI itself moved in the same direction with gpt-oss open weight models in August 2025. Once both frontier labs publish strong weights, model quality diffuses faster across the ecosystem.
Going forward, the winners are likely to be the labs and apps that turn models into sticky products, not the ones that rely on model scarcity alone. OpenAI can still win, but the durable moat looks more like ChatGPT, developer workflow, enterprise trust, and proprietary usage data than exclusive access to the best base model.