Model Customization Becoming Infrastructure
Thinking Machines
The key issue is that model customization is quickly becoming infrastructure, not a unique product layer. Bedrock, Vertex AI, Azure AI Foundry, Hugging Face, and NVIDIA all now let enterprises pick a base model, tune it on their own data, and push it into production without committing to one lab's model family. That makes Thinking Machines' customization layer a wide market entry point, but it also pulls the company into a crowded control-plane battle where flexibility, not model quality, is the main differentiator.
The hyperscalers package the full workflow inside existing cloud accounts. Bedrock supports fine-tuning, reinforcement fine-tuning, custom model import, and managed deployment. Vertex AI Model Garden lets teams discover, tune, and deploy Google and third-party models. Azure AI Foundry supports fine-tuning and serverless deployment across its model catalog.
Hugging Face attacks from the open-source side. A team can train with AutoTrain, publish weights to the Hub, and deploy them through Inference Endpoints on managed cloud infrastructure. That matters because open source lowers switching costs and gives enterprises a path to own the model artifact instead of renting one provider's hosted API forever.
NVIDIA and specialist platforms push the same wedge further upmarket. NVIDIA AI Foundry is built around turning enterprise data into a custom model and then shipping it as a NIM microservice. Fireworks AI shows how large this neutral layer can get, reaching an estimated $130M in revenue by May 2025 by selling fine-tuning and deployment outside any single model vendor's stack.
The market is heading toward a split where base model labs win on model quality, while the orchestration layer wins on choice, governance, and deployment workflow. Thinking Machines can expand far beyond a single model if it becomes the place where enterprises manage customization across providers. The prize is large, but the winning product will look less like a lab add-on and more like core cloud infrastructure.