Model Orchestration for Writing Tasks
Dave Rogenmoser, CEO and co-founder of Jasper, on the generative AI opportunity
The key strategic point is that Jasper is not really selling a single model; it is selling the best output for each writing job. In practice, one action might use a base model with careful prompting, another might use a fine-tuned model trained on template-level feedback, and another might run multiple models in sequence to clean up or reshape text. That is how an app-layer company turns shared foundation models into a differentiated product.
-
Inside Jasper, different actions already route to different models, and some flows chain several models together. That matters because writing a Facebook ad, a blog intro, and a brand rewrite are different tasks with different quality and cost targets, so the best system is a router, not a monolith.
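The routing-plus-chaining idea can be sketched in a few lines. This is a hypothetical illustration, not Jasper's actual code: the model names, prompt templates, and the `generate()` stub are all assumptions standing in for real API calls.

```python
def generate(model: str, prompt: str) -> str:
    """Stand-in for a real model call; a production version would hit an API."""
    return f"[{model}] {prompt}"

# Each task maps to a pipeline: a list of (model, prompt template) steps.
# Illustrative names only -- the point is that routes differ per task.
PIPELINES = {
    "facebook_ad": [("small-finetuned", "Write a Facebook ad: {text}")],
    "blog_intro": [("large-base", "Write a blog intro: {text}")],
    "brand_rewrite": [
        ("large-base", "Rewrite on-brand: {text}"),
        # A second model reshapes the first model's output.
        ("small-cleanup", "Tighten and clean up: {text}"),
    ],
}

def run_task(task: str, text: str) -> str:
    """Route the task to its pipeline, feeding each step's output forward."""
    for model, template in PIPELINES[task]:
        text = generate(model, template.format(text=text))
    return text
```

Single-step tasks stay cheap, while multi-step tasks pay for an extra pass only where the output quality warrants it; that per-task cost/quality trade-off is the router's whole job.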
-
Fine-tuning helps only when the task is narrow and the data is clean. Jasper gathers ratings across 50-plus templates and uses that feedback to improve specific ones; Gamma, by contrast, found that prompt engineering often worked well enough without fine-tuning. Narrower products can get far with orchestration before investing in custom training.
-
The broader pattern across AI apps is a move from one generic writing box to workflow-specific systems. Copy.ai shifted from single-asset generation toward multi-step GTM workflows and customer-specific models, because enterprises pay for a tool that completes a job inside their stack, not just for access to a general model.
This points toward AI applications becoming model orchestration layers tuned around concrete tasks, budgets, and workflows. As foundation models get cheaper and more interchangeable, the winning products will be the ones that know when to use a big model, when to use a smaller custom one, and how to wrap both inside software that fits the customer’s exact job.