
How does Jasper's fine-tuning process add value to users' content despite trained algorithms like GPT-3 being widely available?

Dave Rogenmoser

Co-founder & CEO at Jasper

Today, you're seeing more conversations about whether we can do mostly what we're doing now with a model that's maybe 10% the size of GPT-3. What we've found is that yes, in specific situations, and perhaps with a narrower set of use cases, you don't need these huge models.

They’re expensive to use, they're expensive to work with, they're hard to train, and if you can do what you want to do with a smaller model, that’s going to be much better. 

I can't definitively say that models are going to start getting smaller. Models will probably continue to get bigger as we see what they can really do, but it's definitely not a given that bigger is better. By and large, if we can do it with a smaller model, we're going to do that. But obviously, GPT-3 hit on something with a big model that works really, really well.

Find this answer in the full interview: Dave Rogenmoser, CEO and co-founder of Jasper, on the generative AI opportunity