AI free tiers enable token abuse
Michael Grinich, CEO of WorkOS, on AI startups getting enterprise-ready at launch
The key point is that AI free tiers are effectively compute giveaways, so abuse shows up as people extracting expensive model output, not just creating fake accounts. In Cursor's case, attackers were not mainly trying to use the coding product as intended. They were opening new accounts to consume chat inference for outside jobs like summarization and long-form text generation, which drives real token cost on every response. That is why identity products like WorkOS are moving from simple login plumbing into fraud controls at the sign-in layer.
-
This abuse starts before the API is ever called. WorkOS Radar watches the login flow, using device fingerprints and behavioral signals to spot one device cycling through many accounts, or many devices hitting one account. That is useful when the abuse is account creation and account rotation, not just high request volume from a single IP.
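The core of that detection logic can be sketched as simple counting over login events. This is a minimal illustration, not WorkOS Radar's actual implementation: the function names, thresholds, and in-memory storage are all assumptions, and a real system would use persistent storage and far richer fingerprint signals.

```python
from collections import defaultdict

# Hypothetical in-memory signal store, keyed by device fingerprint
# and by account ID. Thresholds are illustrative, not vendor defaults.
accounts_per_device = defaultdict(set)
devices_per_account = defaultdict(set)

MAX_ACCOUNTS_PER_DEVICE = 3   # one device cycling through many accounts
MAX_DEVICES_PER_ACCOUNT = 5   # many devices hitting one account

def record_login(device_fp: str, account_id: str) -> str:
    """Record a login event and return a verdict: 'allow' or 'review'."""
    accounts_per_device[device_fp].add(account_id)
    devices_per_account[account_id].add(device_fp)
    if len(accounts_per_device[device_fp]) > MAX_ACCOUNTS_PER_DEVICE:
        return "review"  # account-rotation pattern: one device, many accounts
    if len(devices_per_account[account_id]) > MAX_DEVICES_PER_ACCOUNT:
        return "review"  # sharing/farming pattern: many devices, one account
    return "allow"
```

The point of the sketch is that the signal lives entirely at the identity layer: no API request volume is needed to flag the fourth fresh account arriving from the same device fingerprint.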
-
The business model makes this painful. Cursor’s free plan includes limited agent requests, and its paid plans are explicitly tied to model usage and inference cost. When a bad actor burns through free accounts for unrelated workloads, the company still pays the model bill while getting no real user conversion.
-
This is a broader shift in product-led SaaS. Older free-trial abuse often meant storage, bandwidth, or collaboration seats. In AI products, the scarce resource is output tokens. That pulls fraud prevention closer to identity, because the fastest way to cut loss is to stop suspicious users before they start generating expensive responses.
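Stopping suspicious users before generation amounts to a gate in front of the model call. A minimal sketch, assuming a risk score already produced at sign-in (the threshold, function names, and message are hypothetical, not any vendor's API):

```python
# Hypothetical gate: risky accounts never reach the model, so they
# incur zero token cost. Threshold is illustrative.
RISK_BLOCK_THRESHOLD = 0.8

def handle_chat_request(risk_score: float, prompt: str, generate) -> str:
    """Call the (expensive) model only for accounts below the risk threshold."""
    if risk_score >= RISK_BLOCK_THRESHOLD:
        # Blocked before inference: no tokens generated, no model bill.
        return "Request blocked pending verification."
    # Token cost is incurred only on this path.
    return generate(prompt)
```

The design choice matters for cost: rate limits after the API still pay for every response up to the limit, while a sign-in-layer block pays nothing at all for the abusive account.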
Going forward, AI apps will treat sign-up, authentication, and fraud as one system. The winners will be the products that can let real users into the app instantly, while quietly blocking recycled devices, scripted account creation, and other patterns that turn a free tier into an open compute subsidy.