LLM Agents Democratize SaaS Support
Eoghan McCabe & Des Traynor, CEO and CSO of Intercom, on the AI transformation of customer service
Setup cost was the wedge that kept earlier chatbots from becoming a real software category for most B2B companies. Old bots paid off only when support volume was large and repetitive enough to justify weeks of scripting, intent training, and upkeep. Most SaaS companies had lower ticket volume, broader product surface area, and more varied questions, so the setup cost overwhelmed the savings. LLM-based agents changed that by answering from existing docs and handling open-ended questions without forcing users into rigid flows.
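The "answering from existing docs" approach can be sketched in a few lines: retrieve the most relevant help-center passage, then ground the model's answer in it rather than in a scripted flow. This is a minimal illustration, not Intercom's actual pipeline; the doc snippets and function names are hypothetical, and a real agent would use embedding search and an LLM call in place of the keyword overlap shown here.

```python
# Hypothetical help-center content standing in for a company's existing docs.
HELP_DOCS = {
    "billing": "Invoices are issued monthly. You can change your plan in Settings > Billing.",
    "sso": "SAML single sign-on is available on the Enterprise plan. Configure it under Security.",
    "export": "Export conversation data as CSV from Reports > Exports.",
}

def retrieve(question: str) -> str:
    """Pick the doc passage sharing the most words with the question.

    A real agent would use embedding similarity; word overlap keeps the
    sketch self-contained.
    """
    q_words = set(question.lower().split())
    return max(
        HELP_DOCS.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
    )

def build_prompt(question: str) -> str:
    """Ground the answer in retrieved docs instead of a scripted decision tree."""
    context = retrieve(question)
    return f"Answer using only this documentation:\n{context}\n\nQuestion: {question}"
```

The key economic point is visible in the structure: nothing here required scripting intents or flows up front, because the knowledge already lives in the docs.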
The dividing line was workflow shape. Consumer apps like delivery and ride-sharing saw a small menu of recurring issues, so phone-tree-style bots could pay off. SaaS products had many features, account states, and edge cases, which made scripted trees brittle and expensive to maintain.
Second-generation bots could reach roughly 50% resolution, but only after heavy manual programming. Third-generation agents flipped the economics by drawing on a company's help center and conversation context directly, which is why Intercom could move from bots as a feature to Fin as a product line priced per resolved ticket.
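The economic flip can be made concrete with back-of-envelope arithmetic. All figures below are hypothetical illustrations, not Intercom's pricing: a scripted bot carries a large up-front build cost that low-volume SaaS support takes years to repay, while a docs-grounded agent with near-zero setup and a per-resolution fee pays back almost immediately.

```python
def breakeven_months(setup_cost: float, net_monthly_savings: float) -> float:
    """Months of savings needed to repay the up-front setup cost."""
    return setup_cost / net_monthly_savings

# Scripted bot: weeks of intent training and flow design up front,
# modest savings at typical SaaS ticket volume. (Hypothetical numbers.)
scripted = breakeven_months(setup_cost=60_000, net_monthly_savings=2_000)  # 30.0

# LLM agent answering from existing docs: near-zero setup; the
# per-resolved-ticket fee is netted out of the monthly savings.
llm_agent = breakeven_months(setup_cost=1_000, net_monthly_savings=1_500)  # ~0.67
```

Under these assumptions the scripted bot needs 30 months to break even, while the LLM agent breaks even inside the first month, which is the sense in which lower setup cost opens the category to lower-volume B2B companies.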
This is also why the market split. Legacy help desks and chatbot vendors grew around routing and scripted deflection, while newer players like Fin, Sierra, and Decagon compete on how well the agent can actually solve the issue, take actions, and plug into the existing support stack.
Going forward, the winners in customer service will be the platforms that turn support from labor management into resolution infrastructure. As setup cost falls toward zero and autonomous resolution rates keep rising, AI agents will spread from chat into voice, proactive onboarding, and cross-platform deployments, pulling budget away from seat-based help desks and outsourced support teams.