LLMs Replace Condition-Based Chatbots
Eoghan McCabe & Des Traynor, CEO and CSO of Intercom, on the AI transformation of customer service
The conversation shows how limited early chatbots were, and why the winners in customer service are shifting from workflow builders to systems that can actually understand language. Drift’s first product worked like an on-site phone tree: a team mapped buttons, branches, and routing rules by hand, then sent visitors to sales, support, or booking flows. That worked best for narrow, repetitive web conversations, but it broke down once questions became messy, open-ended, or required real context from docs and prior tickets.
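The phone-tree style can be sketched in a few lines. This is an illustrative toy, not Drift's actual code: every branch, keyword, and destination is defined by hand in advance, and anything outside the map falls through unhandled.

```python
# Toy condition-based "phone tree" bot: hand-mapped keywords route visitors.
ROUTES = {
    "sales": ["pricing", "demo", "buy"],
    "support": ["bug", "error", "broken"],
    "booking": ["meeting", "schedule", "call"],
}

def route(message: str) -> str:
    """Send the visitor to a team if the message contains a mapped keyword."""
    text = message.lower()
    for team, keywords in ROUTES.items():
        if any(kw in text for kw in keywords):
            return team
    # Messy, open-ended questions hit no branch and land here.
    return "fallback"
```

A question like "Why does my invoice show proration after a mid-cycle seat change?" matches no branch, which is exactly the failure mode described above.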
-
Drift’s core use case was conversational marketing, not deep support automation. Companies added a JavaScript snippet to their site, watched visitor behavior, then used playbooks to qualify leads, book meetings, and route prospects to reps. The product was strongest when the goal was steering a website visitor into a sales motion.
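A playbook of this kind boils down to a hand-written behavioral rule. The sketch below is hypothetical (not Drift's actual configuration format) and shows the shape: fire a message when observed visitor behavior matches the rule.

```python
# Hypothetical conversational-marketing playbook trigger: qualify a visitor
# for a "book a demo" flow based on observed on-site behavior.
def should_fire(visitor: dict) -> bool:
    """Return True when the visitor qualifies for the demo-booking playbook."""
    return (
        visitor.get("page") == "/pricing"            # high-intent page
        and visitor.get("seconds_on_page", 0) > 30   # lingering, not bouncing
        and visitor.get("company_size", 0) >= 50     # fits the sales motion
    )
```

The strength and the limit are the same thing: the rule is precise for a known sales motion, and blind to everything else.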
-
The practical problem with condition-based bots was upkeep: every branch had to be predefined, every answer written in advance, and every new issue added manually. Intercom’s next-generation Resolution Bot improved on this with fuzzy matching, but it still depended on curated answers and reached only around 50% resolution in many cases.
-
The strategic jump with LLM-based bots is that they reverse the setup burden. Instead of building a tree, a company can point the bot at its help center and conversation history and let it interpret free-form questions. That is why newer agents are measured on containment and resolution rates, not on how many flows a team configured.
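The "point it at your help center" setup reduces to retrieval plus a model call. The sketch below is a toy under stated assumptions, not Intercom's or any vendor's internals: `retrieve` is a bag-of-words stand-in for a real retriever, and `call_llm` is a hypothetical placeholder for whatever model API the platform uses.

```python
import re

# Existing docs become the bot's knowledge base; no flows are configured.
HELP_CENTER = [
    "Refunds: purchases can be refunded within 30 days from Billing > Invoices.",
    "SSO: enable SAML single sign-on under Settings > Security.",
    "Exports: download conversation data as CSV from Reports > Export.",
]

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, docs: list[str]) -> str:
    """Pick the doc sharing the most words with the question (toy retriever)."""
    q = tokens(question)
    return max(docs, key=lambda d: len(q & tokens(d)))

def call_llm(prompt: str) -> str:
    # Placeholder: echo the prompt instead of calling a real model.
    return prompt

def answer(question: str) -> str:
    context = retrieve(question, HELP_CENTER)
    return call_llm(f"Using this help article:\n{context}\n\nAnswer: {question}")
```

The team's work shifts from authoring branches to maintaining the docs the retriever draws on, which is what containment and resolution metrics actually measure.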
From here, customer service software keeps moving away from scripted routing and toward full resolution. That favors platforms with the richest support context (docs, inbox history, CRM data, and workflow actions), because the next competitive step is not better branching logic; it is an agent that can understand the issue and finish the job.