Scaffolding LLMs for B2B Support

Diving deeper into Intercom's $250M/year AI bet

Building a B2B chatbot from an LLM’s non-deterministic output requires major software and data scaffolding

The hard part is not generating an answer, it is turning a probabilistic model into a support system that knows when to answer, what private facts it is allowed to use, and when to hand the case to a human. Intercom’s edge comes from already owning the inbox, the customer record, and the workflow layer that lets Fin read context, draft or send replies, and route messy cases into the same support queue humans already use.

  • A raw LLM can answer from public help docs, but B2B support usually needs account context, like plan tier, order status, or user history. Intercom describes the target state as combining the user’s question, CDP (customer data platform) fields, and the knowledge base before producing a reply, which is the core scaffolding problem.
  • The software layer around the model matters as much as the model itself. Intercom highlights rules for high-risk questions, confidence thresholds, source handling, reporting, agent assist, and handoff to humans. That is why the product is closer to an AI control tower on top of a help desk than to a simple GPT wrapper.
  • This is also why integrated players have an advantage over standalone bots. Customer.io makes the same point from the messaging side: owning the data pipeline reduces latency, broken integrations, and the need for operational debugging. In support, the same logic applies even more strongly, because wrong answers create trust and liability problems, not just missed sends.
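The scaffolding described above can be sketched as a routing function. This is a hypothetical minimal sketch, not Intercom's implementation: the `AccountContext` fields, `CONFIDENCE_FLOOR`, `HIGH_RISK_TOPICS`, and the `retrieve`/`generate`/`escalate`/`send` callables are all assumptions standing in for the CDP, the retrieval layer, the model, and the human queue.

```python
from dataclasses import dataclass, field

@dataclass
class AccountContext:
    plan_tier: str        # assumed CDP fields for illustration
    open_orders: int

@dataclass
class Draft:
    answer: str
    confidence: float     # assumed 0..1 score from the model or a reranker
    sources: list = field(default_factory=list)

CONFIDENCE_FLOOR = 0.75                                  # assumed threshold
HIGH_RISK_TOPICS = {"billing dispute", "data deletion"}  # assumed rule list

def handle_ticket(question: str, topic: str, ctx: AccountContext,
                  retrieve, generate, escalate, send):
    """Route a ticket: answer automatically only when policy and confidence allow."""
    # Rule layer: high-risk topics never get an automated reply.
    if topic in HIGH_RISK_TOPICS:
        return escalate(question, reason="high_risk_topic")

    # Assemble the grounded prompt: question + CDP fields + KB passages.
    passages = retrieve(question)
    prompt = (
        f"Customer plan: {ctx.plan_tier}; open orders: {ctx.open_orders}\n"
        "Knowledge base:\n" + "\n".join(passages) +
        f"\n\nQuestion: {question}\nAnswer only from the sources above."
    )
    draft: Draft = generate(prompt)

    # Confidence threshold: weak or unsourced drafts go to the human queue.
    if draft.confidence < CONFIDENCE_FLOOR or not draft.sources:
        return escalate(question, reason="low_confidence", draft=draft)
    return send(draft.answer, sources=draft.sources)
```

The point of the sketch is that the model call is one line in the middle; the value is in the rule layer, the context assembly, and the two escalation paths around it.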

The next phase is a split between full AI support platforms and thin agent overlays. As resolution quality rises and more actions move from answering questions to updating accounts, issuing refunds, or changing subscriptions, the winners will be the products that combine system of record data, orchestration, and human escalation in one loop.
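Once the loop moves from answering to acting, the same gating pattern applies to actions. A hypothetical sketch, assuming an auto-approval policy of the kind described above (the `REFUND_AUTO_LIMIT` value and the `issue_refund`/`escalate` callables are illustrative, not any vendor's API):

```python
REFUND_AUTO_LIMIT = 50.00  # assumed policy: larger refunds require a human

def execute_action(action: str, amount: float, issue_refund, escalate):
    """Run a model-proposed action only if policy allows it; otherwise escalate."""
    if action != "refund":
        # Any action outside the whitelist drops into the human queue.
        return escalate(action, reason="unsupported_action")
    if amount > REFUND_AUTO_LIMIT:
        # In-policy action type, but over the automation limit.
        return escalate(action, reason="over_auto_limit")
    return issue_refund(amount)
```

The design point is the same as for answers: the system of record, the policy check, and the escalation path live in one loop, which is exactly what a thin agent overlay without write access to the help desk cannot provide.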