Customers Build Their Own Interviewers

Joe Kim, CEO of Office Hours, on the end of crowdwork

The claim: customers are going to prefer to build their own AI models for this use case.

This points to AI interviewing becoming an infrastructure layer, not a durable product moat. In expert research, the hard part is not turning speech into questions, it is encoding a firm's own judgment about what to ask, when to push, and which answers matter. That favors buyers with large transcript archives and established analyst workflows, while leaving room for platforms that reduce scheduling friction, source experts, and manage compliance.

  • Tegus showed the strongest version of the data-moat argument: a large transcript library sold through subscription search. But even there, former operators describe question quality as highest when tied to a live investment thesis, which means the real asset is the customer's research process, not just the raw transcript corpus.
  • Office Hours is positioned around workflow pain that customers do not want to rebuild. It finds and vets experts, handles compliance, and lets interviews happen asynchronously so an expert can start, pause, and resume. That is a concrete advantage even if the interviewing model itself becomes interchangeable.
  • The closest analogy is AlphaSense and Tegus, where proprietary content mattered most when paired with distribution and workflow software. AlphaSense built around search on top of owned and licensed content, while Office Hours is betting that in primary research, access and execution will matter more than owning the best interviewing model.

The market is heading toward customer-tuned agents sitting inside broader research stacks. Platforms that survive will look less like sellers of a magic interviewer and more like operating systems for primary research, combining expert supply, compliance, scheduling, transcripts, and synthesis while letting large customers bring their own models.