Tegus Built a Transcript Factory

Company Report
Traditional financial data providers are introducing expert call add-ons but lack the transcript scale and compliance infrastructure Tegus developed over nearly a decade.
Analyzed 6 sources

This claim matters because Tegus built a content factory, not just an expert call feature. Traditional data terminals are built to deliver numbers, filings, and news in rigid fields. Tegus built an operating system for sourcing experts, running recorded calls, checking compliance, turning each call into reusable text, and then linking that text back to companies, sectors, and models. That is why incumbents can launch add-ons faster than they can match Tegus's depth.
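
The "operating system" described above can be sketched as a small ingest pipeline: a cleared call goes in, a tagged transcript linked to companies and sectors comes out. Everything here (class names, fields, the keyword-based entity linking) is a hypothetical illustration of the workflow, not Tegus's actual architecture or schema:

```python
from dataclasses import dataclass, field

@dataclass
class ExpertCall:
    expert_id: str
    audio_minutes: int
    compliance_cleared: bool
    raw_text: str  # output of the transcription step

@dataclass
class Transcript:
    call: ExpertCall
    tickers: list = field(default_factory=list)
    sectors: list = field(default_factory=list)

def ingest(call: ExpertCall, tag_index: dict) -> Transcript:
    """Turn one cleared call into a reusable, searchable library asset."""
    if not call.compliance_cleared:
        # The compliance gate precedes publication in the workflow above.
        raise ValueError("call blocked at compliance review")
    t = Transcript(call=call)
    # Naive entity linking: map keywords found in the text to tickers/sectors,
    # so the transcript can be retrieved by company and sector later.
    for keyword, (ticker, sector) in tag_index.items():
        if keyword.lower() in call.raw_text.lower():
            t.tickers.append(ticker)
            t.sectors.append(sector)
    return t

# Hypothetical usage: one keyword index, one cleared call.
index = {"Snowflake": ("SNOW", "Software")}
call = ExpertCall("exp-1", 60, True, "The expert discussed Snowflake margins.")
t = ingest(call, index)  # t.tickers == ["SNOW"]
```

The point of the sketch is the shape of the factory: each stage (sourcing, recording, compliance, transcription, tagging) is a repeatable step, which is what makes the output a dataset rather than a one-time service.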

  • Tegus monetized the library, not mainly the call. One former engineering leader described the core asset as the transcript dataset, with calls priced near cost, around $300 to $400, while a single library seat targeted roughly $25K annually. That model pushed the company to maximize reusable transcript volume and searchability over one-off call economics.
  • Scale came from years of workflow tuning. Tegus ran a large analyst operations team focused on expert discovery, outreach, scheduling, transcription, tagging, and turnaround speed. Former operators compared the benchmark not to Bloomberg or FactSet, but to expert networks like GLG and Guidepoint, because winning required both better content quality and tighter execution under strict compliance rules.
  • Incumbents are catching up, but from a different starting point. FactSet has invested heavily in APIs, AI summarization, auditability, and workflow integration, but its historical strength is structured data and trusted delivery into client systems. AlphaSense bought Tegus for $930M because the missing piece was proprietary expert content, a library that passed 100,000 transcripts in 2024 and later 200,000, not just better search.
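
The pricing figures in the first bullet imply why the library, not the call, is the asset. A back-of-envelope calculation (the break-even framing is mine; the dollar figures are the ones quoted above):

```python
# One $25K library seat equals the revenue of roughly 62-83 individually
# billed calls at $300-$400 each, yet costs Tegus almost nothing extra to
# serve once the transcripts exist.
SEAT_PRICE = 25_000                          # annual library seat, USD
CALL_PRICE_LOW, CALL_PRICE_HIGH = 300, 400   # per-call, priced near cost

breakeven_high = SEAT_PRICE / CALL_PRICE_LOW   # ~83.3 calls
breakeven_low = SEAT_PRICE / CALL_PRICE_HIGH   # 62.5 calls

print(f"One seat = {breakeven_low:.0f} to {breakeven_high:.0f} calls of revenue")
```

Under those assumed numbers, every transcript added to the library raises seat value at near-zero marginal cost, while a per-call business re-incurs the full cost of each engagement.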

The market is moving toward bundled research stacks where public documents, private transcripts, models, and AI sit in one interface. That favors companies that already own proprietary qualitative data and the machinery to keep producing it. Over time, the advantage will shift further toward platforms that can both generate fresh expert content at scale and pipe it directly into investor workflows through summaries, APIs, and agent-style tools.