Engineering leader at Tegus on building a data platform for expert interviews

Jan-Erik Asplund

Background

We spoke with an engineering leader at Tegus who helped navigate the company's transition from an expert network to a data platform for investment research.

The conversation explores how Tegus differentiated from competitors by building a transcript library, the operational challenges of scaling expert interviews, and how AI is transforming the value proposition of expert networks.

Key points via Sacra AI:

  • Tegus positioned as a data company, not a services company, building a transcript library of expert calls that became their primary asset—with subscription access to the library at $25K per seat driving revenue more than the $300-400 per-call fees. "That was one of the big shifts that Tegus embodied—they saw themselves as a data company, not a people company, whereas GLG thinks of itself more like a Rolodex company... The asset was the data we were collecting... Our real revenue generator was the subscription library, the subscription product."
  • The biggest operational friction in expert networks was discovery—analysts spent half their time finding experts on LinkedIn and getting them to respond—so Tegus built tools to make this process more efficient while focusing on the demand side rather than building relationships with experts. "The analysts spend probably at least half of their time, if not more, just literally finding the expert to actually be on the call. Trying to make that process of mostly just scraping through LinkedIn more efficient was a challenge... They spent more time focusing on the demand side, not the supply side."
  • As AI flattens the information edge from public data, expert knowledge becomes more valuable; Tegus acquired multiple companies (BamSEC, Canalyst, Fincredible) to build a comprehensive investment data platform that could extract structured insights from expert calls and link them to models, filings, and other datasets. "The information age, the last 20 years, was about making information easier to get. AI just totally flattened that. There's no information edge; any information that's public is super easy to get answers from, right at your fingertips... That actually makes experts more valuable. The only information that isn't flattened is the stuff that's in people's heads that isn't written down on the internet."

Questions

  1. I imagine people didn't think of expert interviews as a big tech or a software play when you started at Tegus. So I'm curious how you approached the opportunity and challenge at Tegus as an engineering leader?
  2. When you think about expert interviews as data, can you break that down for me? Was there a strategy from the beginning to develop other kinds of data besides just having this expert transcripts library?
  3. In terms of the scaling challenges at that point in time, what would you say were the biggest scaling challenges overall? And how did they impact your engineering org?
  4. If you had to single out a specific point of friction, is there any specific point of friction that comes to mind—even if it's anecdotal?
  5. An expert transcript business is analogous to a classic two-sided marketplace situation. Did you do something to get the supply side—the experts—built up instead of waiting for a customer to want to connect to someone in an industry? Go out and interview a bunch of likely interview targets to fill up inventory, so to speak?
  6. So in a marketplace sense, this was focusing more on the demand side versus the supply side.
  7. How much was the subscription product for context?
  8. You mentioned earlier with AI now, some of this stuff is easier. At the late stage of your trajectory, how did you think about AI? And then what would you do differently today with the technology that's available?
  9. I was told that one of AlphaSense's big advantages going into 2022 and 2023 was that it had been doing AI stuff even if with humans in the loop for a long time, kind of like that labeling and stuff you were talking about. Does that ring true for you?
  10. Were you surprised when AlphaSense decided to acquire Tegus? And from your personal opinion, what do you think was the big benefit to AlphaSense?
  11. What about operational efficiencies in expert interviewing: How much progress do you feel like you made in cracking the nut of making that traditionally human process super efficient? And do you think that was also a factor in AlphaSense acquiring Tegus?
  12. I wonder if there were any other gaps in the market that you saw at the time. Where are investment data and research companies underserving end customers?
  13. You mentioned VCs. I think one issue for a lot of companies in this space is there's this big split in the markets. It's blurrier now, but there's still a big split between private and public markets. How do you think that impacted the business? Did that flow down to the product or engineering level?
  14. What do you think the endgame is for expert interviews? Is it armies of agents interviewing people and taking a lot of that friction out? Does that water down the value or quality? What do you see?
  15. And you came across experiments with automated calls?
  16. Who did you view as your competitors, and did that competitive set change over time in conversations at Tegus?
  17. So for Tegus, expert transcripts ended up being a product or a feature within a broader platform. Do you think it's inevitable that companies in this space just try to add on more products, more features, more coverage, more data, and the game eventually goes towards breadth versus depth or one type of product?
  18. I wanted to ask about the four acquisitions: Canalyst, BamSEC, Fincredible, and Tegus. Obviously, that's a lot of integrating. Can you walk me through the acquisition headaches and how an organization at the scale of Tegus at the time was able to ingest three companies?
  19. I want to ask about the data aspect a little bit more. What about extracting data, actionable data, data that you can go on and structure in some way later from expert interviews themselves? Do you think that's something that's doable with a huge corpus of expert transcripts?
  20. So it's not necessarily identifying, oh, in this call, this expert actually gave the company's retention rate. It's not necessarily hunting for those really high calorie numbers.
  21. What about AI and other technologies' ability to sort of bring data and insight platforms closer to the transaction and investment point—or even just embed deeper into investors' workflows?
  22. So to your knowledge, they didn't do that? Expose the data to APIs?

Interview

I imagine people didn't think of expert interviews as a big tech or a software play when you started at Tegus. So I'm curious how you approached the opportunity and challenge at Tegus as an engineering leader?

That was one of the big shifts that Tegus embodied—they saw themselves as a data company, not a people company, whereas GLG thinks of itself more like a Rolodex company. There are obviously true tech companies out there, but we treated it more like a data company: very tech enabled, but the asset wasn't our technology. The asset was the data we were collecting.

The library of transcriptions of expert calls, and then gradually getting into other datasets—that was happening as I was transitioning out. In the last six months, we were starting to really expand into other datasets. And that was a big shift for Tegus. It wasn't just a service that helps you find people. It was actually a data platform that gave access to lots of rich data. To be a data company, you need a technology platform to enable that dataset. When I joined, I didn't fully understand that what I was joining was this hybrid—not a services company, but not a pure tech company. It really was the data that was the asset.

When you think about expert interviews as data, can you break that down for me? Was there a strategy from the beginning to develop other kinds of data besides just having this expert transcripts library?

That's hard for me to say; I wasn't there from the very beginning. But Tom and Mike, the founders, always saw expert data as step one, and there were lots of other datasets that were really valuable in investment research. Mostly investment research, though we also did stuff for consultants. The two big buckets of expert use are investment and consulting, and then maybe market research for product or marketing orgs. The first two are the biggest, and they leaned more into the investment side because that's where they both came from.

They always thought about what investment datasets would be valuable, and they felt like they had a unique way to approach that through experts. Nobody had ever done the transcript library model before. Everyone had always said the job of an expert network like GLG is just to help someone who needs an expert find that expert, and then get out of the way. Tegus' strategy was that the data being captured during that call is valuable to a lot more people than just the customer. Basically like what we're doing right now, in this case: this content will be valuable to lots of other people beyond just you and your company.

The idea was making expert transcripts a dataset that is useful and actionable. As we evolved, we looked at what we could do to enhance those datasets, both by adding other datasets and by making the data more discoverable. Tagging data, for example: say a transcript is mostly about Tegus, but GLG gets mentioned in the call. That mention can get tagged, so anyone doing research on GLG loops in and finds that Tegus is related, maybe a competitor.

Finding those connections, adding summaries to calls—all things that are so much easier now with AI—but at the time, we were investing in how to add structure to an expert call. Same with earnings calls: we bought a company that did transcription of quarterly reporting, and we captured that data as well. BamSEC had all the investment filings. The value was in bringing all those datasets together.

The idea was to harness the power of being able to have not just one dataset of expert calls, but a group of datasets all in one platform that can cross link and understand each other—that was the vision of being able to connect all that data together in one place.
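The tagging and cross-linking idea described above can be sketched minimally. This is a hedged toy version, not Tegus's actual system: the alias table, transcript IDs, and text are invented for illustration, and a production system would use entity resolution rather than keyword matching.

```python
import re

# Toy alias table: company -> names it might appear under in a transcript.
# Entries are illustrative, not a real tagging vocabulary.
ALIASES = {
    "Tegus": ["Tegus"],
    "GLG": ["GLG", "Gerson Lehrman"],
    "AlphaSense": ["AlphaSense"],
}

def tag_transcript(text, aliases=ALIASES):
    """Return the set of companies mentioned in a transcript's text."""
    tags = set()
    for company, names in aliases.items():
        for name in names:
            # Word-boundary match so short names don't fire inside other tokens.
            if re.search(r"\b" + re.escape(name) + r"\b", text, re.IGNORECASE):
                tags.add(company)
                break
    return tags

def build_index(transcripts, aliases=ALIASES):
    """Inverted index: company -> list of transcript IDs that mention it."""
    index = {}
    for tid, text in transcripts.items():
        for company in sorted(tag_transcript(text, aliases)):
            index.setdefault(company, []).append(tid)
    return index

transcripts = {
    "call-001": "Mostly a deep dive on Tegus, though GLG came up as a competitor.",
    "call-002": "An AlphaSense product walkthrough.",
}

index = build_index(transcripts)
# A researcher looking up GLG now also surfaces the Tegus-focused call:
print(index["GLG"])  # ['call-001']
```

The point of the sketch is the inverted index: once mentions are tagged, a search on any company surfaces transcripts that are nominally about a different company, which is the cross-linking behavior described in the answer.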

In terms of the scaling challenges at that point in time, what would you say were the biggest scaling challenges overall? And how did they impact your engineering org?

The biggest ones were more on the operational scaling side. It was totally refreshing having come from a [B2C] company where 60 million people came to our business every month. At Tegus, the only people who came on our platform were our customers, paying customers, and they pay a lot per seat. So the volume of traffic was tiny compared to what I was used to having to support.

The scaling challenges in our business all had to do with operational things: the cost of a human in the loop trying to help connect experts to our customers (by the way, that's what Office Hours is doing really well right now). We were trying to figure out ways to make the path from needing an expert to actually being on a call with that expert as efficient as possible, both in terms of time and in terms of the human capital involved in the process.

That was the biggest challenge. We had an army of 100 people we called analysts, but they were basically the operations people who actually got the calls arranged. What could we do to make their process as efficient as possible so that you ultimately get access to the expert as quickly as possible? That was the hardest part from a scaling perspective. You can keep throwing people at it, but that's really expensive. So that was where we saw the scaling challenges and opportunities.

If you had to single out a specific point of friction, is there any specific point of friction that comes to mind—even if it's anecdotal?

It was a few things. There are two big steps, and the first is discovery. The analysts spend probably at least half of their time, if not more, just literally finding the expert to actually be on the call. Trying to make that process of mostly just scraping through LinkedIn more efficient was a challenge. For us, that meant creating enabling tools that made the process of discovering a person, whether on LinkedIn or any other platform (but primarily LinkedIn), more efficient. That meant figuring out if we could get their contact information, having that contact information available, and being able to trigger an outbound connection, whether an email, text, or phone call. That process was what we spent a lot of time trying to operationalize and make efficient.

The next step in the process is even harder, but a little less tech enabled: actually getting the expert you're reaching out to to respond and engage, and convincing them that they should be on the call. That part still requires some human touch. We could automate a bunch of things, but ultimately there's a relationship that has to be built there. That's still where the human process is valuable.

An expert transcript business is analogous to a classic two-sided marketplace situation. Did you do something to get the supply side—the experts—built up instead of waiting for a customer to want to connect to someone in an industry? Go out and interview a bunch of likely interview targets to fill up inventory, so to speak?

This would be a place where I was new to the space, so I won't profess to be as much of an expert as the founders were. But to me, that felt like a big opportunity. Our leadership team didn't have full agreement on that all the time. A few of us felt there was a big opportunity to spend more time building relationships with the experts, both pre-call and post-call: trying to discover possible experts for frequent topics ahead of time and having them at the ready. But even more than that, once someone has done an expert call, they've proven they're an expert for somebody. That's a pretty good lead for future calls, and if you build a relationship with them such that they have a good experience, it's easy for them to do the next call. It doesn't take as much effort to convince them to do the second and third and fourth call. That's also a big opportunity.

On both sides of that, there's a big opportunity. Honestly, Office Hours, which I mentioned, is much more proactive about doing that than Tegus was. It wasn't traditional to do that. The founders' perspective, which I trust—they had a lot of deep experience—was that we can always scale this later with human touch in the process. But the relationship effort, doing that extra relationship work with the expert, is away from where the money is. It's putting time into the expert, who we eventually want to make money from, but who isn't actually paying us. They wanted the operations team spending more and more time with the customer, not the other side of the marketplace.

So in a marketplace sense, this was focusing more on the demand side versus the supply side.

They spent more time focusing on the demand side, not the supply side. To me, the supply side felt like an opportunity because of what matters most: there are three things that matter most in setting up an expert. One, of course, is their expertise; everyone knows and expects that. The second is efficiency, and the third is how often they're going to convert to a call. How quickly can you get a few experts in front of the customer so they have options to say "yes, that's aligned with what I want" or "no, that's not"? You've got to do that as quickly as you can. And obviously, they've got to convert at a pretty high rate.

Of those two challenges, the conversion issue was expected. What I wasn't expecting was the urgency around time. You'd think: do you really need this call to happen in the next 24 hours? What difference does it make if it happens in 24 hours versus 48 versus 72? But the urgency on timing was always really acute. Our customers always wanted the call to happen as quickly as they possibly could. Often that was the decision maker for them, much more than cost. Our model was such that you pay a much lower cost per call: more like $300 or $400 a call, as opposed to $1,000 a call as a normal market price. Because we were mostly monetizing subscriptions, that was just about pass-through cost, making sure we subsidized the cost of the expert. But our real revenue generator was the subscription library, the subscription product.

How much was the subscription product for context?

As you'd imagine, it's a pretty wide range, but the ideal target at that time was about $25K just for one seat for the transcription library.

You mentioned earlier with AI now, some of this stuff is easier. At the late stage of your trajectory, how did you think about AI? And then what would you do differently today with the technology that's available?

We were just in the middle of hiring a team related to AI. As a technologist, of course, I lean on technology first, and I was bullish on doing a lot more with AI than some of the leadership team was. At the time, we were building a team to do a lot of tagging of content. It was going to be a hybrid of human and AI, but with a heavy dependency on human labeling. That was just as OpenAI exploded and changed the world. It shifted very quickly to: wait a second, there is definitely no reason to have a bunch of humans driving this process. This should all be technology driven, AI driven.

That shifted quite a bit. Take a feature like call summaries: it's really annoying if you think a call may be interesting, but it's going to take you 20 minutes to read or skim through it to figure out whether it's actually relevant to your investment thesis or your consulting needs. Having a summary of the call, having the highlights of the call—those sorts of things became a simple LLM call as opposed to a very complicated custom-built model. So that changed completely.

I was told that one of AlphaSense's big advantages going into 2022 and 2023 was that it had been doing AI stuff even if with humans in the loop for a long time, kind of like that labeling and stuff you were talking about. Does that ring true for you?

Totally. They were way better. They had touted a really good search engine that was smart and AI driven. A lot of it—this is more from our biz ops team doing research; I didn't do it myself—seemed like a relatively light set of synonyms and relatively simple search technology. But I will still acknowledge that their search was way better than ours at the time. It made discovery of content way better and more reliable, so you didn't have to get exactly the right term. They knew they were better at that, and they did always invest more into it. Honestly, LLMs changed and leveled the playing field pretty quickly. You don't need much of that to be homegrown anymore; not much of it stayed valuable as the LLM world shifted.

Were you surprised when AlphaSense decided to acquire Tegus? And from your personal opinion, what do you think was the big benefit to AlphaSense?

The transcript library was definitely the biggest benefit. And Canalyst. At the time we got acquired, we had three major assets; it was four companies in all, the three we bought plus Tegus itself. BamSEC had all the SEC filing stuff, and AlphaSense already had that. I don't really know what they're doing with that piece, but BamSEC had a handful of really cool, valuable features, and I imagine they try to leverage those. In terms of a data asset, though, they already have that data. It's all public data inherently.

For us, the value add of BamSEC was some of the smart features they had on top of the data, plus having it already in a database we could link together with ours. So that's modest value, probably, to AlphaSense. Another Tegus acquisition was Fincredible, but that was not so interesting, in the sense that AlphaSense had the data already; it was mostly public data that Fincredible had access to.

But then Canalyst, the models are probably really valuable. I don't know this because I was not in the company at the time. But I'd imagine number one was our transcript library, and then the models that Canalyst built were also probably pretty valuable. I'm pretty sure that AlphaSense uses those. I don't know if they had a competing product by the time they acquired us, but I don't think that they did.

What about operational efficiencies in expert interviewing: How much progress do you feel like you made in cracking the nut of making that traditionally human process super efficient? And do you think that was also a factor in AlphaSense acquiring Tegus?

I'm sure it was a part of it, and I would imagine that we were better. The one that AlphaSense initially bought was Stream. I think we were way farther ahead than Stream; we had a better operation. Stream also had a transcript library, but we were better on both counts: a much bigger and better transcript library, and a better human process.

Honestly, that was one of the strengths of Tom and Mike. It was a mix of technology, but just as much an operational focus on pushing their operations team to be really, really good. They had a lot of metrics on what performance looks like for an ops person driving the matchmaking of an expert to a customer. They were just really good at that; they had the expertise from before, so they built a really good version of it. Moderately tech enabled. Honestly, there was a lot more opportunity to make it more tech enabled, but it was tech enabled enough that I'm sure it was part of the leverage in the acquisition.

I wonder if there were any other gaps in the market that you saw at the time. Where are investment data and research companies underserving end customers?

Out of all the questions you've asked me, this is the one I would take with the biggest grain of salt, because I still feel like an outsider to the category. Different than [other companies], where I was there long enough to feel like an insider. I got to be a fly on the wall for a little bit, but I'm still not as much of an expert as others. That said, I'll share a little more of what we talked about internally in terms of opportunities.

One was surveys. There are tons of survey companies out there, but I don't think any of them do it well in terms of turning surveys into a library that can be reused; so many of the surveys that get done would be useful to others. Right now, think of all the AI tools out there and how much that stuff changes every month. You could do an AI tool survey for every category—legal and tech and sales and customer support—and understand what the best tools are this month versus next month and see how that's shifting. That sort of data: there are great survey companies, but they didn't build a library that made the data reusable across multiple customers.

There's also the calls that happen at conferences where you can basically pay to get in front of the CFO for two hours in a room, that whole same thing.

There was a term for it; I can't remember it now. But there's some concept of just getting the opportunity to speak to leadership. It can't be totally private because, obviously, then it'd be insider-trading kind of stuff. But there's some mechanism where you could get a bunch of your customers to queue up a set of questions, and then you actually do a discovery call with the CFO, and that has to be public. That could be a forum for allowing this to happen in different ways. So some form of Q&A with the CFO or another key leader from a company, made part of the platform—that was another opportunity.

This is maybe a slightly different angle, but the other side we thought about was other channels that could leverage this type of data. The two big ones were investment and consulting. Then, a little bit, we were getting VCs to start doing it, a different type of investor than traditional investors. And private equity was in the mix; they use experts a decent amount, though not as much.

But actually getting product and marketing teams at companies to leverage expert content—that was an opportunity. Honestly, before I joined, I was stunned at the idea that this existed. There were so many conversations internally at Glassdoor where I bet we could've figured out a way to leverage experts—to help us understand our competitors, to help us understand some legal risk we were worried about—all sorts of things where, instead of going for really high-end consultants, just getting an hour of an expert's time would've been really valuable. That's an opportunity. Those were different categories in terms of expanding the market.

You mentioned VCs. I think one issue for a lot of companies in this space is there's this big split in the markets. It's blurrier now, but there's still a big split between private and public markets. How do you think that impacted the business? Did that flow down to the product or engineering level?

Totally. We started purely on public companies, which was the traditional market; private companies were not traditionally part of this kind of conversation, and we thought there was no real reason not to do private. So we started doing private companies, and that was a big push for us for a lot of the time I was there. We had a dedicated strategic team focused on getting experts from private companies and on building relationships, mostly with VCs, to enable that side of the market.

It seemed like it was going pretty well. A lot of the time, the VCs were doing the research both for an investment decision and to help enable their portfolio companies. They would actually hand their research off to portfolio companies: here are some insights that may help you.

What do you think the endgame is for expert interviews? Is it armies of agents interviewing people and taking a lot of that friction out? Does that water down the value or quality? What do you see?

It waters down the value, I think. The information age, the last 20 years, was about making information easier to get. AI just totally flattened that. There's no information edge; any information that's public is super easy to get answers from, right at your fingertips. AI changed that game completely. That actually makes experts more valuable. The only information that isn't flattened is the stuff that's in people's heads, the stuff that isn't written down on the internet. So experts in general have gotten more valuable because the other stuff has gotten so easy and so much less valuable. The value has inherently kind of inverted.

I'm sure it is way better now. Backing up a little bit: four years ago, we were always exploring the idea of seeding content. If we wanted to get into a new market—say, healthcare—we didn't have enough content about healthcare, so it was really hard to sell. The chicken-and-egg problem: you can't sell to healthcare customers until you actually have an inventory of healthcare transcripts, but we couldn't get transcripts because our business model for getting transcripts depended on having a healthcare customer.

So we wanted to seed content, but the quality was the problem. In the little bit we explored—say we picked the top 20 interesting companies in healthcare and did five or ten expert calls on each—the quality of that content was horrible because there was no thesis behind it. We would do a little research to prep for a call and then try to ask some questions, but the questions asked in those calls were not nearly as good as those from someone who has an investment thesis, or is on a consulting project, and needs to understand the company at a deeper level. When there was a thesis or an immediate need, the quality of the questions, and obviously the quality of the data that came out of those calls, was way better.

I'm sure AI can do a better job at coming up with questions, but there's some critical thinking in there that I'm not totally convinced AI will do really well. So having humans involved in the process on both sides, not just the expert side, will still be valuable.

And you came across experiments with automated calls?

We didn't get that far; my experience was at a time when agents weren't smart enough to actually do calls themselves. So we would have humans do them, but they were humans who were not experts in the category. We had a handful of analysts proactively doing a little research and then trying to do a call with a healthcare company or whatever, and the quality of the calls just wasn't as good. Obviously, now you could have an AI voice do it, and it wouldn't be too bad, but the quality of the questions would still maybe not be quite as good. I'm sure that's changing. You could have a prep agent figure out the questions and then an agent actually do the call, and you could give the preparation agent a little strategic thinking to come up with pretty good questions, but I don't know if they'd ever be quite as good as real humans. At least not yet.

Who did you view as your competitors, and did that competitive set change over time in conversations at Tegus?

There are all the alphas. AlphaSense, obviously, is one. AlphaSights, which is the one Mike came from. Those are the two biggest. Mosaic is the one that owned Stream; before Stream got acquired by AlphaSense, that was the company that owned them. Those three are the ones that came up the most. The old guard was GLG.

So for Tegus, expert transcripts ended up being a product or a feature within a broader platform. Do you think it's inevitable that companies in this space just try to add on more products, more features, more coverage, more data, and the game eventually goes towards breadth versus depth or one type of product?

I didn't think this initially, but time really convinced me that breadth matters. There is no reason not to go into the other verticals. Once you have a customer that's willing to spend money with you to get access to investment research, if you have one dataset, they'll spend more to get the second dataset, and they'll spend more to get the third dataset. So I don't think there's any reason that you wouldn't want to go into breadth. It makes sense.

Ultimately, Tom and Mike were really thinking of AlphaSense as the primary competitor they were fighting against and hoping to beat. And I actually think they still had opportunity. The decision to sell was led a little more by investors than by the leadership team of the company.

I wanted to ask about the four acquisitions: Canalyst, BamSEC, Fincredible, and Tegus. Obviously, that's a lot of integrating. Can you walk me through the acquisition headaches and how an organization at the scale of Tegus at the time was able to ingest three companies?

From an engineering perspective, that was actually the most fun. In my previous life, I'd been involved in a handful of acquisitions, but certainly none that fast. Having to do that quickly, and trying to make it work both from a human perspective and from a technology perspective, was one of the biggest challenges—certainly the biggest challenge I had to face while I was there.

We did a good job. I was proud of how we handled it, first in terms of the human process of getting everyone to feel part of one team while still empowering them to run their own ship. But we were also really clear that the goal was to consolidate to one product—there was no benefit to maintaining multiple brands with Canalyst and BamSEC. The Fincredible acquisition wasn't really that significant—they had some data and some technology that we kept, but it was mostly about the team we acquired there. So I'll focus on the other two.

It was a slow process. You just do it gradually—that was the right thing. We tried to figure out the leverage points and find the big opportunities to get value as quickly as we could. A lot of that was really more like partnership-style integration than a fully integrated product: we would allow our customers on each side to benefit from the other side, with easy handoffs from one product to the other, but it was still very loosely integrated from a technology perspective. We knew it was always going to come to the Tegus platform, so we just gradually pulled more of the features from BamSEC into our platform.

Canalyst was a little easier because their dataset was mostly the 400 models, and it was just about linking off to a model—the models are more or less Excel spreadsheets—and we didn't really need to integrate the backend. So that was much easier from a technology perspective, but much harder from a human perspective, because they were much bigger. And they were in Canada, so there was a lot more distance and complexity with that.

I want to ask about the data aspect a little bit more. What about extracting data, actionable data, data that you can go on and structure in some way later from expert interviews themselves? Do you think that's something that's doable with a huge corpus of expert transcripts?

It's just gradually making it more valuable. It is a long process to do really, really well, but you can find opportunities to provide meaningful value early and then keep adding to it. As an example, one of the most valuable but relatively simple things to do was just cross-linking: going through a transcript and doing entity tagging—Amazon had a service for this that we were using at the time. You go through a transcript and say, okay, this sentence says the word Zillow, this sentence says the word Glassdoor, this sentence says the word Indeed. Then you know that Indeed is a company, and you can map the string "Indeed" to Indeed the company, so that anyone looking at Indeed the company can click into that transcript. Just being able to associate transcripts with all the companies reflected in them, and then linking directly into the sections that talk about each company—because maybe Indeed is mentioned for 30 seconds of an hour-long call, and you don't want to have to read through the whole thing. You want to drop into the section that actually has the Indeed-related content.
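[Editor's note: a minimal sketch of the cross-linking idea described above. A hand-rolled alias map stands in for the managed entity-recognition service the speaker mentions; all names here (`build_company_index`, the `ALIASES` table, the sample transcripts) are illustrative, not Tegus's actual code.]

```python
# Sketch: tag company mentions in transcript sentences and build an index so a
# reader can jump straight to the section that mentions a given company.
import re
from collections import defaultdict

# Map surface strings to canonical company IDs. In practice this comes from an
# entity-recognition service plus resolution against a company master list.
ALIASES = {
    "zillow": "company:zillow",
    "glassdoor": "company:glassdoor",
    "indeed": "company:indeed",
}

def split_sentences(text):
    """Very rough sentence splitter, good enough for a sketch."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def build_company_index(transcripts):
    """Return {company_id: [(transcript_id, sentence_no, sentence), ...]}.

    The (transcript_id, sentence_no) pairs are the deep links: instead of
    reading an hour-long call, the reader drops into the tagged sentence.
    """
    index = defaultdict(list)
    for tid, text in transcripts.items():
        for i, sentence in enumerate(split_sentences(text)):
            words = {w.lower().strip(".,") for w in sentence.split()}
            for alias, company_id in ALIASES.items():
                if alias in words:
                    index[company_id].append((tid, i, sentence))
    return index

transcripts = {
    "call-001": "The expert compared hiring funnels. Indeed converts better than Glassdoor for hourly roles.",
}
index = build_company_index(transcripts)
```

Here `index["company:indeed"]` points at sentence 1 of `call-001`, which is the "linking directly into the areas that talk about the company" behavior described above.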

So that's a relatively simple thing to do with natural language processing, even in the pre-LLM world, and tagging it was a really nice win that didn't take that much effort. Summarizing calls is certainly super simple now—summarize a call and pull out the highlights. And not just pulling out the most interesting things from one transcript, but finding themes across a set of transcripts: here are the themes of the most common questions asked across 50 transcripts for a company. All of those are about making it so the investor or customer reading the content doesn't have to plow through hours and hours of it to find the nuggets. They still have access to all the details; you're just trying to make every step a little more efficient for them.
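[Editor's note: a toy sketch of the "themes across transcripts" aggregation described above. In practice an LLM would cluster and label themes; this keyword count over extracted questions just shows the shape of the idea. All names and sample data are illustrative.]

```python
# Sketch: pull the questions out of each transcript and surface the most
# common topic words across the whole set of calls.
import re
from collections import Counter

# Throwaway stopword list so question scaffolding doesn't dominate the counts.
STOPWORDS = {"what", "how", "why", "is", "the", "a", "do", "you",
             "your", "about", "are", "of", "to", "in"}

def extract_questions(transcript):
    """Return the sentences that end in a question mark."""
    sentences = re.split(r"(?<=[.!?])\s+", transcript)
    return [s.strip() for s in sentences if s.strip().endswith("?")]

def question_themes(transcripts, top_n=3):
    """Count topic words across all questions in all transcripts."""
    counts = Counter()
    for text in transcripts:
        for question in extract_questions(text):
            for word in re.findall(r"[a-z']+", question.lower()):
                if word not in STOPWORDS:
                    counts[word] += 1
    return counts.most_common(top_n)

calls = [
    "What is your churn rate? Churn was about five percent.",
    "How do you think about churn? We watch it weekly. What drives pricing?",
]
themes = question_themes(calls)
```

On this tiny sample, "churn" tops the list because it appears in questions from both calls—the same aggregation that would surface the most common questions across 50 transcripts for a company.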

So it's not necessarily identifying, oh, in this call, this expert actually gave the company's retention rate. It's not necessarily hunting for those really high-calorie numbers.

We were starting to do that—I don't know where it ended up, because the Canalyst acquisition happened at the tail end of my time there. That sort of stuff was super valuable for Canalyst because they're trying to build models, and most of that content we wouldn't necessarily want to trust. We were starting to tag some of that stuff and pull it up in the summary sections for transcripts. The good thing is, as long as you link back to where it came from, even if you guess slightly wrong, the user can click into it and validate: oh yeah, that's actually what they said, or no, that's wrong. So you had a little trust. Very different from doing this stuff in Canalyst.

But it's really valuable if you can actually avoid the human cost. The nightmare for the Canalyst business was that they literally had a team of analysts who would listen in on a call while watching a live transcription, writing down the numbers, copying and pasting—literally doing it live during the call—so they could get the updated model out within an hour or two or three or four. Getting AI to do that really well was what we were working towards. That's what Canalyst was working towards, and we had some technology we were going to leverage to help them with it. But doing it really fast—again, it's the same speed-to-market thing, and here it's a little more obvious: as soon as you can build and update your model, you can decide whether to go long, buy more, or sell more before the market adjusts. So you can understand why timing matters there. The timing was so sensitive that AI really mattered—and it's tricky, because you want to depend on AI, but if you get the model wrong, you're going to really piss off your customers.

What about AI and other technologies' ability to sort of bring data and insight platforms closer to the transaction and investment point—or even just embed deeper into investors' workflows?

We talked a lot about whether we should expose our data through APIs. Ultimately, to get really deep into the workflows, that's what has to happen. If customers have to come into our platform to get the data, it's really hard to make it work within their workflow. So we really needed to expose APIs.

The reason we were very hesitant to do that at the time is that it's really hard to control APIs—really hard to limit access. It's a little easier when you can tell it's a human on a computer: you can see them logging in, you can see what IP address they're on. With an API, it's hard to know how many IP addresses you should be getting traffic from for a given business. And people can store the data so easily—they can just take a copy of it. How do you turn off their access if they stop paying but have kept a copy of all your data? The licensing models get trickier too, because subscription products don't work nearly as well when the data goes stale but still has value long after the API is cut off. So pricing is a little tricky. But the more the data gets commoditized, the more it has to happen—because that's the value that's left: enabling the data directly tied into the workflow.

I have no perspective on how close AlphaSense is to doing that, or if they already do some of it. I honestly don't know. But at the time, when I was asked some of those questions, I didn't feel comfortable that we were ready to take that on—because once you open it, it's hard to close the floodgates.

So to your knowledge, they didn't do that? Expose the data to APIs?

They were exploring it. One of the engineering leaders who worked for me was working towards API-related stuff at the time I was there. I don't know the specifics about what data they're willing to expose and how, but it was definitely on the radar, and it doesn't surprise me that they would have to shift there. At the time I was there, we definitely felt like we weren't ready to. But it is a little bit inevitable given the evolution and commoditization of this data.

AI means the Canalyst stuff—the stuff that Canalyst used to do—isn't quite as valuable as it was before, because so much of it can be done with AI. Maybe it's not quite as perfect as their data, but it's still 98 or 99% there, and that's probably plenty. So things will have to move that way for those companies to still compete with each other. Otherwise, it'll just be competition around the cheapest, lowest cost.

Disclaimers

This transcript is for information purposes only and does not constitute advice of any type or trade recommendation and should not form the basis of any investment decision. Sacra accepts no liability for the transcript or for any errors, omissions or inaccuracies in respect of it. The views of the experts expressed in the transcript are those of the experts and they are not endorsed by, nor do they represent the opinion of Sacra. Sacra reserves all copyright, intellectual property rights in the transcript. Any modification, copying, displaying, distributing, transmitting, publishing, licensing, creating derivative works from, or selling any transcript is strictly prohibited.
