SVP of Technology & Product Strategy at FactSet on why auditability can matter more than accuracy

Jan-Erik Asplund

Questions

  1. Let's kick off with some context. Over your long tenure at FactSet, how have you seen the investment data platform evolve?
  2. Taking a step back, you mentioned generative AI and the importance of open technology as recent game changers. Can you expand on which technology shifts really stood out to you over the years, either in terms of delivering a competitive edge or fundamentally changing client workflows?
  3. How did client needs evolve alongside these technology shifts? Did you see new demands or priorities emerge as FactSet adapted to open platforms and AI?
  4. Thinking about today's landscape, which key trends in the financial industry do you see shaping the demand for platforms like FactSet?
  5. Given these trends, how do you see emerging needs like ESG data and alternative datasets creating new opportunities for FactSet and platforms like it? And how is FactSet positioning itself to meet these emerging needs?
  6. Let's look ahead now. Over the next 5 to 10 years, which emerging trends do you think will most impact the investment data space?
  7. Do you envision FactSet playing a major role in enabling these AI-driven workflows? And if so, how might its platform need to adapt over time to stay ahead?
  8. Could you speak more about how clients responded specifically to these AI solutions? Were there any notable reactions, either positively or areas of skepticism? For example, with the AI blueprint rollout or beta testing?
  9. With these successes, do you see AI assistants like Mercury, FactSet's conversational UI, becoming standard tools for analysts? And how might adoption vary across different types of financial professionals?
  10. Looking at broader adoption, do financial professionals generally see AI automating these kinds of tasks as positive? Or does skepticism remain regarding accuracy and control?
  11. How does the AI-powered Pitch Creator work? And which specific problems was it designed to solve?
  12. Let's dive into a different area then. For FactSet's differentiation, what truly sets FactSet apart from major competitors like Bloomberg, S&P CapIQ, and others?
  13. Can you share more about how these elements, particularly the trusted relationships and collaboration with clients, influence retention or expansion opportunities with FactSet?
  14. Are there any specific examples or stories that come to mind about how this support model directly influenced client satisfaction or led to expansion of services?
  15. Let's shift gears a bit. FactSet's generative AI assistant Mercury was a major launch. Could you walk me through what motivated its development? What client problems it was built to solve?
  16. What feedback have you received from clients using Mercury? Does any specific use case or example of its impact stand out?
  17. Sure. Let's talk about competition then. How does FactSet approach competing with Bloomberg? And in which areas do you see FactSet outperforming Bloomberg?
  18. Got it. I appreciate you flagging that. Let's go deeper into technology and AI then. You mentioned earlier that Mercury is also available via API. How is FactSet adapting to clients who want to pull more data or AI capabilities into their own internal applications or workflows?
  19. Let's go a bit deeper on Gen AI applications. What were the key messaging goals or go-to-market considerations when launching Mercury or Pitch Creator? Can you walk me through a specific campaign or rollout that worked well or anything you might revisit in hindsight?
  20. We're nearly at time, so just a couple final wrap-up questions. Looking ahead 2 to 3 years, what themes do you expect to dominate AI roadmaps in the financial data space?
  21. What's your vision for FactSet's role in the industry over the coming years? How might its mission evolve as the market changes?
  22. Picking up from Mercury's role, since it's built both as a conversational UI and an API, do you see more client usage emerging inside the FactSet interface or externally as part of client-built workflows? How are teams prioritizing which user paths to expand?
  23. Shifting back to Mercury itself, do you have any insights on how customer behavior, usage patterns, or metrics are tracked to measure its success? What signals have emerged to show that Mercury is creating real value?
  24. Before we wrap, is there anything else you'd add that we haven't covered yet around Mercury, AI integration, or platform evolution that you think would be helpful for founders or investors to understand?
  25. Let's briefly touch on FactSet's AI history first, particularly how machine learning was applied before LLMs. Could you walk me through how FactSet integrated machine learning prior to the rise of LLMs? What were the key applications or successes?
  26. Really appreciate you walking through that. Let's briefly wrap on Mercury itself now. Could you outline a few specific tasks that users are able to complete inside Mercury today that previously may have taken much longer or required proprietary knowledge of the platform?
  27. Is there any final thought you'd like to leave on where AI and Mercury's roadmap head over the next few years?

Interview

Let's kick off with some context. Over your long tenure at FactSet, how have you seen the investment data platform evolve?

I've been at FactSet for a very long time. When FactSet started, it was creating printed reports; this was before the days of downloading and online connections. They had a four-page proprietary report called the Company FactSet, which is where FactSet got its name, and which they actually used to bike-messenger over to clients. From a technical capability standpoint, if you were running a complex report, you'd kick it off at the end of the day and it would run overnight. From a UI and workflow standpoint, most of the applications offered were individual applications that sat on top of individual databases. They might provide reports or charts for pricing, fundamentals, or estimates, but those reports were not necessarily combined.

Obviously, a lot of that changed. Some of the earliest breakthroughs were the ability to download data and, for FactSet in particular, what we call universal screening: storing data in a rotated database so you wouldn't just get every piece of data for one company, but could get the same piece of data across multiple companies. Early on, we also considered FactSet to be a “data Switzerland”: we had multiple sources for the major data types but didn't provide our own. There were multiple fundamental data sources and multiple estimate sources you could get. Our clients could get data from I/B/E/S or from First Call, so there was some competition there. We were not a data collection shop. It wasn't a business that FactSet wanted to be in.
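
To make the rotated-database idea concrete, here is a minimal Python sketch of the difference between per-company (row) storage and the column orientation that universal screening relies on. The companies, fields, and values are invented for illustration; this is not FactSet's actual storage layout.

```python
# Sketch of "rotated" (column-oriented) storage: instead of fetching every
# field for one company, fetch one field across many companies at once.
# Data and field names here are made up for illustration.
row_store = {
    "AAPL": {"pe": 28.0, "mkt_cap": 2900.0},
    "MSFT": {"pe": 33.0, "mkt_cap": 2800.0},
    "F":    {"pe": 7.5,  "mkt_cap": 48.0},
}

# Rotate: field -> {company: value}
col_store = {}
for company, fields in row_store.items():
    for field, value in fields.items():
        col_store.setdefault(field, {})[company] = value

def screen(field, predicate):
    """Universal-screening-style query: one field across all companies."""
    return sorted(c for c, v in col_store[field].items() if predicate(v))

print(screen("pe", lambda pe: pe < 30))  # companies with P/E below 30
```

The point of the rotation is that a screen touches one field's column once, rather than walking every company's full record.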

But there was a lot of consolidation in the industry. Thomson, for example, bought Primark, which owned I/B/E/S. Thomson also bought First Call and merged the two products into Thomson Estimates. All of a sudden, there were fewer alternatives for FactSet clients. They couldn't price-compare between I/B/E/S and First Call for estimates. That obviously put on some pricing pressure. A strategic consultant came in and said FactSet was going to have to start providing its own data.

That's when FactSet started to collect and buy. FactSet bought JCF Group for estimates in 2004 and created a product called FactSet Estimates. They licensed the Worldscope fundamentals database from Thomson and eventually replaced it with FactSet Fundamentals. In 2012, they purchased StreetAccount for news and market summaries. Then, in 2020, they bought TruValue Labs for AI-powered ESG.

They've done about two dozen acquisitions since 2000, maybe half of them content providers. The other big trends over the past couple of decades are the importance of private company data, deep sector data, and ESG data, as well as alternative and niche data. Firms are also becoming more sophisticated and wanting to build their own solutions, hence the importance of open platforms, APIs, content integration, and meeting clients where they are by getting data onto cloud services like Snowflake. FactSet was the first to bring time-series tick data to Snowflake. And most recently, cloud, APIs, open technology, and the emergence of generative AI have been game changers for how financial platforms can service clients.

Taking a step back, you mentioned generative AI and the importance of open technology as recent game changers. Can you expand on which technology shifts really stood out to you over the years, either in terms of delivering a competitive edge or fundamentally changing client workflows?

Obviously, the early changes like the ability to download and screening were critical. But in the most recent years, I would say APIs and AI. On the APIs and open platform side, a lot of the large clients are building their own proprietary solutions and their own machine learning services. All those clients building AI services really need our data: good, clean data and lots of it. They also really like our APIs, our prebuilt reports and charts that they can put into their software, the specific technical functionality that we offer, and obviously the data feeds.

On the AI side, we have everything from smaller features, like a version of NER (named entity recognition) offered as an AI service that is tailored to the financial industry and does concordance with popular symbol types, to AI-powered signals and Mercury, our conversational UI that is also available via API. AI is especially important for financial services because its core capability, quickly summarizing insights from massive amounts of text, addresses a huge challenge for financial services users.
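
To illustrate what concordance with symbol types means in practice, here is a toy sketch, not FactSet's actual NER service: recognized entity strings are normalized and mapped to standard identifiers. The alias table and the identifier values shown are assumptions for the example.

```python
# Toy concordance table: normalized entity name -> standard identifiers.
# The alias list and symbology entries are illustrative only.
concordance = {
    "apple":     {"ticker": "AAPL", "cusip": "037833100"},
    "microsoft": {"ticker": "MSFT", "cusip": "594918104"},
}

ALIASES = {"apple inc": "apple", "apple inc.": "apple", "msft": "microsoft"}

def resolve(entity):
    """Normalize a recognized entity string and look up its identifiers."""
    key = entity.lower().strip()
    key = ALIASES.get(key, key)        # fold known aliases to a canonical name
    return concordance.get(key)        # None if no concordance match

print(resolve("Apple Inc."))
```

A production service would of course handle fuzzy matching and many more identifier types, but the core idea is this mapping from free text mentions to canonical symbols.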

How did client needs evolve alongside these technology shifts? Did you see new demands or priorities emerge as FactSet adapted to open platforms and AI?

Yes. With regard to AI, trust and explainability are more crucial than ever. They were always important: clients always wanted auditable financials and to see where the data was coming from. But it's particularly true when you're talking about AI making predictions or generating answers. Even with something like RAG (retrieval-augmented generation), where you're getting answers from trusted content, you need to be able to link back to those sources so the client knows exactly where an answer is coming from and knows that it's built on trusted metadata, not hallucinations from a nondeterministic LLM whose knowledge set has an expiration date.
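
As a rough sketch of the RAG-with-link-back pattern described here, retrieval plus source citation might look like the following. The corpus, the naive overlap scoring, and the answer assembly are toy stand-ins, not FactSet's implementation.

```python
# Minimal RAG-style sketch: answer only from retrieved text, and return
# the source IDs alongside the answer so every claim can be audited.
corpus = [
    {"id": "filing-001", "text": "revenue grew 12 percent on cloud demand"},
    {"id": "call-017",   "text": "supply chain disruption pressured margins"},
    {"id": "news-204",   "text": "cloud demand remains strong this quarter"},
]

def retrieve(query, k=2):
    """Rank documents by naive word overlap with the query."""
    q = set(query.lower().split())
    scored = [(len(q & set(d["text"].split())), d) for d in corpus]
    scored.sort(key=lambda s: -s[0])          # stable sort keeps corpus order on ties
    return [d for score, d in scored[:k] if score > 0]

def answer(query):
    """Assemble an answer from retrieved text, citing every source used."""
    docs = retrieve(query)
    return {
        "answer": " / ".join(d["text"] for d in docs),
        "sources": [d["id"] for d in docs],   # link-back for auditability
    }

print(answer("cloud demand")["sources"])
```

The key design point is the `sources` field: the answer is never returned without the document IDs that produced it, which is what makes the output auditable rather than a bare generation.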

For AI predictions in particular, for future predictions, it's important for clients to see not just a few stats showing that these predictions are good. They want a long track record, a real history of these predictions, so that they can make bets themselves.

On the API side, obviously security is a big deal. Many clients, especially the larger portfolio managers, want solutions on-prem, not in the cloud. The text processing power of AI is really enabling clients to do more with the same or fewer people, so the impact on hiring and retention at the client site is significant.

A lot of those larger clients who wanted an open platform, so that they could integrate your content and functionality into their own proprietary systems, discovered that the engineering required just wasn't feasible: hiring software engineers, and especially hiring and retaining machine learning engineers, was more difficult than they thought, particularly in the finance industry as opposed to a core technology industry. So they ended up preferring more complete solutions, going from maybe just a content feed to a technology API to embedded reporting components and charts that they could use directly. The biggest clients, though, do have large engineering shops and are able to do more on that end.

Thinking about today's landscape, which key trends in the financial industry do you see shaping the demand for platforms like FactSet?

Well, obviously for a few years we've seen fewer companies going public, or taking longer to go public, so the need for private company investing and data, along with other alternative investment vehicles, is really crucial. Very recently, Trump said that he's considering an executive order that would allow 401(k) plans to invest in private equity. Private company data is obviously not that easy to get; there are no reporting regulations, and there are a number of different ways to collect it. So private company data is still a really high-priority trend, along with the need for alternative data in general: shipping and supply chain data, sentiment, store traffic, credit card data, that kind of thing.

Another really important feature that is a differentiator is taxonomy and industry classification. That's also especially true for private companies. FactSet has a product called Revere that helps with that. ESG continues to be of interest with differing importance in differing regions. It's still pretty important in Europe, maybe dropping in importance in the US. And then deep sector data is another really important need.

Given these trends, how do you see emerging needs like ESG data and alternative datasets creating new opportunities for FactSet and platforms like it? And how is FactSet positioning itself to meet these emerging needs?

Certainly when it comes to things like deep sector data and private market data, there's a lot of opportunity for investment data platforms to take market share from other firms and vice versa. Each has its own specific areas of strength. FactSet is certainly looking to go deeper in deep sector data to compete against S&P CapIQ, which is known for its deep sector data. Orbis and Bureau van Dijk [acquired by Moody’s] are known for their private company data. So those are a couple of areas where FactSet is really trying to grow market share, and competitors are trying to grow their own strengths via a combination of acquisitions and their own data collection. It's also an opportunity for startups and small companies in those niche areas to potentially get acquired by a larger company.

Let's look ahead now. Over the next 5 to 10 years, which emerging trends do you think will most impact the investment data space?

AI is the hottest trend right now, and specifically within AI, the concept of AI agents and digital labor. You can imagine that more firms are going to be using AI agents, so it's going to be even more important for the large data platforms to have ways for those agents to access data: APIs for all their different types of data. You'll also see mid-level firms and retail using AI more. It's a lower-cost way to get at some of the analytics and insights that were previously less accessible to those entry-market firms. And you'll probably also see a lot more robo-investing with AI.

Do you envision FactSet playing a major role in enabling these AI-driven workflows? And if so, how might its platform need to adapt over time to stay ahead?

FactSet has been investing in an open platform for quite some time. In 2019, FactSet announced a large three-year investment plan for both content and technology, large enough that they announced it to the Street, to build a leading content and analytics platform. Some examples are the APIs that FactSet offers clients, which are also used internally.

On the content side, the big ones again were deep sector data, private company data, and ESG. But recently they have also been doing a lot of partnering with technology firms like Snowflake, AWS Data Exchange, and Databricks. Databricks recently announced a financial services partnership with FactSet.

FactSet also made aggressive, early investments in AI. They released an AI blueprint before any competitors released anything serious, and they did a large client roadshow. They had a lot of early products and beta products to show. Clients were consistently telling FactSet that it was ahead of its competitors, providing real solutions and giving them beta access to test products while a lot of the competitors were still just showing PowerPoints.

Could you speak more about how clients responded specifically to these AI solutions? Were there any notable reactions, either positively or areas of skepticism? For example, with the AI blueprint rollout or beta testing?

Clients were really happy. The biggest thing that stood out to me when I was there was how many clients were saying, what you're doing is real; what you're doing is far advanced from what we've seen. The other thing clients really liked was that everything we did with AI had an auditable solution. For example, we built a product that generated portfolio commentary for clients. Every line or every couple of lines would link back to its source: the news story or research report a conclusion was being drawn from, or, if it was a financial figure, the components of the portfolio or benchmark being referenced. That way clients could have trust and faith in what the AI was generating.

With these successes, do you see AI assistants like Mercury, FactSet's conversational UI, becoming standard tools for analysts? And how might adoption vary across different types of financial professionals?

Absolutely. If you think about the amount of structured and unstructured text that analysts have available to review to get an edge on their competitors—it’s unlimited. But their time is not unlimited. So AI makes them a hundred times more efficient. It allows them to consume significantly more information and insights without having to slog through thousands of pages of text and numbers. Analysts are simply not going to be able to do their job without leveraging AI assistants.

When it comes to something like portfolio commentary generation, clients were very happy with it. They liked that they could take the parts they wanted and tweak it to add some of their own flavor. It was also something they could personalize to be more like the types of reports they normally would write themselves, and the generated report was saving portfolio managers a lot of time.

Looking at broader adoption, do financial professionals generally see AI automating these kinds of tasks as positive? Or does skepticism remain regarding accuracy and control?

For sure, there are concerns with accuracy and control in general with AI. Clients absolutely need to be able to trust it, so auditability and source links are really critical. Once that's there and they know they can actually see where the data came from or how the conclusions were drawn, they're a lot more comfortable with it. And then automation is key, especially for bankers and analysts building decks, sourcing data to find relevant insights, and scouring tons of unstructured data. That's really not the best use of their hours when that kind of thing can be automated. So firms and individual users certainly appreciate the assistance from AI, as long as they can trust it.

How does the AI-powered Pitch Creator work? And which specific problems was it designed to solve?

Pitch Creator is a product for bankers that automates deal presentations. It builds on Mercury to automate things like tombstone generation and slide creation and customization, including branding and refreshing slides. The features include re-slides, template retrieval, model analysis, and merging research into pitch decks. It really dramatically cuts down manual effort for bankers.

Let's dive into a different area then. For FactSet's differentiation, what truly sets FactSet apart from major competitors like Bloomberg, S&P CapIQ, and others?

Client trust and collaboration: very good relationships with our clients. FactSet has a stellar technology team, and it's got really high-quality, trusted data with a long history. The investments FactSet has made in content and technology are really crucial, particularly with things like AI and how quickly it was able to leverage large language models and generative AI.

Can you share more about how these elements, particularly the trusted relationships and collaboration with clients, influence retention or expansion opportunities with FactSet?

FactSet really values client relationships and prefers to drive client retention through delivery of value, not contractual obligation. Client support is something FactSet places a lot of importance on: making sure clients get the best help and value from FactSet and leverage the data as much as possible.

From an onboarding and implementation perspective, there are dedicated implementation services teams that can handle especially complex deployments at large clients. In one case, I did a review of an employee, and apparently someone at the client thought the employee worked there because they were on-site so often. So these teams really help ensure seamless setup and configuration tailored to client workflows.

Some clients also get 24/7 access to client services and product specialists. They get assigned account managers and consultants to help them customize the product for them, tailor their usage, resolve issues, and drive adoption. FactSet has a big training and enablement program. There's something called FactSet Academy. There's online learning. There are on-site training sessions that they'll do, and just a lot of very comprehensive and easily searchable learning materials that are also available via FactSet Mercury, the conversational chatbot, that help you stay up to speed.

Then there's really solid technical support and issue tracking: a very robust support infrastructure where clients can call, consult, or file tickets directly if that's their preference. There are product specialists who will talk to them. There are teams dedicated to things like helping clients blueprint their existing IT processes and see how they can save money, whether or not they're planning to use FactSet, a service that is free. All these things really help clients get up and running, stay satisfied, and make FactSet a sticky service for them.

Are there any specific examples or stories that come to mind about how this support model directly influenced client satisfaction or led to expansion of services?

The one example that I had was the client who thought that a FactSet support person worked there at the company because they were there so often. It's safe to say that clients like that consider FactSet support people a part of their own team.

Let's shift gears a bit. FactSet's generative AI assistant Mercury was a major launch. Could you walk me through what motivated its development? What client problems it was built to solve?

Mercury was certainly motivated by the release of ChatGPT and the incredible opportunity we saw there. FactSet is such a large product, with hundreds of databases and thousands of different reports, that it can be difficult for clients to discover all that it does: all the features it has and all the different reports and datasets that are available. In some cases, they may be paying for and using some other third party when they already have access to that data through FactSet. So discoverability was really important for us.

Finding exact answers to specific financial questions can also be difficult. You can find almost anything on FactSet, but knowing which report to go to, or how to construct a formula to return a value, can be pretty cumbersome. We didn't want clients to have to become experts in FactSet query language.

So when LLMs came out, and as they got better and better, they really enabled clients to do all of this discovery, finding answers and understanding what was out there, with natural language and without having to memorize commands. It was really a no-brainer to provide users with a conversational interface for interacting with the FactSet workstation.

What feedback have you received from clients using Mercury? Does any specific use case or example of its impact stand out?

I don't have one that I can share.

Sure. Let's talk about competition then. How does FactSet approach competing with Bloomberg? And in which areas do you see FactSet outperforming Bloomberg?

I'd like to skip the questions on the competitive landscape for internal reasons. But I have a lot more to say about technology and AI integration if you've got questions about that.

Got it. I appreciate you flagging that. Let's go deeper into technology and AI then. You mentioned earlier that Mercury is also available via API. How is FactSet adapting to clients who want to pull more data or AI capabilities into their own internal applications or workflows?

FactSet is really supportive there. We talked about how FactSet made a large three-year investment to become more of an open platform, focusing on things like AI and cloud. In particular, FactSet is really supportive in meeting clients where they are. We talked about the real-time data available in Snowflake. Clients who want to get data from us have many different options. There's an Open FactSet product that clients can use. There's a product called Cornerstone [scheduling and calculation engine]. There are various ways of getting data feeds, there's the recent AWS Data Exchange integration, and of course there are all the various APIs we have available. So it's really about meeting clients where they want to be in terms of pulling data into their own applications. The Open FactSet Marketplace offers third-party data and technology as well as FactSet technologies.

Let's go a bit deeper on Gen AI applications. What were the key messaging goals or go-to-market considerations when launching Mercury or Pitch Creator? Can you walk me through a specific campaign or rollout that worked well or anything you might revisit in hindsight?

I'm not sure I have a lot of detail about that. I know that FactSet did more press releases for AI products than it typically did; it didn't typically do press releases for products, but it did for AI, and those went well. They have a FactSet Insights blog where they published a lot of general AI thought leadership articles that went really well and drove increased traffic to FactSet for AI-related topics, even though they weren't necessarily about the AI products specifically. FactSet Insights typically does not talk about products, so these were more general AI thought leadership pieces, and they were well received. FactSet also did some videos and marketing about our AI leadership on platforms like LinkedIn.

We're nearly at time, so just a couple final wrap-up questions. Looking ahead 2 to 3 years, what themes do you expect to dominate AI roadmaps in the financial data space?

I would expect the prevalence of AI agents to be a big deal, along with the ability of financial data platforms to serve up data to those agents and to provide transparency: that kind of click-back feature for auditability, where you can find out exactly where the data is coming from.

What's your vision for FactSet's role in the industry over the coming years? How might its mission evolve as the market changes?

I'm happy to do another 15 minutes after that if you want to talk more about Mercury and other AI and technology integration. As far as FactSet's mission evolving, I certainly see a continuation of the open platform, with APIs becoming more important as AI agents become more accepted and more leveraged in the financial marketplace.

FactSet will continue with its three core pillars. The first is expanding its data offerings, especially in deep sector, private markets, and ESG data. Second, I expect to see further client workflow integrations across firm types. On the institutional buy side: portfolio performance, analytics, and risk management. Wealth is a very big deal, with a growing advisor desktop, growing prospecting (there's a lot of opportunity for AI in prospecting as well), and digital reporting. Then banking is the other main client firm type, with big opportunities for advanced automation in research, financial modeling, and pitch creation. We talked a little bit about Pitch Creator.

The third pillar would be AI and innovation. Obviously, there are some very large products there: portfolio commentary, transcript intelligence for analyzing earnings calls, and requesting FactSet data with natural language through things like Mercury. As for the Mercury conversational UI, we can expect to see it released throughout all the core products and in workflow-specific uses across the client base.

Picking up from Mercury's role, since it's built both as a conversational UI and an API, do you see more client usage emerging inside the FactSet interface or externally as part of client-built workflows? How are teams prioritizing which user paths to expand?

There's a lot of people working on Mercury, so both are going to happen. For example, if there are teams that are focused on a portfolio product, those people are going to be doing what they were doing anyway, plus figuring out the best ways to integrate the Mercury capability into the portfolio product. Meanwhile, there's a completely separate team that's working on the core functionality of Mercury. So that's going to keep going. Additional teams are packaging and selling the conversational UI API.

My guess would be full speed ahead on all of that. Clients are really eager to leverage AI; their leaders are saying, we need to be using AI more. And rather than trying to build from scratch, being able to take something that already works really well, is heavily tested and vetted, and leverages trusted data, and just plug it into their existing ecosystem, is extremely valuable to them.

Shifting back to Mercury itself, do you have any insights on how customer behavior, usage patterns, or metrics are tracked to measure its success? What signals have emerged to show that Mercury is creating real value?

I don't have recent data on that. But I can say that one of the things Mercury provided was client feedback: for every single answer, users can give a thumbs up or thumbs down. So I know that accuracy is being tracked, and that those things are being looked at and improved over time. But I don't have statistics on usage data.
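
A minimal sketch of the kind of per-answer thumbs-up/down tracking described here, with invented field names rather than Mercury's actual telemetry schema:

```python
# Toy feedback store: record thumbs-up/down per answer and compute an
# aggregate approval rate as a rough proxy for answer accuracy.
from collections import defaultdict

votes = defaultdict(lambda: {"up": 0, "down": 0})

def record_feedback(answer_id, thumbs_up):
    """Log one user vote against a specific answer."""
    votes[answer_id]["up" if thumbs_up else "down"] += 1

def approval_rate():
    """Share of all votes that were thumbs up; None if no votes yet."""
    up = sum(v["up"] for v in votes.values())
    total = up + sum(v["down"] for v in votes.values())
    return up / total if total else None

record_feedback("ans-1", True)
record_feedback("ans-1", True)
record_feedback("ans-2", False)
print(approval_rate())
```

Tracking per answer (rather than only in aggregate) is what lets the team drill into which question types draw the most negative votes and improve those first.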

Before we wrap, is there anything else you'd add that we haven't covered yet around Mercury, AI integration, or platform evolution that you think would be helpful for founders or investors to understand?

We could, if you want, talk about some of the AI history and how machine learning was used prior to the big LLMs, or we can dive into some of the different capabilities that Mercury provides.

Let's briefly touch on FactSet's AI history first, particularly how machine learning was applied before LLMs. Could you walk me through how FactSet integrated machine learning prior to the rise of LLMs? What were the key applications or successes?

The very earliest use of machine learning and natural language processing at FactSet was mostly in our internal content collection operations. You can imagine the unstructured data: research reports, news, filings. We were pulling data out of tables and leveraging AI to do that more efficiently, with humans in the loop making sure all that data was being pulled out correctly. We obviously have a very large human data collection team, but we were augmenting them so that they could do more. That was the earliest use, about 15 years ago; FactSet was using natural language processing for that.

The big things in the pre-LLM era were predictive insights and signals. FactSet had a number of signals that were not necessarily AI-based, but it also created a set of predictive signals using machine learning and won awards for its signal offerings. On the AI side, these were things like shareholder activism vulnerability, the likelihood of secondary offerings, the likelihood of corporate bond issuance, and high-impact transcripts, a service that looked at the language in earnings calls to predict large price movements in individual stocks. We also have something that predicts inclusion in or removal from the S&P 500, and tools that look at text and transcripts to pull out significant risks or big changes from one quarter to the next so that analysts could focus on specific transcripts.

So signals and predictive insights were one of the big things. Another was intelligent reports and search. That work continues, with the newest LLMs just getting better at comparing historical filings and interrogating transcripts, not just a single transcript but across an industry. Who in a specific industry is talking about supply chain disruption, for example?

We also used AI to enrich data and to put together or generate alternative data: enriching patent data, shipping data, private company data. There were various open services and APIs, like the NER service I mentioned and various text analytics services using natural language processing that were available to clients via APIs. Things like document relevancy were also leveraged in our content collection: of all the documents produced on companies, who should be reading this first? We need to prioritize which documents to process, and that prioritization is obviously relevant to our clients as well.

There are tons of different data services for private companies: generating company descriptions, generating industry classifications, estimating revenue, finding news stories and extracting info from them. And then there are the more recent things we talked about: Mercury, portfolio commentary. There are some personalization services as well. You've got hundreds of possible signals and alerts available, and these services figure out which ones will be most interesting to you specifically, based on the reports, companies, and industries you already look at.
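The personalization idea here, ranking hundreds of available signals by how closely they match what a user already follows, can be sketched very simply. This is an illustrative toy, not FactSet's method; the entity tags and scoring are assumptions:

```python
def rank_alerts(user_views: set[str], alerts: list[dict]) -> list[dict]:
    """Rank available alerts by overlap between their tagged entities and
    the companies/industries the user already looks at (a crude relevance proxy)."""
    return sorted(
        alerts,
        key=lambda alert: len(user_views & set(alert["entities"])),
        reverse=True,
    )


alerts = [
    {"id": "sp500-add", "entities": ["AAPL", "indices"]},
    {"id": "bond-issuance", "entities": ["TSLA", "credit"]},
]
viewed = {"TSLA", "credit", "autos"}
print([a["id"] for a in rank_alerts(viewed, alerts)])  # bond-issuance ranks first
```

A production recommender would of course use richer behavioral signals than raw entity overlap, but the core shape, score each candidate alert against the user's observed interests and surface the top matches, is the same.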

Really appreciate you walking through that. Let's briefly wrap on Mercury itself now. Could you outline a few specific tasks that users are able to complete inside Mercury today that previously may have taken much longer or required proprietary knowledge of the platform?

Absolutely. The first thing it enabled was natural language company and market research: pulling together financial fundamentals, pricing, regulatory data, SEC filings, news and more, and synthesizing all that information into source-linked research. In addition to simply asking questions and getting answers back with their sources, it can perform tasks like doing a SWOT analysis from a 10-K, so you don't necessarily have to read all 50 pages of a research report or a 10-K yourself. It can also suggest next best options.

Beyond that question-and-answer interaction, it has a product called Chart Creator, which takes natural language prompts and generates pitch-ready charts. You can specify parameters, and it creates the chart, brands it as you want, and inserts it directly into your PowerPoint.

We talked a little bit about Pitch Creator for bankers already. Another really big feature is the transcripts system, which adds intelligence to earnings calls. It has an interactive chat over transcripts, and by default it will do things like summarize key themes or the updated guidance, surface insights from the Q&A section, and show visualizations and sentiment across the transcript. In addition to these precanned queries, it also enables ad hoc natural language querying, so you can ask things like: what are the three main pain points they talk about? Are they talking about supply chain? Are they talking about the war in Ukraine?

Is there any final thought you'd like to leave on where AI and Mercury's roadmap head over the next few years?

There are some other things that were released that we touched on, like portfolio commentary and research management. You're going to see more as users get more comfortable with those kinds of features. Mercury also integrates with research workflows like IRN [FactSet's Internal Research Notes], so buy-side analysts and PMs can take in data, share it, and search their own internal insights.

The embedding and extending of the conversational API is really going to be a big deal: exposing those core capabilities to clients directly. Think about RAG, retrieval-augmented generation, not only with trusted FactSet data but also with the ability to access clients' own data without leaving their network, and still having those auditable, source-linked answers, whether it's data, file retrieval, chart retrieval, etc. Clients can white-label and integrate those conversational experiences into their own tech stacks. Those are some of the things you'll be seeing.
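To make the RAG pattern concrete, here is a toy sketch of the retrieval side with source-linked context. A keyword-overlap ranker stands in for a real vector search, the LLM call itself is omitted, and all document names are illustrative:

```python
def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by keyword overlap with the query; return (source_id, text) pairs."""
    terms = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda item: len(terms & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_context(query: str, corpus: dict[str, str]) -> str:
    """Assemble retrieved passages into a prompt context. Each passage keeps its
    source id, which is what lets the generated answer cite auditable sources."""
    hits = retrieve(query, corpus)
    return "\n".join(f"[{sid}] {text}" for sid, text in hits)


corpus = {
    "10K-2023": "supply chain disruption raised costs in the quarter",
    "call-Q2": "guidance updated after strong quarter",
    "news-17": "new product launch announced",
}
print(build_context("who mentioned supply chain disruption", corpus))
```

The point of the pattern is exactly what's described above: the model only answers from retrieved, identified passages, so every claim in the answer can be traced back to a source id, whether that corpus is FactSet data or the client's own documents inside their network.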

Disclaimers

This transcript is for information purposes only and does not constitute advice of any type or trade recommendation and should not form the basis of any investment decision. Sacra accepts no liability for the transcript or for any errors, omissions or inaccuracies in respect of it. The views of the experts expressed in the transcript are those of the experts and they are not endorsed by, nor do they represent the opinion of Sacra. Sacra reserves all copyright, intellectual property rights in the transcript. Any modification, copying, displaying, distributing, transmitting, publishing, licensing, creating derivative works from, or selling any transcript is strictly prohibited.
