FactSet Auditable AI Workflows

SVP of Technology & Product Strategy at FactSet on driving trust through auditability

Interview
Clients consistently told FactSet that it was ahead of its competitors

The strongest signal in those early reactions was not excitement about AI in the abstract, but confidence that FactSet had moved from demo mode into usable workflow software. Clients were seeing working tools, beta access, and cloud delivery options tied to real financial data, which mattered more than slideware because analysts and bankers need answers they can trace back to sources and plug into existing research and modeling workflows.

  • FactSet turned the rollout into hands-on exposure. Mercury launched through the FactSet Explorer preview program under its AI Blueprint, giving users a live chat interface over financial and regulatory data rather than a future roadmap. That made the product feel testable and operational early.
  • Positive response was strongest where AI saved concrete work. Mercury, Conversational API, portfolio commentary, research management, and pitchbook creation all reduced manual searching and drafting inside tools clients already used, including Microsoft Office and internal research systems.
  • The main skepticism was accuracy in high-stakes finance, which is why auditability became the key design choice. FactSet emphasized source-linked answers and the ability to keep client data inside the client's own network, which fits how firms evaluate risk before widening adoption.

This points to AI becoming another layer of the FactSet workstation and data stack, not a separate experiment. As cloud partnerships with Snowflake and Databricks expand and conversational tools are embedded directly into client systems, the advantage shifts toward vendors that can combine trusted data, workflow fit, and auditable output in a single product surface.