FactSet Builds Data Moat

SVP of Technology & Product Strategy at FactSet on driving trust through auditability

Interview
A strategic consultant came in and said FactSet was going to have to start providing its own data.

This was the moment FactSet stopped being only a neutral shelf for other vendors’ feeds and started building the data moat that now anchors its workstation. Once consolidation reduced client choice in estimates and fundamentals, owning the collection process became a way to protect pricing, preserve access to must-have content, and bundle more of the workflow inside one product, from numbers in Excel to news, transcripts, and ESG signals.

  • FactSet’s 2004 annual report says industry consolidation made it strategically necessary to control critical content. The JCF acquisition gave it a proprietary estimates database, an additional option for clients, and a direct foothold in broker consensus data instead of relying only on outside suppliers.
  • By 2012, this had expanded from one dataset to a broader owned content stack. FactSet listed proprietary fundamentals, estimates, ownership, M&A, events, transcripts, filings, and benchmark data, and added StreetAccount so users could filter real-time news by portfolio, sector, and market inside the same workstation.
  • That shift matters competitively because incumbents in financial terminals win by bundling hard-to-replace data into daily workflows. Later additions like Truvalue Labs show the same playbook continuing, adding differentiated ESG content that can be fed into screening, monitoring, and now AI products built on trusted internal data.

The next phase is turning owned datasets into machine-readable, source-linked answers. As financial research moves toward conversational interfaces and embedded APIs, the vendors that own more of the underlying content will have the clearest path to package data, workflow, and AI together in one seat.