Semantic Layers as AI Rules Engine

Leah Weiss, co-founder of Preql, on delivering clean data to LLMs

Interview
"We stopped talking about semantic layers for a long time, and now finance people and business people are using the term before we do."

This marks a buyer shift: semantic layers have moved from a data-team nice-to-have into a control point for AI-driven reporting. The old pitch was to write metric logic once for dashboards. The new pitch is to stop chat interfaces and copilots from giving different answers to the same finance question. That is why finance teams now care first: they own reporting that has to be repeatable, governed, and trusted.

  • Earlier semantic layer products were aimed at analytics engineers, who had to write and maintain extra logic on top of already heavy dbt and pipeline work. The payoff felt indirect because business users still mostly lived in BI tools built around tables, columns, and ad hoc queries.
  • The LLM wave changed the economics. Once executives started asking for natural language access to company data, inconsistent metric definitions became an obvious failure point. Preql is built around cleaning messy source data, then building a business map of definitions so downstream chat, BI, and internal agents hit a governed intermediary layer.
  • The rest of the stack is moving the same way. dbt argued that metrics should live in a reusable layer outside any one BI tool, and cloud platforms now expose metric or semantic models directly for AI and dashboard products. Databricks says metric views create centralized, governed business metrics that feed dashboards and Genie, and that semantic metadata improves LLM accuracy.
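The governed-intermediary idea above can be sketched in a few lines. This is a hypothetical illustration, not Preql's or dbt's actual API: `Metric` and `SemanticLayer` are invented names, and the point is simply that every consumer resolves a metric through one registered definition, so chat and BI cannot drift apart.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    name: str
    sql: str    # the single approved computation
    owner: str  # the team that governs the definition

class SemanticLayer:
    """Toy registry: one approved definition per metric name."""

    def __init__(self):
        self._metrics = {}

    def register(self, metric: Metric):
        # Refuse silent redefinition; the governed copy must be edited, not shadowed.
        if metric.name in self._metrics:
            raise ValueError(f"{metric.name} is already defined; edit the existing definition")
        self._metrics[metric.name] = metric

    def resolve(self, name: str) -> str:
        # Every consumer (dashboard, chat interface, export) calls this,
        # so they all emit the identical approved query.
        return self._metrics[name].sql

layer = SemanticLayer()
layer.register(Metric(
    name="net_revenue",
    sql="SELECT SUM(amount) - SUM(refunds) FROM orders",
    owner="finance",
))

# A BI tool and a copilot resolving the same metric get the same SQL.
assert layer.resolve("net_revenue") == layer.resolve("net_revenue")
```

Real semantic layers add joins, dimensions, and access control on top of this, but the core design choice is the same: definitions live in one governed place, and downstream tools are clients of it rather than owners of their own copies.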

The next step is that semantic layers stop being discussed as analytics plumbing and start acting as the rules engine for AI at work. The companies that win will be the ones that make a revenue, margin, or headcount question return the same approved answer in Excel, BI, chat, and eventually automated workflows.