Shared Semantic Metrics Layer

George Xing, co-founder and CEO of Supergrain, on the future of business intelligence

"You might even define it differently in two different dashboards in the same tool."

The real problem is not bad math; it is that every dashboard author gets a fresh chance to rewrite the business logic. In the usual BI workflow, the tool sends its own SQL to Snowflake or Redshift, and the meaning of something like revenue lives inside each chart's query. A product manager can exclude refunds, finance can include them, and both numbers still look polished because each dashboard runs successfully against the same warehouse data.
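A minimal sketch of that failure mode, using an in-memory SQLite table as a stand-in for the warehouse (the table and column names are illustrative, not from the interview): both "revenue" queries run cleanly against the same data, yet disagree because only one excludes refunds.

```python
import sqlite3

# Stand-in for a warehouse orders table: amounts plus a refund flag.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL, refunded INTEGER)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 100.0, 0), (2, 50.0, 1), (3, 75.0, 0)],
)

# Dashboard A's "revenue": refunded orders excluded.
rev_a = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE refunded = 0"
).fetchone()[0]

# Dashboard B's "revenue": every order counted.
rev_b = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]

print(rev_a, rev_b)  # 175.0 225.0 -- one table, two "revenues"
```

Neither query errors, so nothing in the tooling flags that the two dashboards have quietly diverged.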

  • This happens inside one tool because each dashboard is often built separately, with its own filters, joins, time windows, and aggregation rules. If one team groups by booking date and another by recognition date, the metric name stays the same while the SQL underneath changes.
  • The warehouse alone does not fully fix it because precomputing a metric into a table locks in its grain. A revenue table cut by day and city cannot automatically answer a new question by month and product line, so teams fall back to new ad hoc queries and definitions start drifting again.
  • That is why the modern stack keeps pushing metric definitions into a shared semantic layer near transformation logic. dbt's view is that metrics should be written once and reused across notebooks, dashboards, and catalogs, instead of being redefined in every BI surface or proprietary language like LookML.
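The write-once idea in the last bullet can be sketched in a few lines of Python. This is a toy semantic layer, not dbt's actual spec: the `Metric` class and its fields are illustrative assumptions. The point is that the expression and filter live in one place, and each consuming surface only chooses a grain.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    """A hypothetical shared metric definition: written once, queried at any grain."""
    name: str
    table: str
    expression: str   # the aggregation, e.g. SUM(amount)
    filter: str = "1 = 1"

    def to_sql(self, dimensions: list[str]) -> str:
        # Render the same definition at whatever grain the consumer asks for.
        dims = ", ".join(dimensions)
        return (
            f"SELECT {dims}, {self.expression} AS {self.name} "
            f"FROM {self.table} WHERE {self.filter} GROUP BY {dims}"
        )

# One canonical definition of revenue (refunds excluded, by decision, once).
revenue = Metric("revenue", "orders", "SUM(amount)", "refunded = 0")

# Every dashboard, notebook, or reverse-ETL job reuses it; only the grain differs.
print(revenue.to_sql(["order_month"]))
print(revenue.to_sql(["order_month", "product_line"]))
```

Because the grain is a parameter rather than something baked into a precomputed table, a new month-by-product-line question does not require a new ad hoc query or a new definition.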

The direction of travel is toward a common metrics layer that sits above raw warehouse tables and below every app that consumes data. As analytics spreads beyond dashboards into planning tools, reverse ETL, and operational software, the companies that control shared metric definitions will become the trust layer for the whole data stack.