Data stack rebundled around trusted AI
Leah Weiss, co-founder of Preql, on delivering clean data to LLMs
The real problem was not buying data tools; it was buying a permanent staffing and coordination burden that most companies could not turn into measurable business value. A modern stack usually meant warehouses, connectors, transformation code, BI, and specialists to keep definitions aligned across dashboards and spreadsheets. That could make sense at very large companies, but for smaller teams the spend often outran the value of the decisions, products, or revenue gains the system actually improved.
A typical failed ROI case was easy to describe: a company paid for Snowflake, Fivetran or Airbyte, dbt, and a BI layer, then still fell back on Excel for cleanup and metric reconciliation. The software bill was only the start, because the hard part was the ongoing human maintenance.
Earlier semantic layer products also struggled because they asked data teams to write even more code for business definitions, while most BI tools still exposed data as rows and columns. The work to encode metrics was real, but the payoff to business users was weak and delayed.
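To make that cost concrete, here is a minimal, product-agnostic sketch of what "encoding a metric as code" tends to look like; the Metric class, table, and column names are invented for illustration and are not drawn from any particular semantic layer.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    """One agreed-upon business definition, encoded once and reused
    to generate every downstream query."""
    name: str
    table: str
    expression: str               # aggregation over warehouse columns
    filters: list[str] = field(default_factory=list)

    def to_sql(self, group_by: str) -> str:
        where = f"WHERE {' AND '.join(self.filters)} " if self.filters else ""
        return (
            f"SELECT {group_by}, {self.expression} AS {self.name} "
            f"FROM {self.table} {where}GROUP BY {group_by}"
        )

# Hypothetical metric: net revenue excludes refunds and internal test orders.
net_revenue = Metric(
    name="net_revenue",
    table="analytics.orders",
    expression="SUM(amount - COALESCE(refund_amount, 0))",
    filters=["is_test = FALSE"],
)

print(net_revenue.to_sql(group_by="order_month"))
```

Even in this toy form, the asymmetry is visible: the data team does the defining and maintaining up front, while the business user only benefits once some downstream tool actually consumes the definition.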
AI changes the ROI math because chat and copilot products like Glean and Hebbia create immediate demand for trusted answers, and they can deliver those answers only if the underlying data is cleaned and definitions are standardized first. That makes the prep layer easier to justify than another dashboard project.
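A toy example of the definition problem that has to be settled before a copilot can be trusted; the rows, field names, and numbers below are made up for illustration.

```python
# Toy order rows; field names and values are invented for illustration.
orders = [
    {"amount": 120.0, "refund": 20.0, "is_test": False},
    {"amount": 80.0,  "refund": 0.0,  "is_test": False},
    {"amount": 50.0,  "refund": 0.0,  "is_test": True},   # internal test order
]

# Two ad-hoc readings of "revenue" over the same rows disagree.
gross_revenue = sum(o["amount"] for o in orders)
net_revenue = sum(o["amount"] - o["refund"] for o in orders if not o["is_test"])

print(gross_revenue, net_revenue)  # 250.0 vs 180.0: the copilot needs one canonical answer
```

A dashboard can paper over that gap with a footnote; a chat interface that answers "what was revenue last month?" cannot, which is why standardizing definitions becomes the prerequisite rather than an afterthought.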
This is heading toward a rebundling of the data stack around trusted AI outputs instead of manual dashboard assembly. The winners will be products that collapse cleanup, semantic modeling, and governance into one faster workflow, so companies can move from years of plumbing work to months of usable AI deployment.