Preparing Enterprise Data for LLMs
Leah Weiss, co-founder of Preql, on delivering clean data to LLMs
This marks a shift from data tools built for analysts writing SQL to data systems built so an AI can answer the same business question the same way every time. In the dbt era, teams wanted logic in code that humans could read and review. In the AI era, the bottleneck is turning messy spreadsheets, warehouse tables, and conflicting metric definitions into a machine-legible map that an LLM can use without hallucinating or changing its answers.
-
Preql is not mainly replacing dashboards with chat. It is trying to replace the hidden manual work behind dashboards (the spreadsheet cleanup, the ID formatting fixes, the back-and-forth over what revenue or margin actually means) with agents that clean data and build a semantic model.
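To make the "ID formatting fixes" concrete, here is a minimal sketch of the kind of normalization that work involves. The prefix pattern and canonical form are hypothetical, not Preql's actual rules; the point is that the same entity often arrives keyed three different ways across spreadsheets and must be collapsed to one canonical ID before any semantic layer can sit on top.

```python
import re

def normalize_customer_id(raw: str) -> str:
    """Collapse inconsistent spellings of the same ID into one canonical form.

    Assumes a hypothetical "CUST-" / "cust_" prefix convention and
    zero-padded numeric IDs; real rules would come from the source systems.
    """
    # Strip surrounding whitespace and any case-insensitive "CUST" prefix.
    cleaned = re.sub(r"(?i)^cust[-_ ]?", "", raw.strip())
    # Drop zero-padding so "00482" and "482" match.
    return cleaned.lstrip("0").upper()

# The same customer keyed three different ways in three spreadsheets:
ids = {normalize_customer_id(x) for x in ["CUST-00482", "cust_482", " 482 "]}
print(ids)  # {'482'}
```

The interesting part is not the regex but the claim behind it: until this mapping exists, a join across finance and sales data silently drops rows, and an LLM asked about that customer answers from whichever fragment it happens to see.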
-
That is why Preql differs from dbt and earlier semantic-layer tools. Those products asked data teams to write more code to encode business logic. Preql is aimed at companies where business context lives outside the warehouse, often with finance teams, and has to be pulled into a governed layer before AI tools like Glean or Hebbia can work reliably.
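One way to picture what a governed layer buys you is a sketch like the following. The metric names and SQL are invented for illustration, not drawn from Preql's product; the design point is that the LLM only chooses a metric name, while the definition it maps to is fixed and reviewed, so the same question cannot yield two different calculations.

```python
# Hypothetical governed metric registry: definitions are written once,
# reviewed by the finance team, and never improvised by the model.
METRICS = {
    "net_revenue": (
        "SELECT SUM(amount) - SUM(refunds) FROM orders WHERE status = 'closed'"
    ),
    "gross_margin": (
        "SELECT (SUM(revenue) - SUM(cogs)) / SUM(revenue) FROM finance.monthly"
    ),
}

def resolve(metric_name: str) -> str:
    """Return the governed SQL for a metric, or fail loudly rather than guess."""
    try:
        return METRICS[metric_name]
    except KeyError:
        raise ValueError(
            f"'{metric_name}' is not a governed metric; refusing to improvise SQL"
        )

print(resolve("net_revenue"))
```

Refusing unknown metrics is the deterministic-answers guarantee in miniature: a chatbot wired to this layer can say "I don't have a definition for that" but can never quietly invent a new meaning for revenue.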
-
The commercial implication is that the value shifts from analyst productivity to AI readiness. Preql sells a faster path to a trusted answer engine (months instead of a multi-year warehouse cleanup) and positions itself as infrastructure for Teams bots, BI tools, and internal agents rather than as another standalone analytics surface.
Going forward, the winners in data infrastructure are likely to be the companies that make enterprise data deterministic enough for AI to act on, not just summarize. That pushes the market toward semantic layers, governance, and workflow integrations that can sit between raw company data and the next generation of copilots and autonomous systems.