Preql: enabling trusted data for LLMs
Leah Weiss, co-founder of Preql, on delivering clean data to LLMs
This is a wedge into the AI budget, not just the data-cleanup budget. The point is that most companies already have a warehouse project that never fully finishes, plus critical business logic trapped across spreadsheets, dashboards, and team-specific definitions. Preql is selling a way to turn that messy middle into something an LLM can safely use now, so the ROI shifts from saving analyst hours to unlocking AI products, copilots, and faster operating decisions years earlier.
-
In practice, the blocker is not raw storage; it is trust. A simple question like "how is revenue trending?" can have multiple valid formulas across teams and tools. Preql's pitch is to clean source data, map definitions, and route every AI query through that governed layer so results stay repeatable.
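To make the pattern concrete, here is a minimal sketch of what "routing every query through a governed layer" can look like. All names here are hypothetical illustrations, not Preql's actual API: a registry holds one canonical definition per metric, and any AI-generated question must resolve through it rather than improvising its own formula.

```python
# Hypothetical governed metric layer (illustrative only, not Preql's API):
# each business metric has exactly one canonical definition, owned by a team.
CANONICAL_METRICS = {
    "revenue": {
        "sql": "SUM(amount) FILTER (WHERE status = 'completed')",
        "table": "orders",
        "owner": "finance",
    },
    "active_users": {
        "sql": "COUNT(DISTINCT user_id)",
        "table": "events",
        "owner": "product",
    },
}

def resolve_metric(name: str) -> str:
    """Return the single governed query for a metric, or fail loudly.

    Routing every LLM query through this lookup keeps answers repeatable:
    the model cannot silently invent its own revenue formula.
    """
    try:
        metric = CANONICAL_METRICS[name]
    except KeyError:
        # Refusing to guess is the point: an undefined metric is a
        # governance gap, not something the AI should paper over.
        raise ValueError(f"'{name}' has no governed definition")
    return f"SELECT {metric['sql']} AS {name} FROM {metric['table']}"

print(resolve_metric("revenue"))
# SELECT SUM(amount) FILTER (WHERE status = 'completed') AS revenue FROM orders
```

The design choice worth noticing is the failure mode: an unknown metric raises an error instead of letting the model fall back to an ad-hoc formula, which is how two teams end up with two "valid" revenue numbers in the first place.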
-
That is why semantic layers matter again. Earlier tools asked data teams to write and maintain extra code for business definitions, but the payoff was blurry because dashboards still lived in columns and rows. LLMs make the missing layer obvious, because bad definitions show up immediately as wrong answers.
-
The closest comparables sit above and beside this layer. Glean and Hebbia help users search and synthesize company information, but they generally assume the underlying data is already usable. Preql is trying to become the preparation and governance layer those applications can sit on top of, especially for finance and other high-trust workflows.
The next step is moving from answering questions to taking action. If companies can standardize metrics and clean operational data in months instead of waiting for a perfect warehouse, AI moves from a chat demo to a system that can trigger workflows in tools like ServiceNow or UiPath. That makes the trusted data layer a control point for the agentic enterprise.