Data Engineers Turn dbt into Infrastructure
Tristan Handy, CEO of dbt Labs, on dbt’s multi-cloud tailwinds
Data engineers adopting dbt turned it from a helpful analyst tool into infrastructure that could sit inside the company's production data workflows. That shift happened because dbt fit the way engineers already worked: in code, at the command line, and alongside orchestration tools like Airflow, while still letting teams keep business logic in SQL instead of rewriting it as custom pipeline code. Once engineers blessed dbt, it became easier for heads of data to standardize on it across larger teams.
The original wedge was the analytics engineer: a SQL-heavy analyst who wanted to build tables without waiting on a data engineer. dbt gave that person testing, version control, and documentation, which made analyst-written transformations safe enough for production use.
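Those guardrails live alongside the SQL itself. As a minimal sketch, here is a hypothetical dbt `schema.yml` (the model and column names are invented for illustration) showing how tests and documentation attach to an analyst-written model:

```yaml
# models/schema.yml -- hypothetical example; model and column names are invented
version: 2

models:
  - name: fct_orders            # built from models/fct_orders.sql, written in plain SQL
    description: "One row per completed order."   # rendered into dbt's generated docs
    columns:
      - name: order_id
        description: "Primary key."
        tests:                  # dbt's built-in schema tests
          - unique
          - not_null
      - name: customer_id
        tests:
          - relationships:      # referential-integrity check against another model
              to: ref('dim_customers')
              field: customer_id
```

Because files like this are plain text checked into git, the transformation logic gets code review and history for free.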
Data engineers came next because dbt slotted into existing engineering workflows. It is configurable, command-line friendly, and works with orchestration tools, so engineers could treat dbt projects like software projects instead of babysitting fragile GUI-based ETL tools.
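The "software project" treatment can be made concrete with continuous integration. A hypothetical CI job (GitHub Actions here; the adapter choice, target name, and secret names are all assumptions, not anything from the source) that builds and tests a dbt Core project on every pull request might look like:

```yaml
# .github/workflows/dbt-ci.yml -- hypothetical sketch; adapter and secrets are invented
name: dbt CI
on: [pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install dbt-core dbt-snowflake   # adapter choice is an assumption
      - run: dbt deps                             # install packages from packages.yml
      - run: dbt build --target ci                # run models and their tests together
        env:
          SNOWFLAKE_PASSWORD: ${{ secrets.SNOWFLAKE_PASSWORD }}
```

The same `dbt` CLI commands run locally, in CI, and under an orchestrator like Airflow, which is what lets engineers manage dbt projects the way they manage any other codebase.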
That engineer adoption matters commercially because larger companies often buy through platform leaders, not individual analysts. dbt typically lands with one team, then expands across many teams and even across multiple warehouses inside the same enterprise, which makes neutrality a selling point against Snowflake and Databricks.
The next phase is broadening from engineer-approved infrastructure into the default workflow for everyone who shapes data. If dbt keeps wrapping software-engineering guardrails in interfaces that feel natural to analysts, it can own the shared layer where companies define tables, metrics, and data logic across clouds and across teams.