dbt Empowers Analysts Through Open Core
Julia Schottenstein, Product Manager at dbt Labs, on the business model of open source
dbt turned data transformation from a gated engineering task into a SQL workflow that business-facing analysts could safely ship themselves. The key move was not removing software discipline but packaging it into a simpler path: an analytics engineer writes SQL models, tests them, reviews changes in Git, and then materializes trusted tables in the warehouse without waiting on a separate data engineering team.
Before dbt, analysts often depended on data engineers or heavyweight ETL tools like Informatica and Talend to get production tables built. dbt gave SQL-first users version control, testing, documentation, and dependency management in one workflow, letting the person closest to revenue, marketing, or product logic encode that logic directly.
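As a concrete sketch (the model and column names here are hypothetical), a dbt model is just a SQL SELECT that declares its upstream dependencies with `ref()`, while tests and documentation live in YAML beside it:

```sql
-- models/revenue_by_customer.sql  (hypothetical example model)
-- ref() tells dbt the dependency graph, so stg_payments is built first
select
    customer_id,
    sum(amount) as total_revenue
from {{ ref('stg_payments') }}
group by customer_id
```

```yaml
# models/schema.yml -- tests and docs for the model above
version: 2
models:
  - name: revenue_by_customer
    description: "Total revenue per customer, derived from payments"
    columns:
      - name: customer_id
        tests:
          - not_null
          - unique
```

`dbt run` materializes the table in the warehouse and `dbt test` checks the declared constraints, so the whole loop fits naturally into a Git-based review flow.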
That workflow created the analytics engineer role: someone who sits between the analyst and the data engineer, knows the business questions, and can also manage production-grade models. dbt won by serving that middle persona, not by replacing hardcore pipeline tools like Airflow or ingestion tools like Fivetran.
The business model follows the same split. dbt Core stays open as the shared language for defining transformations, while dbt Cloud sells the surrounding workflow: an IDE, CI, a scheduler, hosted docs, governance, and collaboration. It is the Git and GitLab pattern applied to warehouse transformation.
The next step is for more business logic to move into dbt, from tables to metrics and semantic definitions, which makes dbt harder to displace. As warehouses become the system of record for more operational and customer-facing use cases, the control point shifts toward the layer that defines trusted logic once and serves it everywhere.
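In dbt's semantic layer, for example, a metric can be declared once in YAML and then served to any downstream tool. A rough sketch (the exact schema varies across dbt versions, and the measure name is a hypothetical assumption):

```yaml
# models/metrics.yml -- sketch of a semantic-layer metric definition
metrics:
  - name: total_revenue
    label: "Total Revenue"
    type: simple
    type_params:
      measure: revenue_amount   # hypothetical measure defined elsewhere
```

Every BI tool or application that queries through dbt then resolves `total_revenue` to the same logic, which is the "define once, serve everywhere" control point the paragraph above describes.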