Shared Policy Engine for Data Mesh
Zachary Friedman, associate director of product management at Immuta, on security in the modern data stack
Data mesh breaks centralized, table-by-table permissioning at enterprise scale. In a large bank or pharma company, each business unit wants to govern its own data, but the company still needs one consistent way to decide who can see which rows and columns across Snowflake, Databricks, BigQuery, and more. That shift turns access control from an IT admin task into a distributed policy coordination problem.
-
Immuta fits this shift by acting as a shared policy engine above the data platforms. Teams define rules in business terms, then the software translates them into each warehouse's native controls, so analysts can keep querying the same tables without separate copies of the data.
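The translation step can be pictured as a small compiler from one business-level rule into each platform's native controls. The sketch below is purely illustrative: every name (`Policy`, `to_snowflake`, `to_bigquery`) is hypothetical, and the emitted DDL is only loosely modeled on Snowflake masking/row-access policies and BigQuery views, not on Immuta's actual output.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    """A hypothetical access rule expressed once, in business terms."""
    table: str
    mask_columns: list  # columns to redact for non-privileged users
    row_filter: str     # row-level predicate, e.g. "region = 'US'"

def to_snowflake(p: Policy) -> list:
    """Emit Snowflake-flavored DDL: one masking policy per column,
    plus a row access policy (illustrative syntax only)."""
    stmts = [
        f"CREATE MASKING POLICY mask_{c} AS (v STRING) RETURNS STRING -> "
        f"CASE WHEN CURRENT_ROLE() = 'PRIVACY_OFFICER' THEN v ELSE '***' END"
        for c in p.mask_columns
    ]
    stmts.append(
        f"CREATE ROW ACCESS POLICY rap_{p.table} AS (region STRING) "
        f"RETURNS BOOLEAN -> {p.row_filter}"
    )
    return stmts

def to_bigquery(p: Policy) -> str:
    """Emit a BigQuery-flavored governed view with the same semantics."""
    masked = ", ".join(
        f"IF(SESSION_USER() IN (SELECT user FROM privacy_officers), {c}, '***') AS {c}"
        for c in p.mask_columns
    )
    return (f"CREATE VIEW {p.table}_gov AS SELECT {masked} "
            f"FROM {p.table} WHERE {p.row_filter}")

# One rule, two platform-native renderings.
policy = Policy(table="claims", mask_columns=["ssn"], row_filter="region = 'US'")
```

The point of the sketch is the shape of the problem, not the syntax: each platform expresses the same rule differently, so without a shared engine every domain team maintains N dialect-specific copies of every policy.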
-
The buying trigger gets stronger when a company runs more than one data platform. A centralized team can brute-force grants in a single warehouse, but once five platforms and many domain owners are involved, syntax differences, duplicated rules, and audit gaps become expensive and slow.
-
This differs from companies like BigID and newer DSPM players like Teleskope. BigID starts with finding and classifying sensitive data across systems, while Teleskope emphasizes automated discovery and remediation. Immuta's core job is enforcing live access rules inside analytical data platforms at query time.
As more enterprises push self-serve analytics and AI into every business unit, data ownership will keep spreading outward. The winners in data security will be the platforms that let local teams control access without forcing the company to give up a single policy language, a clean audit trail, or direct use of cloud warehouses.