Postgres and ClickHouse Duplication Problem
Product manager at Firebolt on scaling challenges and ACID compliance in OLAP databases
This is really a wedge against the modern data stack, not just against ClickHouse. For a small software company, running Postgres for app writes and ClickHouse for analytics means copying the same customer, order, or event data into two systems, then keeping them in sync through pipelines that can break or lag. Once analytics matter but are not mission-critical enough to justify a dedicated team, the operational burden can outweigh the speed gains of a separate OLAP engine.
-
The practical failure mode is not just cost, it is workflow friction. Teams have to move data out of Postgres, model it again for ClickHouse, and then explain why a dashboard number does not match the app because one system updated before the other.
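The mismatch described above is easy to reproduce in miniature. The sketch below is a toy model, not any real Postgres or ClickHouse API: the class names (`AppStore`, `AnalyticsStore`) and the full-reload `sync_from` step are all illustrative. The app database applies writes immediately, while the analytics copy only sees data when the batch pipeline runs, so any read in between gives two different answers for the same metric.

```python
class AppStore:
    """Stands in for Postgres: every write is visible immediately."""
    def __init__(self):
        self.orders = []

    def insert_order(self, amount):
        self.orders.append(amount)

    def revenue(self):
        return sum(self.orders)


class AnalyticsStore:
    """Stands in for a ClickHouse copy fed by a pipeline: it only sees
    data that has been synced, so it lags the app database."""
    def __init__(self):
        self.orders = []

    def sync_from(self, app):
        # Full reload for simplicity; real pipelines do incremental loads.
        self.orders = list(app.orders)

    def revenue(self):
        return sum(self.orders)


app = AppStore()
analytics = AnalyticsStore()

app.insert_order(100)
analytics.sync_from(app)   # pipeline run

app.insert_order(50)       # this write lands after the last sync

print(app.revenue())        # 150 -- what the app shows
print(analytics.revenue())  # 100 -- what the dashboard shows
```

The gap closes on the next sync, but until then someone has to explain why the dashboard disagrees with the app, which is exactly the support burden a single-engine pitch is trying to remove.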
-
ClickHouse is strongest on large, append-heavy workloads like logs, metrics, and user events, where data arrives fast and mostly does not get edited later. That is why it wins in observability and embedded analytics, but mixed workloads with frequent updates expose the gap between analytical speed and transactional correctness.
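One way to see why updates are awkward for append-optimized engines: a common pattern (ClickHouse's ReplacingMergeTree works loosely this way) is to record an "update" as a new row version and resolve duplicates only at read or merge time. The toy store below is a hedged sketch of that pattern with illustrative names, not ClickHouse's actual implementation; it shows that writes stay cheap appends, but any query that forgets to deduplicate sees stale versions alongside current ones.

```python
class AppendOnlyStore:
    """Toy append-only table: rows are never edited in place."""
    def __init__(self):
        self.rows = []      # (key, version, value) tuples, append-only
        self.version = 0

    def upsert(self, key, value):
        # An "update" is just another appended row with a newer version.
        self.version += 1
        self.rows.append((key, self.version, value))

    def read_latest(self, key):
        # Read-time dedupe: scan every version, keep the newest.
        matches = [(v, val) for k, v, val in self.rows if k == key]
        return max(matches)[1] if matches else None

    def read_raw(self, key):
        # What a naive query sees before dedupe: all versions at once.
        return [val for k, _, val in self.rows if k == key]


store = AppendOnlyStore()
store.upsert("order-1", "pending")
store.upsert("order-1", "shipped")

print(store.read_latest("order-1"))  # shipped
print(store.read_raw("order-1"))     # ['pending', 'shipped'] -- old version still present
```

Append-then-dedupe keeps ingest fast, but it trades away the immediate, in-place consistency an OLTP database gives you, which is the gap update-heavy workloads keep hitting.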
-
Firebolt is trying to win this middle ground by selling one engine for both kinds of work. That pitch matters most for startups and lean teams, while larger companies still often accept a split architecture because the scale of analytics justifies a specialized system and the people to operate it.
The next battleground is reducing the tax of moving data between systems. Products that can keep analytical speed while handling updates, joins, and always-current answers will keep pulling workloads away from the classic Postgres-plus-OLAP stack, especially as more companies want user-facing analytics without hiring a database operations team.