Weave's AI Measurement Wedge
This market is likely to compress into a few broader platforms, because basic DORA dashboards are becoming cheap and common while AI measurement is still scarce. Weave is using that gap to sell something more specific than generic engineering analytics: who is using AI coding tools, how much code those tools generate, and whether that changes review speed, output, and defect patterns. That specificity makes it easier to win budget before broader consolidation happens.
-
The crowded part of the market is the workflow metrics layer. Swarmia, LinearB, and similar tools already track deploy frequency, lead time, and review bottlenecks, and GitLab now bundles DORA and value stream dashboards into its own platform. That pushes standalone vendors toward lower prices or narrower positioning.
-
Weave is trying to own the new measurement layer created by AI coding. Its product analyzes pull requests, code diffs, and editor metadata to estimate the share of AI-generated code, classify work types, and connect the results to manager dashboards. That is more concrete than simple Copilot adoption counts and gives finance and engineering leaders a clearer ROI story.
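To make the measurement idea concrete, here is a minimal sketch of what an "AI-generated code share" metric could look like. This is not Weave's actual method; the `DiffHunk` record and its `ai_assisted` flag are hypothetical stand-ins for the kind of signal editor telemetry might attach to a diff, and the aggregation is a simple per-author ratio of flagged lines to total lines added.

```python
from dataclasses import dataclass

@dataclass
class DiffHunk:
    """One block of added lines in a pull request diff (hypothetical schema)."""
    author: str
    lines_added: int
    ai_assisted: bool  # assumed flag derived from editor telemetry

def ai_code_share(hunks: list[DiffHunk]) -> dict[str, float]:
    """Fraction of each author's added lines that were flagged AI-assisted."""
    totals: dict[str, int] = {}
    ai_lines: dict[str, int] = {}
    for h in hunks:
        totals[h.author] = totals.get(h.author, 0) + h.lines_added
        if h.ai_assisted:
            ai_lines[h.author] = ai_lines.get(h.author, 0) + h.lines_added
    return {a: ai_lines.get(a, 0) / t for a, t in totals.items() if t > 0}

hunks = [
    DiffHunk("alice", 120, True),
    DiffHunk("alice", 80, False),
    DiffHunk("bob", 50, True),
]
print(ai_code_share(hunks))  # → {'alice': 0.6, 'bob': 1.0}
```

A real system would have to infer the `ai_assisted` flag rather than receive it, then join these shares against review latency and defect data to build the ROI story described above.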
-
Consolidation usually favors tools that can expand from one dashboard into a system of record. Weave starts as analytics, but the adjacent path is coaching, workflow recommendations, governance, and benchmarking. That mirrors how broader vendors like Jellyfish and LinearB moved from metrics into planning, automation, and executive reporting.
The next phase is a race to become the control plane for AI-enabled engineering. Vendors that only show historical DORA numbers will get absorbed, bundled, or squeezed. Vendors that can measure AI use at the code level, prove business impact, and layer on actions and governance will capture the larger platform role.