Weave Estimates AI-Generated Code Share
Weave
This capability turns AI coding from a black box into something an engineering leader can audit, benchmark, and manage like any other software spend. In practice, Weave is not just counting pull requests; it is trying to map which code changes likely came from Copilot or chat-based tools by combining repository diffs with signals from the editor, then tying that estimate to review quality, merge speed, and team-level output. That makes AI adoption measurable at the workflow level, not just at the license level.
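Weave's actual model is not public, but a minimal sketch of this kind of estimator, assuming a hypothetical feed of editor acceptance events (file, line range, timestamp) and parsed commit diffs, looks roughly like this; the data classes, window heuristic, and thresholds are illustrative, not Weave's method:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical editor-side signal: a block of AI-suggested code the developer accepted.
@dataclass
class AcceptanceEvent:
    file: str
    start_line: int
    end_line: int
    accepted_at: datetime

# One added-lines hunk from a repository diff (file, new-file line range, commit time).
@dataclass
class DiffHunk:
    file: str
    start_line: int
    end_line: int
    committed_at: datetime

def estimate_ai_share(hunks: list[DiffHunk],
                      events: list[AcceptanceEvent],
                      window: timedelta = timedelta(hours=24)) -> float:
    """Fraction of added lines that overlap an accepted AI suggestion
    in the same file within a time window. A crude proxy, not a real product's model."""
    ai_lines = 0
    total_lines = 0
    for hunk in hunks:
        hunk_lines = set(range(hunk.start_line, hunk.end_line + 1))
        total_lines += len(hunk_lines)
        matched: set[int] = set()
        for ev in events:
            if ev.file != hunk.file:
                continue
            if abs(hunk.committed_at - ev.accepted_at) > window:
                continue
            matched |= hunk_lines & set(range(ev.start_line, ev.end_line + 1))
        ai_lines += len(matched)
    return ai_lines / total_lines if total_lines else 0.0

if __name__ == "__main__":
    now = datetime(2025, 1, 15, 12, 0)
    hunks = [DiffHunk("app/api.py", 10, 49, now)]
    events = [AcceptanceEvent("app/api.py", 20, 39, now - timedelta(hours=2))]
    print(f"estimated AI-generated share: {estimate_ai_share(hunks, events):.0%}")
```

The interesting product work is in the joins this sketch hand-waves: resolving moved and reformatted lines, and attaching the resulting share to review and merge outcomes per team.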
-
Most incumbent engineering analytics tools were built around DORA metrics, cycle time, and staffing allocation. Weave is part of a newer AI-native layer that tracks things like lines of code written by AI and whether those changes merge faster or create more defects, which gives it a different budget hook inside engineering orgs.
-
The hard part is attribution. GitHub says its Copilot usage metrics come from IDE telemetry and exclude some surfaces like GitHub.com chat, Copilot code review, and Copilot CLI. Cursor exposes per-commit AI code tracking and accepted AI changes through an admin API. That means no single source of truth exists across tools, so a cross-workflow estimator is valuable.
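To make the stitching problem concrete, here is a rough sketch of pulling org-level completion data from GitHub's Copilot metrics endpoint (GET /orgs/{org}/copilot/metrics); the org name is hypothetical, the response-field names follow GitHub's documented schema but should be treated as assumptions that may change, and the Cursor side is left as a comment since its admin API has a different shape:

```python
import os
import requests

GITHUB_ORG = "acme-eng"  # hypothetical org name

def copilot_accepted_lines(org: str, token: str) -> int:
    """Sum accepted completion lines from GitHub's Copilot metrics API.
    Covers IDE completions only; GitHub.com chat, Copilot code review,
    and Copilot CLI are not reflected here."""
    resp = requests.get(
        f"https://api.github.com/orgs/{org}/copilot/metrics",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    accepted = 0
    for day in resp.json():  # one entry per day
        completions = day.get("copilot_ide_code_completions") or {}
        for editor in completions.get("editors", []):
            for model in editor.get("models", []):
                for lang in model.get("languages", []):
                    accepted += lang.get("total_code_lines_accepted", 0)
    return accepted

if __name__ == "__main__":
    token = os.environ["GITHUB_TOKEN"]
    print("Copilot accepted lines (IDE completions):",
          copilot_accepted_lines(GITHUB_ORG, token))
    # Cursor's admin API reports accepted AI lines per commit in its own schema;
    # reconciling the two feeds is exactly where a cross-workflow estimator earns its keep.
```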
-
This also creates a data asset. If Weave can see, across customers, which teams get high AI-generated code share without hurting review quality or post-merge stability, it can move from dashboarding into playbooks, policy setting, and proof that AI seats are paying for themselves.
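As a toy illustration of that benchmarking step, the snippet below groups hypothetical per-team rollups and flags teams whose high AI share does not come with a worse post-merge revert rate; the field names, thresholds, and numbers are invented for illustration:

```python
from statistics import median

# Hypothetical per-team rollups a platform like this might compute.
teams = [
    {"team": "payments", "ai_share": 0.42, "revert_rate": 0.015},
    {"team": "search",   "ai_share": 0.11, "revert_rate": 0.012},
    {"team": "infra",    "ai_share": 0.55, "revert_rate": 0.040},
]

baseline = median(t["revert_rate"] for t in teams)

# Teams with high AI share AND stability no worse than the median become the playbook.
exemplars = [
    t["team"] for t in teams
    if t["ai_share"] > 0.40 and t["revert_rate"] <= baseline
]
print("teams to copy:", exemplars)  # -> ['payments']
```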
The next step is for engineering analytics to become AI operations software. As GitHub, Cursor, and coding agents ship more native telemetry and review features, standalone products will need to win by combining fragmented signals across the toolchain and turning them into concrete recommendations about where AI helps, where it hurts, and which teams are using it best.