LinearB Metered AI Code Review
$16M ARR Amplitude for AI code quality
LinearB’s pricing matters because it turns engineering analytics from a fixed seat sale into a rising spend curve tied to how deeply customers embed the product in their workflows. The dashboard fee gets the product installed across a team, then credits expand revenue when customers let LinearB act inside pull request workflows: routing reviewers, generating PR descriptions, reviewing code, or auto-approving low-risk changes. That makes the product look more like a lightweight developer system of action than a passive reporting tool.
- The base seat price is intentionally easy to adopt. LinearB charges per contributor for visibility, then meters automation actions through credits, so a team can start with DORA dashboards and spend more only when it turns on workflow bots and AI features inside GitHub, GitLab, Slack, or Teams.
- This is a clear product and monetization split versus Jellyfish. Jellyfish sells larger annual subscriptions, around $95,000 on average, to engineering leaders and finance teams for planning, capitalization, and executive reporting. LinearB pushes further into day-to-day pull request operations, where every automated action can become a billable event.
- It also shows how incumbents are repositioning for the AI coding era. Weave still prices mostly per engineer, around $25 to $40 per seat, for AI usage analytics. LinearB is moving one layer deeper, from measuring AI-assisted development to charging when AI actually changes the code review and merge workflow.
The next step is a fuller transition from dashboard vendor to metered workflow platform. As more engineering teams adopt AI coding assistants, the biggest revenue upside will come from owning the approvals, reviews, and policy checks wrapped around AI generated code, because that is where usage compounds faster than headcount.
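The seat-plus-credits structure described above can be sketched as simple arithmetic: a visibility fee that scales with headcount, plus a metered component that scales with automated actions. The function and all prices below are hypothetical illustrations, not LinearB's actual rates.

```python
def monthly_bill(contributors: int, credits_used: int,
                 seat_price: float = 30.0,     # assumed per-contributor fee, illustrative only
                 credit_price: float = 0.05    # assumed cost per automation credit, illustrative only
                 ) -> float:
    """Base visibility spend grows with headcount; automation spend
    grows with workflow actions (reviewer routing, PR descriptions,
    AI review, auto-approval), each of which consumes credits."""
    return contributors * seat_price + credits_used * credit_price

# A 50-engineer team: dashboards only vs. dashboards plus
# 20,000 credited automation actions in a month.
print(monthly_bill(50, 0))        # 1500.0
print(monthly_bill(50, 20000))    # 2500.0
```

The point of the shape, not the numbers: the second term is the one that compounds with workflow depth rather than headcount, which is why the memo frames credits as the expansion lever.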