Implementation Burden Blocks Legal AI
Healthcare company associate GC on where legal AI products break down
The key bottleneck for lean in-house legal teams is not the model; it is the implementation burden of turning a demo into a working daily workflow. In this interview, the associate GC describes Luminance as requiring too much setup and training to build useful playbooks from the team’s own contracts. Unreliable output still has to be checked by senior legal staff, which cancels out much of the promised leverage for a small team.
This shows why broad legal AI products often land first with large law firms and bigger legal departments. The same interview notes that smaller in-house teams juggle many different jobs, need tools that fit existing routines immediately, and reject products that only work on the happy path.
Luminance’s product history helps explain the gap. It is built around document review, analysis, and contract workflows, and has grown quickly, to an estimated $30M in revenue in 2024. But that strength in structured review does not remove the customer-side work needed to configure playbooks, intake, routing, and exception handling inside a real contract lifecycle management (CLM) process.
The contrast with Harvey is instructive: Harvey is perceived as closer to a general legal copilot, while Luminance is closer to workflow software. Harvey has scaled much faster, to an estimated $195M in revenue in 2025, but this interview argues that for lean in-house buyers, neither category wins unless it saves time on live matters without adding another layer of admin overhead.
Going forward, legal AI vendors will win less on raw model quality and more on out-of-the-box deployment. The products that matter for in-house teams will be the ones that can ingest prior contracts, flag deviations accurately, guide non-legal users through intake and approvals, and recover cleanly when a workflow goes off script.