Director of Innovation at large law firm on why firms adopt Harvey over Legora
Jan-Erik Asplund
Background
We spoke with the Director of Innovation at a large law firm who leads AI evaluation, piloting, and deployment across practice groups.
The conversation covers how firms actually adopt legal AI tools—from security reviews and vendor chemistry to license management, attorney resistance, and the evolving competition between platforms like Harvey, Legora, and CoCounsel.
Key points via Sacra AI:
- Large law firms are deploying Harvey over Legora primarily due to client pressure and brand recognition rather than product superiority, with clients specifically requesting Harvey by name the way they previously mandated specific e-discovery platforms, creating external adoption pressure that fast-tracks internal product evaluations. "We have clients who push back hard against using AI tools—even when we walk them through the security measures and use cases, they still want things done manually. And we have clients on the other end who come in and say, 'You will use these tools—these are what we use in-house, and we expect you to use them too'... We're seeing the same dynamic now... Harvey is currently the tool getting the most attention from our attorneys—and some of that comes from their clients. Clients are pushing us toward Harvey because it's been more broadly promoted in the news, through partnerships and sponsorship deals, and that has led clients to specifically request that we use it."
- Neither Harvey nor Legora has achieved enterprise-scale deployment at large law firms because the cost of firm-wide licenses outweighs the benefit, resulting in both tools landing as practice group solutions with 5-20 seat deployments rather than the hundreds-of-seats rollouts vendors anticipated, fundamentally limiting their platform positioning. "Some vendors will see our name and our size and immediately think we're going to buy hundreds of licenses—dollar signs in their eyes. Those vendors tend not to last long in the conversation...The challenge is that the cost of bringing them on at true enterprise scale is prohibitive—it outweighs the benefit of firm-wide deployment. But I definitely see them becoming embedded at the practice group level, with targeted licensing for specific groups rather than across the whole firm."
- Practice-specific AI drafting tools built by teams that include former lawyers from target practice areas consistently outperform general platforms like Harvey and Legora because they ask the right follow-up questions and have practice-specific context that general models lack, suggesting meaningful limits to the "do everything" platform approach. "We've had to move toward more practice-specific tools for drafting—tools built by teams that include former lawyers in that particular space who have trained the models accordingly. Those tools ask the right follow-up questions to help move the draft forward. A larger general AI that handles drafting doesn't always have the practice-specific context it needs, and it doesn't ask the right clarifying questions to deliver a quality output."
Questions
- How would you describe your role in evaluating and deploying legal AI tools, and how directly are you involved in those decisions day to day?
- When you look at the legal AI landscape right now, do you see a real shift in how firms work, or is it still mostly hype with a few pockets of genuine usage?
- Which tools are actually getting real sustained usage for you, and which ones get a lot of attention but don't end up seeing much genuine adoption?
- What does a serious evaluation actually look like on your side—who's involved, what gets tested, and what makes a tool clear the bar for a pilot or broader rollout?
- What criteria matter most once you're in that evaluation—is it output quality, workflow fit, security, pricing, ease of use, or something else that tends to make or break the decision?
- How much do client data confidentiality requirements narrow the field in practice—are there tools you rule out early because the data posture just doesn't work for a large firm?
- How long does an evaluation usually take from first meeting to an actual deployment decision, and what tends to slow it down most?
- When a vendor is pitching a large firm, what questions or signals tell you they really understand the law firm environment versus just showing a polished demo?
- What's the pattern you've seen, across these waves of legal tech, that separates tools that actually get used from the ones that get purchased and then quietly die on the shelf?
- What does attorney resistance usually look like when a new tool comes in, and what actually changes behavior enough to get people using it consistently?
- How does the billable hour model show up in that behavior change—where do you see it helping adoption versus creating real friction once attorneys are deciding whether to use the tool on live matters?
- When you need partner buy-in, what does that look like in practice—is it mostly getting one influential partner to champion it, or can associate-level pull ever be enough on its own?
- Once a tool is deployed, what does your team actually do to drive usage day to day—what does rollout and ongoing adoption management look like in practice?
- When a tool really becomes embedded in how attorneys work, what do those successful ones have in common—what are the traits that make them stick?
- Which legal AI use cases are actually getting consistent day-to-day traction for you—the ones that have become part of real work rather than just sounding good in a demo?
- How does contract review and drafting actually play out once you try to operationalize it at scale—where does it work smoothly, and where does it start to break down inside the firm?
- On AI-assisted research, what does it actually look like in practice for attorneys at a large firm, and where does it hold up versus fall short?
- Are there use cases that get a lot of vendor attention but consistently underdeliver once you try to put them into real law firm workflows?
- Have there been any use cases that surprised you the other way—not heavily marketed, but once attorneys got their hands on them, they drove real engagement?
- Which practice groups are seeing the strongest AI traction day to day, and what is it about their workflow that makes adoption easier there?
- Are there practice groups where adoption is consistently slower or more resistant, and what tends to explain that?
- How much does seniority shape adoption in those groups—are associates usually the ones driving real usage, or do you still need a partner-level champion for it to stick?
- What does it look like when one of those attorneys becomes a real champion for a tool—how does that actually spread inside a practice group?
- How do you measure whether a tool is being used in a meaningful way versus someone just logging in occasionally to say they tried it?
- What thresholds do you actually look for—how many conversations or completed projects make you say this is real adoption versus just light experimentation?
- When you compare Legora and Harvey, which one tends to get stronger attorney adoption once it's deployed, and why?
- When you put them side by side, what's the fundamental difference in what Legora and Harvey are actually built to do inside a large firm?
- If a large firm is evaluating both, what tends to push it toward one versus the other—is it practice fit, geography, integrations, partner demand, pricing, or something else?
- On actual legal work output—drafting, review, research, analysis—where do you see the biggest quality differences between Legora and Harvey?
- How do pricing and the commercial model factor into that choice in practice—especially when you're deciding how many seats to give each versus keeping one broader and one more targeted?
- Do you think the Harvey versus Legora comparison is really apples to apples, or are they going after meaningfully different problems and different buyers inside the firm?
- On the law firm versus in-house point, how different are the evaluation criteria and adoption dynamics when you talk to legal teams outside firms?
- Do your in-house counterparts think about tools like Harvey and Legora the same way you do, or are the workflows and buying criteria different enough that the same product can land very differently?
- Do client pressures materially shape your AI decisions now—are clients asking outside counsel to use these tools, or in some cases pushing back on AI being used on their matters?
- When clients push you to use a specific tool, how much does that override your normal evaluation process—do you still run the same security and workflow review, or does client demand speed certain products through?
- When you look across the vendor landscape more broadly, what separates the companies that are building something durable from the ones that are mostly riding the current wave?
- Do you think tools like Harvey and Legora are building toward real platform positions inside firms, or do they eventually get absorbed into broader enterprise AI and legal research stacks?
- What would make one of these tools genuinely sticky at a firm—the kind of product that becomes embedded in daily attorney workflow instead of getting swapped out when the next wave comes along?
- How do you see the competitive landscape evolving over the next two to three years—are Harvey and Legora pulling away, or is there still real room for other tools, including more specialized ones, to break through inside large firms?
- What do outsiders most consistently get wrong about how legal AI actually works inside a large firm—especially from the deployment and change management side rather than the product demo side?
Interview
How would you describe your role in evaluating and deploying legal AI tools, and how directly are you involved in those decisions day to day?
As Director of Innovation, my job is to evaluate new technology that's out there, especially in the AI space—evaluate it, hopefully bring it to a pilot stage, monitor the pilot, and then move it to next steps within the organization, whether that's an enterprise release or managing it as a practice solution. I have a team that manages that process.
When you look at the legal AI landscape right now, do you see a real shift in how firms work, or is it still mostly hype with a few pockets of genuine usage?
The landscape is changing in a lot of ways. Much of it comes down to how legal teams bill—flat-fee or traditional hourly—and how we adapt as we bring in tools that speed up the work. Do we transition that work to subscription models or flat fees and hope the AI tool is the secret sauce to turning a profit, while also accounting for the cost of those tools?
Adoption at our firm has been great. We've brought on a handful of different products, and we were among the first to come to market with our own internal AI tool, connected to our internal systems and some external ones. As we go down the path of MCP (Model Context Protocol), we're trying to get more and more teams looking into it so that we can connect our tools and use them all in one centralized location.
Which tools are actually getting real sustained usage for you, and which ones get a lot of attention but don't end up seeing much genuine adoption?
We've been very lucky so far in hitting home runs on a handful of different practice solution AI tools, and we've been smart about our licensing—we didn't over-license. What we've been able to do is focus on making sure those licenses are constantly in use. We have agreements with all of our vendors that allow us to hot-swap licenses, so we're always ensuring they're in current use. If an attorney leaves, we make sure the next person is up. If an attorney has finished a project and is no longer going to use a particular tool, we take it back and hand it off to another attorney. We've been really focused on making sure that while we're paying high premiums for some of these strong tools, they're being actively used at all times—never sitting on the shelf.
At the same time, being an early mover with our own homegrown tool has given us something that's readily available for anyone to use at any given moment. They can bring up our platform, chat with it, do drafting, communicate with our internal systems, find firm information—and that has allowed us to maintain a product at a lower price point than subscribing to some of the bigger AI tools on the market, which can be a heavy cost burden if they're not being used by everybody.
What does a serious evaluation actually look like on your side—who's involved, what gets tested, and what makes a tool clear the bar for a pilot or broader rollout?
It depends on what the tool is. If it's a broad platform—something like a Harvey or Legora—many different teams are looking at it: practice department leaders, our e-discovery team, our innovation team on the AI side, our IT team, and our knowledge management team making sure it can handle their type of work. For smaller, practice-specific solutions, it's the practice department, the attorneys, and the innovation team evaluating together.
We start with a small pilot, survey users on the value, and if there are issues, we bring them to the vendor to see if they can be fixed. If not, we move on. We also run a full security review on any tool. We tend not to work with a lot of startup vendors—we mostly use vendors who have established themselves in the market. And we pay attention to who's backing these solutions, doing a bit of digging to make sure they're properly funded.
What criteria matter most once you're in that evaluation—is it output quality, workflow fit, security, pricing, ease of use, or something else that tends to make or break the decision?
It's a lot. From my perspective, it starts with making sure the vendor is going to help us train users to the best of their ability, with ongoing sessions for our users in each practice. If an attorney or paralegal needs training at any given moment, they should be able to schedule one-on-one sessions. We also want group trainings available, and we want any updates to the tools shared with us as soon as possible. We focus heavily on training, word-of-mouth, and making sure attorneys are talking about the tool—we work on case studies with all of our vendors to get those stories out to the team.
Then there's cost. We buy a low number of licenses upfront with ramp-up costs built in, plan carefully, and try to negotiate multi-year contract terms in advance in case the tools take off. We're also always in communication with vendors about what new features are coming, since those aren't always packaged into the base tool—they sometimes come as module releases—and we want to make sure we have the best pricing for those.
How much do client data confidentiality requirements narrow the field in practice—are there tools you rule out early because the data posture just doesn't work for a large firm?
Absolutely. If the security isn't top-notch, if we're not living in our own private environment, if there's any training on our data, if data can leave our walls in any way—those are all disqualifying. That's why we can't use a lot of the fly-by-night solutions. We do not allow any freemium AI tools to be used with client data and block all of them internally. Tools like Claude we may use for AI training purposes, but not for legal work.
How long does an evaluation usually take from first meeting to an actual deployment decision, and what tends to slow it down most?
In most cases, we're looking at six months.
The thing that slows it down most is security—it's the most time-consuming part of the process. Our security team isn't the largest, and they have a lot on their plate beyond evaluations, constantly making sure our environment is safe. After that, procurement can also take time depending on how much groundwork was laid before bringing them in, plus all the legal documents and contracts that need to be redlined. That back-and-forth can take several weeks on its own. So unless there's an immediate push or urgent need, we're generally looking at about a six-month process from the first demo where we decide to move forward through to onboarding the pilot.
When a vendor is pitching a large firm, what questions or signals tell you they really understand the law firm environment versus just showing a polished demo?
It comes down to saying the right things and knowing who they're working with. Some vendors will see our name and our size and immediately think we're going to buy hundreds of licenses—dollar signs in their eyes. Those vendors tend not to last long in the conversation. The vendors that resonate are the ones who come in explaining how they're going to onboard our users, how they're going to hold our hands through the process, and who understand this isn't an enterprise-scale purchase out of the gate—we start small and work our way up.
Beyond that, it's chemistry. If there's no chemistry with the vendor, those tools have never really taken off for us. Whether it's the salesperson or the founder, you have to feel the right connection, because not only are they selling you the tool, they're going to help you sell it internally. You want to make sure you can bounce things cleanly back and forth. It's almost like a dating process—the chemistry has to be right.
What's the pattern you've seen, across these waves of legal tech, that separates tools that actually get used from the ones that get purchased and then quietly die on the shelf?
We're very focused on making sure we don't buy tools that die on the shelf, and that's largely a function of how we structure our licensing. We buy small—never large amounts—and in most cases we don't sign anything more than a one-year agreement upfront. If we can get a six-month deal, even better.
The idea is that we want to know we have users ready to use the tool from day one, and if those initial users don't follow through, we have a backup set ready to go. We did experience that with one tool earlier this year—about half a dozen junior associates who said they couldn't live without it. We went through the full security review, ran the pilot, and then once we were past the trial and ready to onboard, they weren't using it. So we went out aggressively and demoed the tool to different practice groups, both with the vendor and internally. We ended up nearly quadrupling the number of licenses after a year. You always have to make sure you're covering all your bases.
What does attorney resistance usually look like when a new tool comes in, and what actually changes behavior enough to get people using it consistently?
It's mostly about time. We try to identify who is best suited to see these products—we may introduce a tool to a partner, but we want them to identify the people on their team who are actually going to use it, so the partner isn't sitting on a license that goes unused. The associates who do the day-to-day work live and breathe with the tool, while the partner reviews the end product. But getting the partner bought in first is critical—once they see the value and make it part of the workflow, they bring their associates along.
Then it's about aggressively making sure those teams are actually using it. If they're not, we ask why. Is it something that could be fixed on the vendor side? Is it a workflow issue on our end? And sometimes it's a client approval issue—the client hasn't signed off on the use of AI tools on their matters. In that case, we need to communicate with that client about the value of the tool and why they should allow us to use it.
How does the billable hour model show up in that behavior change—where do you see it helping adoption versus creating real friction once attorneys are deciding whether to use the tool on live matters?
Once we get client approval and clear the security review, we want to make sure we're using the tool on live deals. Sometimes attorneys will take a previous matter first and run it through just to see the outcomes, and once they're pleasantly surprised by what comes back, they tend to jump right to using it on live work.
When you need partner buy-in, what does that look like in practice—is it mostly getting one influential partner to champion it, or can associate-level pull ever be enough on its own?
In a lot of cases, tools come from the partner side first—either they've seen something at an event and brought it to us, or we've brought it to them. It may sit in their awareness for a while, and then they come back saying they want to look at it or start using it. Partners also talk to their counterparts at other firms, so we hear things like, "This firm is using such-and-such tool—can you look into it?" Whether it's that specific tool or something comparable we've already seen, we do the due diligence and bring them the options. The goal is to get it into the hands of their legal team—we're sponsoring it on the partner's behalf.
We have had associate-driven adoption as well. But even then, you still want to find an MVP partner who sees the tool and says, "Yes, all my associates should be using this." That champion at the partner level makes everything stick faster.
Once a tool is deployed, what does your team actually do to drive usage day to day—what does rollout and ongoing adoption management look like in practice?
We make sure we have access to usage reports, whether that's a monthly report or daily visibility into the data. We run feedback surveys to find out who's using the tool, who isn't, and why. Is it something that can be fixed on the vendor side? Is it a workflow issue on our end? And we're constantly communicating with practice groups to let them know a particular tool exists and to start building a waitlist of attorneys who want access. That way, when we see a license going underused, we can flip it to someone who's ready. We're always focused on not over-saturating ourselves with too many licenses while keeping utilization as high as possible.
When a tool really becomes embedded in how attorneys work, what do those successful ones have in common—what are the traits that make them stick?
The signal we love most is hearing an attorney say they can't live without it. We've been fortunate over the years to roll out tools—both AI and traditional practice solutions—where attorneys have genuinely lived in the product and become vocal advocates, sharing it with their teams and talking about it in meetings. Sometimes we'll hear that an attorney told a client they're using a particular tool, and the client's own legal team has looked into adopting it based on what we've shown them. That kind of organic word-of-mouth is the ultimate sign that a tool has truly stuck.
Which legal AI use cases are actually getting consistent day-to-day traction for you—the ones that have become part of real work rather than just sounding good in a demo?
We constantly remind users that there is no magic button with any tool. That said, the biggest area of traction is practice-specific drafting—being able to get that first draft completed and remove what I call the white-space syndrome. I suffer from it myself. You can stare at a blank page and have no idea where to start. With AI, you throw in a few words and at least get a starter sentence or paragraph that gets the ball rolling, and then you're off. We see that consistently across different practice areas—sometimes using a practice-specific tool to start that draft, and sometimes just using a general AI tool where you upload some precedents, instruct it to draft against them, and get back the first page or two.
How does contract review and drafting actually play out once you try to operationalize it at scale—where does it work smoothly, and where does it start to break down inside the firm?
It's a tricky question. Contract review can be done in different ways. There are traditional AI tools like Kira, where you upload a document and it runs an evaluation, pulling out different clauses based on what you're trying to do. Then there are more GenAI-based contract review tools that will review, summarize, and extract—something like Harvey, which can put the results into a tabular format and extract information based on your queries. Both approaches have been successful. Some users prefer to write their own prompts to pull exactly the clauses they want; others prefer a traditional AI tool that does it automatically based on the vendor's continuously updated knowledge base. Both have their place.
On AI-assisted research, what does it actually look like in practice for attorneys at a large firm, and where does it hold up versus fall short?
Traditionally, legal research has always been done through Boolean search—some attorneys are very good at it and others are not. Those who aren't have had to rely on librarian teams to find what they're looking for. With AI attached to research tools, you can speak naturally to find what you need, which makes it much more accessible. The speed is also dramatically faster than Boolean search. Once you've narrowed something down, you can keep narrowing further, doing more targeted searching and summarization. And some tools now allow you to take that legal research, go straight to drafting, or use the research to populate citations and verify that you have the correct references.
Are there use cases that get a lot of vendor attention but consistently underdeliver once you try to put them into real law firm workflows?
We've seen some vendors claim their tool is the best at all drafting across all use cases, and that hasn't held up. We've had to move toward more practice-specific tools for drafting—tools built by teams that include former lawyers in that particular space who have trained the models accordingly. Those tools ask the right follow-up questions to help move the draft forward. A larger general AI that handles drafting doesn't always have the practice-specific context it needs, and it doesn't ask the right clarifying questions to deliver a quality output.
Have there been any use cases that surprised you the other way—not heavily marketed, but once attorneys got their hands on them, they drove real engagement?
A handful of tools that were originally designed for lawyers have turned out to be really successful for business development. We've been able to take prospective work we might pursue for a particular client, use AI to get a head start on it, and walk into a pitch already six steps into the process. That's been a pleasant surprise—using these tools to come to pitches prepared in a way we weren't able to before.
Which practice groups are seeing the strongest AI traction day to day, and what is it about their workflow that makes adoption easier there?
The practice group where it makes the most sense has been transactional. Transactional teams have been deprived of legal technology for years, and AI has really changed that landscape for them. They were previously very old school in how they did work, and AI has allowed them to move much faster. Litigation teams, on the other hand, have been around legal tech for a long time—they're not afraid of AI tools; they're doing what they've always done, just with better results now that AI is in the mix.
Are there practice groups where adoption is consistently slower or more resistant, and what tends to explain that?
We've seen that in practice groups like immigration, where a lot of the work is basic but time-consuming, it's hard to get people out of their established systems. They've got their process down cold, so it's about convincing them to change for the better because we can create more efficient workflows for them. We've seen the same thing with our multifamily housing group—but that has changed over time. They are now much more open to using AI, and as they've seen other parts of the team succeed, the remaining holdouts have come around.
How much does seniority shape adoption in those groups—are associates usually the ones driving real usage, or do you still need a partner-level champion for it to stick?
It's a combination of both. Getting partner buy-in so that associates use the tool is really important. But we've also seen associates save time or work faster and smarter, and then start sharing the tool with peers. Associates are competitive—if one person has an edge, everyone wants it. So we see a lot of organic spread at the associate level as well.
What does it look like when one of those attorneys becomes a real champion for a tool—how does that actually spread inside a practice group?
It's getting them to spearhead conversations and be front and center during presentations—whether that's in front of the policy committee, at a partner retreat, through an internal podcast, or in internal case studies we might share. The goal is to make sure they're sharing their success with the tool across the firm.
How do you measure whether a tool is being used in a meaningful way versus someone just logging in occasionally to say they tried it?
It's about looking at the numbers. With AI tools, it's not just whether they've logged in—it's what they've done with the tool. If we can see how many conversations they've had, how many completed projects they've worked through, that's the data we need to know we're backing the right tool.
What thresholds do you actually look for—how many conversations or completed projects make you say this is real adoption versus just light experimentation?
We compare users against each other. The heaviest users set the benchmark, and then we look at everyone else relative to that. If the top user has had a hundred conversations, we want to see others in the fifty-to-seventy-five range. If someone is below that, we go to them directly and ask why—is it something with the tool, dissatisfaction, or workload? We also work with vendors to compare our numbers against other law firms using the same tools. If we're not keeping pace with our peers, we know we're not doing it right.
When you compare Legora and Harvey, which one tends to get stronger attorney adoption once it's deployed, and why?
Harvey is currently the tool getting the most attention from our attorneys—and some of that comes from their clients. Clients are pushing us toward Harvey because it's been more broadly promoted in the news, through partnerships and sponsorship deals, and that has led clients to specifically request that we use it. If it's a matter of keeping a client or keeping an attorney who needs the tool, we're seeing more of that pressure on the Harvey side than the Legora side.
That said, we maintain relationships with both vendors. We have licenses currently for Harvey and negotiated deals in place with Legora. We see a space for both in our ecosystem—it's not one or the other, it's just a matter of how much of each.
When you put them side by side, what's the fundamental difference in what Legora and Harvey are actually built to do inside a large firm?
We've seen that running parallel workflows with multiple agents is stronger in Legora than in Harvey. Legora, being a European company, is also a much stronger platform for international jurisdictions, while Harvey is more US-based. The challenges with Legora are that it's not as well integrated with legal technology tools like iManage, and it still doesn't carry the same name recognition in the US that Harvey does.
If a large firm is evaluating both, what tends to push it toward one versus the other—is it practice fit, geography, integrations, partner demand, pricing, or something else?
It's all of those things. If you've got a significant European presence, you're probably going to lean toward Legora. If you're primarily US-based, Harvey's stronger name recognition matters. Practice-specific features differ between them as well. But I think there's a world where both tools fit within the same law firm ecosystem—I don't see it as one or the other anymore, the way I might have a year ago. It's really about how many licenses of each and managing that mix. And you also have to remember the third player in this space: Thomson Reuters CoCounsel, which is developing against both of them and has legal research capabilities that Harvey and Legora don't own themselves.
On actual legal work output—drafting, review, research, analysis—where do you see the biggest quality differences between Legora and Harvey?
When it comes to drafting, so much depends on what you put in—if you don't provide the right data, you won't get back a quality draft. That said, Legora has better knowledge vaults than Harvey, allowing you to upload more documents so it can draft against them and produce better output. Harvey is still developing that capability.
How do pricing and the commercial model factor into that choice in practice—especially when you're deciding how many seats to give each versus keeping one broader and one more targeted?
They've become very competitive with each other, and both are now quite flexible on pricing and licensing—I wouldn't have said that a year ago. You can get as few as five licenses per product, and where you used to only be able to get a one-year term, I'm hearing that six-month terms are being offered to large law firms just to get their name through the door and help demonstrate traction to investors. Both tools have a lot of options available now.
Do you think the Harvey versus Legora comparison is really apples to apples, or are they going after meaningfully different problems and different buyers inside the firm?
It will always be apples to apples. One might pull ahead on a specific capability for a period, but by the next day, they'll both have the same thing—they're going neck and neck and trying to stay as comparable as possible. The only real differentiator that persists is that Legora is still very much a more international tool, while Harvey remains more US-focused.
On the law firm versus in-house point, how different are the evaluation criteria and adoption dynamics when you talk to legal teams outside firms?
In-house teams are very budget-conscious and purchasing on a much smaller scale. Their security requirements are also not as stringent as a law firm's, so they can be more nimble. They're looking at these tools primarily as a way to reduce their reliance on outside counsel. We, on the other hand, are looking to save time at the beginning stages of work—the AI handles the heavy lifting, and the human provides the polished final touch.
Do your in-house counterparts think about tools like Harvey and Legora the same way you do, or are the workflows and buying criteria different enough that the same product can land very differently?
As I said, in-house teams are looking at these tools as a potential replacement for what they currently send to law firms. But in many ways, we're still looking at the tools for the same underlying purposes—it's the scale, the budget constraints, and the security posture that differ most.
Do client pressures materially shape your AI decisions now—are clients asking outside counsel to use these tools, or in some cases pushing back on AI being used on their matters?
Absolutely, on both sides. We have clients who push back hard against using AI tools—even when we walk them through the security measures and use cases, they still want things done manually. And we have clients on the other end who come in and say, "You will use these tools—these are what we use in-house, and we expect you to use them too." It reminds me of the early e-discovery days, when certain clients would tell us we had to use Relativity or specific platforms. As those clients got comfortable with particular tools, they wanted us to be comfortable with them too. We're seeing the same dynamic now.
When clients push you to use a specific tool, how much does that override your normal evaluation process—do you still run the same security and workflow review, or does client demand speed certain products through?
It does speed up the process. Client pressure moves a tool to the front of the review queue. But the same process stays in place—the same security review, the same procurement review, all of it. It just jumps the line.
When you look across the vendor landscape more broadly, what separates the companies that are building something durable from the ones that are mostly riding the current wave?
The durable ones have a real team in place. The wave-riders are fly-by-night operations that may not be there tomorrow. The durable tools are where we focus our attention, while we keep a watchful eye on the others—hoping they find the right backing, or get acquired by a larger vendor. There was a tool I was excited about recently that was still early-stage. I actually tried to connect them with a more established vendor I thought would be a good fit—it didn't quite work out—but eventually that tool found funding, has since rebranded, and I'm looking forward to seeing where they go.
Do you think tools like Harvey and Legora are building toward real platform positions inside firms, or do they eventually get absorbed into broader enterprise AI and legal research stacks?
At this point, their valuations are too high for that. They're in it for the long run, and if anything, they'll be the ones looking to acquire legal research capabilities—not get acquired themselves.
What would make one of these tools genuinely sticky at a firm—the kind of product that becomes embedded in daily attorney workflow instead of getting swapped out when the next wave comes along?
I don't think either of these tools is going to get swept out. The challenge is that the cost of bringing them on at true enterprise scale is prohibitive—it outweighs the benefit of firm-wide deployment. But I definitely see them becoming embedded at the practice group level, with targeted licensing for specific groups rather than across the whole firm.
How do you see the competitive landscape evolving over the next two to three years—are Harvey and Legora pulling away, or is there still real room for other tools, including more specialized ones, to break through inside large firms?
The more specialized tools on the market will absolutely continue to grow. They're more nimble, they're more purpose-built, and they're deepening their expertise in a particular practice area every day—while Harvey and Legora are trying to be a little bit of everything. That generalist approach is perfect for what they're doing, but it leaves room for specialists.
I think CoCounsel, and Vincent AI as well, will eventually reach the same tier as Legora and Harvey—it's just a matter of time. CoCounsel has a new release coming soon that could potentially put it at the same level, and it comes with Thomson Reuters' name recognition plus their own legal research, which is something Legora and Harvey don't natively own. Then there's Vincent AI, now owned by Clio, which will focus more on solo practitioners and small firms and will eventually work its way up to larger ones.
What do outsiders most consistently get wrong about how legal AI actually works inside a large firm—especially from the deployment and change management side rather than the product demo side?
A lot of vendors think it happens overnight. They show you a demo and expect to be live in your firm the next day. When you tell them it's going to be six to eight months before a tool is even live to an enterprise audience, that usually comes as a shock.
Disclaimers
This transcript is for information purposes only and does not constitute advice of any type or a trade recommendation, and should not form the basis of any investment decision. Sacra accepts no liability for the transcript or for any errors, omissions, or inaccuracies in respect of it. The views of the experts expressed in the transcript are those of the experts alone; they are not endorsed by, nor do they represent the opinion of, Sacra. Sacra reserves all copyright and intellectual property rights in the transcript. Any modification, copying, displaying, distributing, transmitting, publishing, licensing, creating derivative works from, or selling of any transcript is strictly prohibited.

