Legal tech VP of cloud operations on evaluating legal AI tools
Jan-Erik Asplund
Background
We spoke with a VP of cloud operations at a large legal tech company, where he leads product and platform strategy for legal and financial software globally. The conversation covers how European enterprise legal organizations evaluate AI vendors like Legora and Harvey on accuracy, data residency, and workflow integration, and how they view the shift from AI copilot to autonomous agent.
Key points via Sacra AI:
- Harvey and Legora have genuinely distinct positioning in the legal AI market, with Harvey best-in-class for legal reasoning, drafting quality, and LLM-native innovation pace, while Legora is best-in-class for structured workflow UX, team collaboration, end-to-end contract cycles, and ease of adoption at scale across broader legal teams. "Harvey is very much an American company, and their value proposition is strongly centered on AI as a copilot for legal reasoning, drafting, and research, built on top-tier LLMs. In terms of reasoning, they are the best in class. Drafting quality is strong... Legora is a small company, very focused on workflow-centric legal AI. It's a platform that prioritizes team productivity and usability, and it has a strong UX, genuinely easy to use. The workflow structure is better than Harvey's in my view. Collaboration features are strong, and the ease of adoption across teams is a real strength."
- Legora has a real structural advantage over Harvey with European enterprise legal buyers because of its European-native architecture, better alignment with civil law systems, and much greater transparency about data residency and processing location, while Harvey's architecture is opaque enough to create real adoption barriers with enterprise IT departments that require architectural clarity before approving a vendor. "Legora, being a smaller company, is able to provide clear insight into how their architecture is designed and where the data resides. You get a clearer picture of the system's structure and better observability when running the software in production. Harvey is quite opaque by comparison; it's unclear where the data stays, what the deployment model looks like, and whether fault tolerance is properly in place. That opacity can be a significant adoption barrier, especially for enterprise IT departments that require architectural transparency before approving a vendor."
- Long-term defensibility for legal AI products depends on developing deep user and customer intimacy around habits, culture, and day-to-day working patterns over time, and the inevitable trajectory is from copilot to agentic AI that takes decisions and actions autonomously, eventually including multiple agents collaborating with each other. "The more they know the customer, their habits, culture, sentiment, and day-to-day working patterns, the harder they become to replace. They need to develop intimacy with the end user and the end customer... Both paths are possible: a product can be a smart assistant or copilot, or it can evolve into agentic AI that takes decisions and actions with increasing autonomy in day-to-day work. We can shift from copilot to self-assisted, and from generative AI to agentic AI. That is what we are going to witness more and more in this field."
Questions
- To start, how would you describe your current vantage point on legal software and AI, and how closely are you watching what's happening with products like Legora and Harvey?
- When you look at AI in legal software right now, do you frame it as a fundamental shift in what enterprise legal software needs to be, or more as a new layer being added on top of existing systems and workflows?
- Where does it become a true architectural shift rather than just an embedded feature inside existing practice or matter systems?
- When a large legal organization decides whether to use a product in real production—not just a pilot—what matters most? Things like uptime, security, data handling, integration, service quality, and overall platform trust.
- When that "quick and dirty" mindset meets a large law firm or enterprise legal team, where does it break first in practice? Is it usually security review, integration burden, output reliability, procurement, or something else?
- What are the non-negotiables that still have to be true before a legal AI product can be trusted with sensitive legal work in Europe?
- How do you evaluate accuracy in practice for legal AI? What do you actually look for around hallucination rates, citation quality, consistency, and whether the system can be trusted across real legal workflows?
- How important is integration with the systems lawyers already use—document management, matter management, practice management—in determining whether a legal AI tool becomes part of daily work or stays a peripheral assistant?
- If a legal AI tool is going to be used every day, what does it need to plug into? And if those connections are poor, what problems show up first?
- When a legal AI product tries to sell into large European legal organizations, what are the main friction points that slow adoption down? Is it mostly GDPR and data residency, or more about procurement culture, security review, and trust in the vendor?
- For European buyers, what matters most on data residency? Is it only that the data is stored in Europe, or also that the data is processed in Europe, the AI runs in Europe, and only European staff can access it?
- When those buyers do a security review of an AI vendor, what does that process usually look like in practice, and where do younger AI companies most often fail it?
- How do you see Legora and Harvey specifically on the data residency and trust dimension? Do they look structurally credible for European enterprise legal environments, or do they still create trust friction?
- On Harvey, how do you see the contrast? Where does Harvey look stronger than Legora, and where does it create more friction for a large European buyer?
- Where does Harvey look weaker operationally for Europe?
- If you put them side by side for a large European legal organization, what would push a buyer toward Legora versus Harvey—factoring in workflow fit, integration needs, data and compliance concerns, and how easy each is to trust in production?
- On the operational side specifically—data handling, security posture, deployment model, and service quality—where do you see the biggest real difference between them today?
- Do you think products like Legora and Harvey are building toward true platform positions, or are they still point solutions that could get absorbed into broader legal software systems of record?
- What would Legora or Harvey need to become really hard to remove inside a large European legal organization—the kind of thing that makes a legal AI product core infrastructure rather than a useful tool that could be swapped out later?
- How much of that defensibility depends on integration with the core legal stack—document management, matter management, contract systems, knowledge bases—versus the AI experience itself? If the AI doesn't connect deeply into those systems, can it still become core infrastructure, or does it stay a smart assistant on the edge?
- What has to be true before a large European legal organization will trust that shift from copilot to agent? Is the main blocker accuracy, governance, auditability, liability, or something else?
- What has to mature first—is it mainly auditability and human oversight, or is the harder problem proving reliable performance and clear accountability when an agent takes action?
Interview
To start, how would you describe your current vantage point on legal software and AI, and how closely are you watching what's happening with products like Legora and Harvey?
For the last decade, I've been deeply embedded in the legal tech ecosystem, from both a product development and a vendor evaluation standpoint. In legal AI specifically, I've been directly involved as a decision maker in evaluating and adopting multiple solutions, including platforms like Harvey, as well as newer entrants such as Legora, Luminance, Libra, Lexroom, and others. My perspective is hands-on: not just high-level strategy, but post-implementation work, integration into workflows, and measuring business impact.
What I've seen is a market evolving very quickly from AI as a tool to AI as a workflow orchestrator. Tools like Harvey have been strong in legal reasoning and drafting, while platforms like Legora are trying to differentiate more on structured workflows and usability across broader legal teams. Luminance, on the other hand, has historically been very strong on document review and due diligence, with a more deterministic AI approach.
In my role, I evaluate these tools across a few key dimensions: accuracy and hallucination control, especially in high-risk legal contexts; integration into existing legal workflows and the broader ecosystem; scalability across teams, from individual lawyers to enterprise deployments; data governance and compliance, which is absolutely critical in Europe given GDPR; and return on investment and monetization impact, which remains a big question mark around AI—meaning how these tools actually drive productivity gains or new revenue opportunities.
I've also overseen real deployments where AI capabilities have driven measurable impact: improved drafting efficiency, better knowledge retrieval, and increased adoption of premium features when AI is embedded correctly.
When you look at AI in legal software right now, do you frame it as a fundamental shift in what enterprise legal software needs to be, or more as a new layer being added on top of existing systems and workflows?
It's happening in both directions. In some cases, we embed AI features into existing legal practice management software. In other cases, we create an entirely new UI where the lawyer interacts through a natural language prompt: receiving information, sending actions to the e-court, or communicating with end customers.
Where does it become a true architectural shift rather than just an embedded feature inside existing practice or matter systems?
The shift isn't complete yet. We still have two directions: the legacy UI with a traditional software presentation, and the new prompt-based UI where users interact through natural language. The use cases are largely the same across both—legal drafting for contracts and memos, legal research and harmonization, client advisory support, knowledge management, content creation, due diligence support, contract review and anomaly detection, and automation of repetitive tasks. The use cases are consistent regardless of which architectural direction you take.
When a large legal organization decides whether to use a product in real production—not just a pilot—what matters most? Things like uptime, security, data handling, integration, service quality, and overall platform trust.
We are living in an era of "quick and dirty." Operational sustainability, scalability, and cybersecurity are becoming less prioritized, because the imperative is to get innovation to the end customer as fast as possible. I've spent more than thirty years in IT and software and grew up with the mindset that software needs to be robust, scalable, and secure. With this new revolution, the priority has flipped—software needs to quickly address customer needs through innovation.
When that "quick and dirty" mindset meets a large law firm or enterprise legal team, where does it break first in practice? Is it usually security review, integration burden, output reliability, procurement, or something else?
Security still matters—a data breach can destroy a company, especially in Europe given the scale of GDPR fines. But sometimes you genuinely don't have time to go deeper on those aspects, and the pressure to please the market and move fast is real.
What are the non-negotiables that still have to be true before a legal AI product can be trusted with sensitive legal work in Europe?
Accuracy. Setting the right guardrails to ensure answers have the right quality and reliability is one of the most critical requirements.
How do you evaluate accuracy in practice for legal AI? What do you actually look for around hallucination rates, citation quality, consistency, and whether the system can be trusted across real legal workflows?
We rely on subject matter experts within the organization who create golden answers and build the right learning path. The machine learning work is strong in this area—the golden answers set the right guardrails to avoid hallucination as much as possible. We work to lower the temperature of the model's answers, and then we run cycles where we analyze answer quality, accuracy levels, hallucination rates, and temperature settings. It's a continuous improvement effort, and it falls entirely to domain experts—not developers or engineers, but people with deep knowledge of the legal domain. That is the only way to train these kinds of systems, and it represents about seventy percent of the total effort. It is the only real path to reducing hallucination.
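To make that cycle concrete, here is a minimal sketch of a golden-answer harness. Everything in it is an assumption for illustration: `generate_answer` stands in for whatever system is under test, the golden pair is invented, and string similarity is a crude proxy for the expert grading described above.

```python
# Minimal sketch of a golden-answer evaluation cycle.
# `generate_answer` is a hypothetical placeholder for the legal AI system
# under test; string similarity stands in for grading that, in practice,
# is done by legal subject matter experts.
from difflib import SequenceMatcher

GOLDEN_SET = [
    {
        "prompt": "Summarize the termination clause in a standard NDA.",
        "golden": "Either party may terminate with 30 days' written notice; "
                  "confidentiality obligations survive termination.",
    },
    # ...more expert-written prompt/answer pairs
]

def generate_answer(prompt: str, temperature: float = 0.1) -> str:
    # Placeholder: replace with a call to the system being evaluated,
    # passing the lowered temperature described above.
    return "Either party may terminate with 30 days' written notice."

def similarity(a: str, b: str) -> float:
    # Crude textual proxy for expert grading, from 0.0 to 1.0.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def run_cycle(threshold: float = 0.8) -> list[dict]:
    # One cycle: answers that drift below the threshold are routed
    # back to domain experts as potential hallucinations.
    flagged = []
    for case in GOLDEN_SET:
        score = similarity(generate_answer(case["prompt"]), case["golden"])
        if score < threshold:
            flagged.append({"prompt": case["prompt"], "score": round(score, 2)})
    return flagged

if __name__ == "__main__":
    print(run_cycle())
```

In a real deployment, the similarity score would be replaced or validated by the domain experts who, per the interview, carry roughly seventy percent of the total effort.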
How important is integration with the systems lawyers already use—document management, matter management, practice management—in determining whether a legal AI tool becomes part of daily work or stays a peripheral assistant?
The main driver is the legal industry's appetite to serve their end customers in the shortest time frame possible. There is a huge demand to do more with less—less human effort. We see two main directions: legal practice management and legal information services. In legal practice management, the most relevant use cases include legal drafting for contracts, contract analysis and anomaly detection, client advisory support, and due diligence for M&A. In legal information services, it's about intelligent retrieval of information—accurate answers and accurate elaboration for the lawyers searching for it.
If a legal AI tool is going to be used every day, what does it need to plug into? And if those connections are poor, what problems show up first?
Integration with the ecosystem is the most important point. The fewer data silos, the more productivity you can extract from AI. Those silos live inside back-office systems—document management, CRM, ERP, contract management. An agentic AI that can drive information and read data across these different silos is where the real value comes from. And there's another dimension: the agentic AI should also be able to access customer-provided data—documents uploaded by the end customer. A lawyer's agentic AI that can pull together information from multiple data silos, including documents uploaded by the client, becomes far more efficient.
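As a rough structural illustration of that cross-silo access, the sketch below shows a connector-registry pattern in which an agent fans one query out across several silos, including client-uploaded documents. Every class and method name here is hypothetical, not any vendor's real API.

```python
# Hypothetical connector-registry pattern for cross-silo retrieval.
# All names are illustrative assumptions, not a vendor's actual API.
from typing import Protocol

class Connector(Protocol):
    name: str
    def search(self, query: str) -> list[str]: ...

class DocumentStore:
    name = "document_management"
    def search(self, query: str) -> list[str]:
        return [f"[DMS] matter files matching {query!r}"]  # stubbed result

class CRMConnector:
    name = "crm"
    def search(self, query: str) -> list[str]:
        return [f"[CRM] client records matching {query!r}"]  # stubbed result

class ClientUploads:
    # Documents uploaded by the end customer, per the point above.
    name = "client_uploads"
    def search(self, query: str) -> list[str]:
        return [f"[Uploads] client documents matching {query!r}"]  # stubbed result

def gather_context(query: str, connectors: list[Connector]) -> dict[str, list[str]]:
    # Fan one question out across every registered silo so the agent
    # can reason over a single merged context.
    return {c.name: c.search(query) for c in connectors}

if __name__ == "__main__":
    silos = [DocumentStore(), CRMConnector(), ClientUploads()]
    print(gather_context("Acme indemnification clause", silos))
```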
When a legal AI product tries to sell into large European legal organizations, what are the main friction points that slow adoption down? Is it mostly GDPR and data residency, or more about procurement culture, security review, and trust in the vendor?
It's a mix. Strict privacy law and GDPR compliance matter, and data residency is also important in Europe. When you think about the public cloud—Google, Azure, AWS—these are all North American companies. When a requirement states that data must remain in the region, it becomes difficult to embrace those models. Accuracy can be another significant friction point—you have to be able to demonstrate that your software is producing reliable, legally sound outputs.
For European buyers, what matters most on data residency? Is it only that the data is stored in Europe, or also that the data is processed in Europe, the AI runs in Europe, and only European staff can access it?
There are two levels. One is sovereignty in the simple sense, where the data resides in your region. In my experience, the requirement goes further than that: not just data residency, but the processing must also remain in the region.
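One way to picture the distinction is as two independent checks in a deployment policy: where the data sits and where inference actually runs. A minimal sketch, assuming invented region codes and endpoint fields rather than any vendor's actual configuration:

```python
# Illustrative residency-vs-processing check. Region codes and the
# Endpoint fields are assumptions for the sketch, not a real vendor API.
from dataclasses import dataclass

ALLOWED_REGIONS = {"eu-west-1", "eu-central-1"}  # example EU regions

@dataclass
class Endpoint:
    storage_region: str     # where documents and embeddings are stored
    processing_region: str  # where the model actually runs inference

def is_compliant(ep: Endpoint) -> bool:
    # Residency alone is not enough: storage AND processing must
    # both remain in-region, per the stricter requirement above.
    return (ep.storage_region in ALLOWED_REGIONS
            and ep.processing_region in ALLOWED_REGIONS)

# EU storage with US inference fails the stricter test:
assert not is_compliant(Endpoint("eu-west-1", "us-east-1"))
assert is_compliant(Endpoint("eu-west-1", "eu-central-1"))
```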
When those buyers do a security review of an AI vendor, what does that process usually look like in practice, and where do younger AI companies most often fail it?
Libra, for example, is not leveraging public cloud—they use a private cloud hosted in a Deutsche Telekom data center. Their value proposition is that data is stored and processed in Germany. That's a different situation from Legora, Harvey, and Luminance, which all have different arrangements in place. Lexroom, an Italian company growing quickly, still has a heavy dependency on public cloud.
How do you see Legora and Harvey specifically on the data residency and trust dimension? Do they look structurally credible for European enterprise legal environments, or do they still create trust friction?
Legora is a small company, very focused on workflow-centric legal AI. It's a platform that prioritizes team productivity and usability, and it has a strong UX—genuinely easy to use. The workflow structure is better than Harvey's in my view. Collaboration features are strong, and the ease of adoption across teams is a real strength. Legora is also very strong in end-to-end contract workflows—review, sign-off, the entire cycle—and in team collaboration around contract workflows, document management, and case management. It's also strong in reusable knowledge management.
The weaknesses are that Legora offers less in terms of legal reasoning compared to some competitors. It fits smaller ecosystems well, but when you have a complex enterprise environment—a large corporation with existing ERP and CRM systems—the integration is not fully there. It's still evolving on the enterprise side.
On Harvey, how do you see the contrast? Where does Harvey look stronger than Legora, and where does it create more friction for a large European buyer?
Harvey is very much an American company, and their value proposition is strongly centered on AI as a copilot for legal reasoning, drafting, and research—built on top-tier LLMs. In terms of reasoning, they are the best in class. Drafting quality is strong.
Where does Harvey look weaker operationally for Europe?
Harvey is expensive at scale. It offers less structured workflow than competitors, which is really key in the legal industry. There is also a higher risk of hallucination, which requires governance controls. Harvey does have a rapid innovation pace, given that it's an LLM-native solution, and it is gaining strong adoption among top law firms, particularly in the US; that will become a factor in European law firm decisions as well.
To summarize: Harvey is the best in class for intelligence and has the strongest solution for lawyer productivity overall, despite the deployment challenges and friction it creates in Europe. Legora is the best in class for workflow UX and is the strongest for team adoption at scale.
If you put them side by side for a large European legal organization, what would push a buyer toward Legora versus Harvey—factoring in workflow fit, integration needs, data and compliance concerns, and how easy each is to trust in production?
On regulatory and data privacy, Legora has lower friction because it has a European-native mindset and stronger alignment with European data expectations. Harvey carries a US-based perception—both in its architecture and in how European lawyers perceive it—and there are genuine concerns around data residency and compliance. There is also a strong bias toward trust and local fit with law firms: Legora aligns well with civil law systems, while Harvey is US-centric and still needs localization for European jurisdictions.
On integration and change management, the two are roughly at the same level. Lawyer resistance to workflow change and skepticism about AI risk are shared challenges for both platforms—that resistance is present across the industry regardless of the vendor.
On the business model, monetization from AI adoption is a question mark for both. Law firms face the challenge of increasing costs from AI adoption while needing to charge clients more—and that question hasn't been answered clearly by either vendor.
On hallucination, Legora carries slightly lower risk because of its more structured workflow. Harvey is more LLM-heavy and requires more machine learning tuning time. On procurement, Legora is a younger company and simpler to buy; Harvey is more established, can be expensive, and requires heavier governance. Overall, Legora is much better positioned to gain market share in Europe.
On the operational side specifically—data handling, security posture, deployment model, and service quality—where do you see the biggest real difference between them today?
The main difference is that Legora, being a smaller company, is able to provide clear insight into how their architecture is designed and where the data resides. You get a clearer picture of the system's structure and better observability when running the software in production. Harvey is quite opaque by comparison—it's unclear where the data stays, what the deployment model looks like, and whether fault tolerance is properly in place. That opacity can be a significant adoption barrier, especially for enterprise IT departments that require architectural transparency before approving a vendor.
Do you think products like Legora and Harvey are building toward true platform positions, or are they still point solutions that could get absorbed into broader legal software systems of record?
Both are evolving toward platform solutions. They are trying to stay independent because they want to maintain their identity and their own share of the legal market.
What would Legora or Harvey need to become really hard to remove inside a large European legal organization—the kind of thing that makes a legal AI product core infrastructure rather than a useful tool that could be swapped out later?
The more they know the customer—their habits, culture, sentiment, and day-to-day working patterns—the harder they become to replace. They need to develop intimacy with the end user and the end customer.
How much of that defensibility depends on integration with the core legal stack—document management, matter management, contract systems, knowledge bases—versus the AI experience itself? If the AI doesn't connect deeply into those systems, can it still become core infrastructure, or does it stay a smart assistant on the edge?
AI by design—LLMs, generative AI, agentic AI—learns and adapts continuously. That creates the right level of intimacy over time. Both paths are possible: a product can be a smart assistant or copilot, or it can evolve into agentic AI that takes decisions and actions with increasing autonomy in day-to-day work. We can shift from copilot to self-assisted, and from generative AI to agentic AI. That is what we are going to witness more and more in this field.
What has to be true before a large European legal organization will trust that shift from copilot to agent? Is the main blocker accuracy, governance, auditability, liability, or something else?
This is continuously evolving. My view is that we are moving toward a space where you will have a pool of agents—a mix of AI agents and human agents working together. It may also include situations where two different agentic AI systems exchange information and team up with each other. That is something we are going to see more and more of.
What has to mature first—is it mainly auditability and human oversight, or is the harder problem proving reliable performance and clear accountability when an agent takes action?
Agentic AI needs to become more and more accountable.
Disclaimers
This transcript is for information purposes only and does not constitute advice of any type or trade recommendation and should not form the basis of any investment decision. Sacra accepts no liability for the transcript or for any errors, omissions or inaccuracies in respect of it. The views of the experts expressed in the transcript are those of the experts and they are not endorsed by, nor do they represent the opinion of, Sacra. Sacra reserves all copyright and intellectual property rights in the transcript. Any modification, copying, displaying, distributing, transmitting, publishing, licensing, creating derivative works from, or selling any transcript is strictly prohibited.

