At Legalweek this week, the annual gathering of legal professionals and technology vendors, a palpable tension hung in the air. While the event showcased the latest advancements in artificial intelligence for the legal industry, a recurring question echoed through the halls and networking events: How do we actually get lawyers to use these tools? The conference, held at New York City’s Javits Center, served as a stark reminder that the promise of AI-driven efficiency in law firms faces significant hurdles, despite billions of dollars flowing into legal tech startups and mounting pressure from clients and investors.
The Growing Pressure: Billions at Stake in Legal AI Adoption
The legal industry, traditionally known for its cautious and deliberate approach, is facing a rapidly accelerating shift. Companies across various sectors, including tech giants like Atlassian and Block, have recently implemented layoffs citing AI-driven efficiency gains. This trend signals a broader economic reality: AI’s potential to boost productivity can also lead to workforce reductions. The legal sector, with its significant overhead and billable hour structure, is not immune to this pressure. Billions of dollars are now invested in the expectation that AI will transform legal workflows, but the reality on the ground is more complex.
Legalweek: A Progress Report on AI Integration in Law Firms
Legalweek has become an annual benchmark for assessing the progress of generative artificial intelligence within the legal industry. This year’s event featured noticeably more polished demonstrations of AI capabilities, with vendors showcasing “AI agents” designed to automate tasks previously handled by junior associates, such as contract drafting, review, and complex workflow management. However, beneath the surface of slick demos and optimistic pitches, a quieter signal emerged: the adoption rate isn't keeping pace with the hype. During a Microsoft presentation, Steven Abrahams, who leads product work on Copilot integrations, observed a surprisingly low level of engagement when he asked attendees how many used software to automate contract review, a prime use case for large language models.
Client Expectations and the Risk of Falling Behind
The message from industry leaders was clear: inaction carries significant consequences. “Revenue is at risk,” warned Emma Dowden, Burges Salmon's chief operating officer, during a panel hosted by Harvey, a prominent legal AI startup valued at $8 billion. Robert Clark, an in-house lawyer at the brand agency Dentsu, added a pointed observation: firms that tout innovation while resisting AI adoption risk being perceived as disingenuous. He stated it was “cringeworthy” to hear firms appoint a new chief innovation officer while simultaneously hesitating to invest in AI platforms.
The Root of the Resistance: Fear, Training, and Career Concerns
Lawyers attending Legalweek offered various explanations for the slow adoption rate. Dowden attributed the hesitation to fear: fear of job displacement, of eroding billable hours, and of being unable to confidently use and defend AI tools to clients. She suggested that partners might be open to AI's benefits, but only if other practice areas within the firm are willing to test the waters first. Sarah Eagen, who leads learning and development at the megafirm Cleary Gottlieb, noted that even younger lawyers, often perceived as early adopters, can be resistant, as AI threatens the traditional career path built on entry-level work.
The Training Gap: A Critical Barrier to AI Adoption
The lack of adequate training emerged as a central obstacle. Ian Nelson, who runs Hotshot, a company specializing in legal tech training programs, highlighted the common mistake of waiting to provide training until after a firm has already purchased AI software. He argued that this is short-sighted, as some lawyers will inevitably experiment with chatbot tools regardless, potentially leading to misuse and security risks. Furthermore, existing training programs often focus too narrowly on specific tools, neglecting the broader context of AI risks and a firm’s internal policies.
The Malpractice Question: Is Resistance to AI a Legal Risk?
As AI capabilities continue to advance, a more provocative question began to surface at Legalweek: at what point does resistance to adopting AI tools constitute professional malpractice? Michael Pierson, a corporate lawyer and co-founder of Pierson Ferdinand, a firm that heavily utilizes AI, raised this issue during the Harvey panel. Pierson Ferdinand operates without any associates, relying instead on AI-powered tools to deliver legal services. He questioned whether failing to leverage AI in the daily delivery of legal services could be considered a breach of duty to clients.
Key Takeaways
- Lawyers are hesitant to adopt AI tools due to concerns about job security, the impact on billable hours, and a lack of understanding of the technology.
- Inadequate training is a significant barrier to AI adoption, with many firms delaying training until after purchasing software.
- The legal industry faces increasing pressure from clients and investors to embrace AI, and resistance could potentially be viewed as professional malpractice.
- Startups like Harvey and Legora are driving innovation in legal AI, but their success hinges on widespread adoption by law firms.
Frequently Asked Questions
- Why are lawyers hesitant to adopt AI?
- Lawyers express concerns about job displacement, the potential impact on their billable hours, and a lack of confidence in their ability to effectively use and defend AI tools to clients. There's also a general cautiousness inherent in the legal profession.
- What role does training play in AI adoption within law firms?
- Adequate training is crucial. Many firms delay training until after purchasing AI software, which is a mistake. Effective training should cover not only tool-specific functionalities but also the broader context of AI risks and a firm’s internal policies.
- Could resisting AI adoption be considered malpractice?
- This is an emerging question. As AI capabilities advance, failing to leverage these tools to provide efficient and cost-effective legal services could potentially be viewed as a breach of duty to clients, though this is not yet a settled legal principle.