In-house legal teams spend hundreds of hours a year reviewing contracts; for enterprise teams, that could balloon to thousands of hours of manual work. Savvy legal teams realize that generative AI could create real efficiencies for their teams by automating slow, laborious contract review processes, and as a result in-house legal teams’ use of AI has doubled in the past year. The AI-powered contract analysis tools market is now worth $4.3 billion and is projected to reach $12.06 billion by 2030.
The category of AI-powered contract analysis is now so crowded that it’s become difficult for many legal professionals to compare the performance of one tool with another. But they need a way to identify the right tool for their needs, and there are three critical questions lawyers want answered:
Ivo conducted a research project to compare its purpose-built legal AI contract review solution against general AI review and human attorney review. We asked three attorneys with recent experience either working at an Am Law 100 firm or serving as in-house counsel at technology companies to score redlines produced by Ivo, Claude for Word, and a practicing Special Counsel at an Am Law 25 firm. The redlines were created on 19 real, anonymized contracts spanning common enterprise agreements such as NDAs, MSAs, and DPAs. All identifying information was removed from each output, and the three judges scored every redline independently and blind.
Ivo’s performance was nearly indistinguishable from an accomplished human lawyer and significantly outperformed Claude on the five judging criteria—Issue Spotting, Surgical Redlining, Formatting Retention, Comments, and Judgment.
Overall scores:

- Human: 4.56
- Ivo: 4.52
- Claude: 3.50
Ivo outperformed Claude on every judging criterion; the largest score differences were in Surgical Redlining and Judgment. Ivo also excelled at redlining more complex agreements, such as those involving multiple parties or complicated commercial transactions.
The test also showed that Ivo’s output is broadly comparable to that of a senior practicing attorney at a highly regarded law firm: the two received very similar scores across the judging criteria. The difference was speed. The attorney completed the redlining tasks in 10 hours, whereas Ivo took, on average, 2 minutes and 45 seconds per contract.
For real-world contract review, specialist legal AI tools outperform general AI tools by a significant margin. Ivo’s team has invested heavily in perfecting its surgical redlining capabilities, so it redlines documents with the precision and accuracy of a lawyer. Notably, Ivo scored highest of all three reviewers, including the human attorney, in Judgment, the category assessing whether the selected outcome was the right one for the client.
Our goal with this study is to help answer a key question legal teams are asking themselves: are specialized legal tools worth the investment? For contract review tasks, specialist tools deliver better performance and, ultimately, a better result for the business.
Schedule a demo today.
