Harvey.ai Enhances AI Evaluation with BigLaw Bench: Arena



Luisa Crawford
Nov 07, 2025 12:03

Harvey.ai introduces BigLaw Bench: Arena, a new AI evaluation framework for legal tasks, offering insights into AI system performance through expert pairwise comparisons.




Harvey.ai has unveiled a novel AI evaluation framework named BigLaw Bench: Arena (BLB: Arena), designed to assess the effectiveness of AI systems in handling legal tasks. According to Harvey.ai, this approach allows for a comprehensive comparison of AI models, giving legal experts the opportunity to express their preferences through pairwise comparisons.

Innovative Evaluation Process

BLB: Arena operates by having legal professionals review outputs from different AI models on various legal tasks. Lawyers select their preferred outputs and provide explanations for their choices, enabling a nuanced understanding of each model's strengths. This process allows for a more flexible evaluation than traditional benchmarks, focusing on how well each system's output resonates with experienced lawyers.
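
To make the process concrete, here is a minimal sketch of how a single pairwise comparison might be recorded. The class and field names are illustrative assumptions for this article; Harvey's actual data model has not been published.

```python
# Illustrative sketch only: a record of one pairwise comparison in an
# Arena-style evaluation. Field names are assumptions, not Harvey's schema.
from dataclasses import dataclass

@dataclass
class PairwiseComparison:
    task_id: str      # the legal task both systems answered
    system_a: str     # first system under review, e.g. "harvey-assistant"
    system_b: str     # second system, e.g. a foundation-model baseline
    winner: str       # which output the reviewing lawyer preferred
    rationale: str    # the lawyer's free-text explanation for the choice

# Example with made-up data:
record = PairwiseComparison(
    task_id="contract-review-014",
    system_a="harvey-assistant",
    system_b="baseline-model",
    winner="harvey-assistant",
    rationale="Caught the indemnification carve-out the other draft missed.",
)
```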

Monthly Competitions

Each month, Harvey's major AI systems compete against foundation models, internal prototypes, and even human performance across hundreds of legal tasks. The outcomes are reviewed by multiple lawyers to capture diverse perspectives, and the resulting data is used to generate Elo scores that quantify the relative performance of each system.
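
The article does not detail how the Elo scores are computed, but a standard Elo update over pairwise preferences would look roughly like the sketch below. The K-factor and starting rating are common defaults, not values Harvey has published.

```python
# Illustrative sketch: deriving Elo ratings from pairwise preference data.
# The update rule is the standard Elo formula; parameters are assumptions.
from collections import defaultdict

def elo_ratings(matchups, k=32, start=1500):
    """matchups: iterable of (system_a, system_b, winner) tuples."""
    ratings = defaultdict(lambda: float(start))
    for a, b, winner in matchups:
        # Expected score for system a given the current rating gap.
        expected_a = 1.0 / (1.0 + 10 ** ((ratings[b] - ratings[a]) / 400))
        score_a = 1.0 if winner == a else 0.0
        ratings[a] += k * (score_a - expected_a)
        ratings[b] += k * ((1.0 - score_a) - (1.0 - expected_a))
    return dict(ratings)

# Example with made-up matchups:
print(elo_ratings([
    ("harvey-assistant", "baseline-model", "harvey-assistant"),
    ("harvey-assistant", "baseline-model", "harvey-assistant"),
    ("baseline-model", "harvey-assistant", "baseline-model"),
]))
```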

Qualitative Insights and Preference Drivers

Beyond quantitative scores, BLB: Arena collects qualitative feedback, providing insights into the reasons behind preferences. Feedback is categorized into preference drivers such as Alignment, Trust, Presentation, and Intelligence. This categorization helps transform unstructured feedback into actionable data, allowing Harvey.ai to improve its AI models based on specific user preferences.
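
As a rough illustration of that categorization step, the sketch below tallies labeled preference drivers per winning system. The four categories mirror those named above; the tallying logic itself is an assumption about how such feedback could be aggregated, not a description of Harvey's pipeline.

```python
# Illustrative sketch: aggregating categorized feedback into counts of
# preference drivers per system. Categories follow the article; the
# aggregation approach is an assumption.
from collections import Counter

DRIVERS = {"alignment", "trust", "presentation", "intelligence"}

def tally_drivers(records):
    """records: iterable of (winning_system, [driver, ...]) pairs."""
    per_system = {}
    for winner, drivers in records:
        labeled = [d for d in drivers if d in DRIVERS]
        per_system.setdefault(winner, Counter()).update(labeled)
    return per_system

# Example with made-up feedback labels:
print(tally_drivers([
    ("harvey-assistant", ["intelligence", "trust"]),
    ("harvey-assistant", ["intelligence"]),
    ("baseline-model", ["presentation"]),
]))
```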

Example Outcomes and System Improvements

In recent evaluations, the Harvey Assistant, built on GPT-5, demonstrated significant performance improvements, outscoring other models and confirming its readiness for production use. The preference driver data indicated that Intelligence was a key factor in human preference, highlighting the system's ability to handle complex legal problems effectively.

Strategic Use of BLB: Arena

The insights gained from BLB: Arena are crucial for Harvey.ai’s decision-making process regarding the selection and enhancement of AI systems. By considering lawyers’ preferences, the framework helps identify the most effective foundation models, contributing to the development of superior AI solutions for legal professionals.



Source: https://blockchain.news/news/harvey-ai-enhances-ai-evaluation-biglaw-bench-arena