
LEXam: Benchmarking Legal Reasoning on 340 Law Exams

ETH Zurich, University of Zurich, University of Lausanne,
Max Planck Institute for Research on Collective Goods,
Swiss Federal Supreme Court, Omnilex, University of St. Gallen, Niklaus.ai

Introduction

Long-form legal reasoning remains a key challenge for large language models (LLMs) in spite of recent advances in test-time scaling. We introduce LEXam, a novel benchmark derived from 340 law exams spanning 116 law school courses across a range of subjects and degree levels. The dataset comprises 4,886 law exam questions in English and German, including 2,841 long-form, open-ended questions and 2,045 multiple-choice questions. Besides reference answers, the open questions are also accompanied by explicit guidance outlining the expected legal reasoning approach, such as issue spotting, rule recall, or rule application. Our evaluation shows that both the open-ended and multiple-choice questions present significant challenges for current LLMs; in particular, models notably struggle with open questions that require structured, multi-step legal reasoning. Moreover, our results underscore the effectiveness of the dataset in differentiating between models of varying capability. Adopting an LLM-as-a-Judge paradigm with rigorous human expert validation, we demonstrate that model-generated reasoning steps can be evaluated consistently and accurately. Our evaluation setup provides a scalable method for assessing legal reasoning quality beyond simple accuracy metrics.
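Since this page does not reproduce the paper's exact grading prompts, the following is a minimal sketch of what an LLM-as-a-Judge setup for the open-ended questions could look like, assuming an OpenAI-compatible API. The rubric wording, model name, and 0-10 score scale are illustrative assumptions, not the authors' implementation.

```python
# Minimal LLM-as-a-Judge sketch for grading open-ended exam answers.
# Assumptions (not from the paper): an OpenAI-compatible endpoint,
# a 0-10 score scale, and an illustrative rubric prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

JUDGE_PROMPT = """You are a law professor grading an exam answer.
Question:
{question}

Reference answer:
{reference}

Expected reasoning approach (e.g. issue spotting, rule recall, rule application):
{guidance}

Candidate answer:
{candidate}

Compare the candidate answer against the reference answer and the expected
reasoning steps. Reply with a single integer score from 0 (no credit)
to 10 (full credit) on the first line, followed by a short justification."""


def judge_answer(question: str, reference: str, guidance: str,
                 candidate: str, model: str = "gpt-4o") -> int:
    """Return an integer score for one model-generated answer."""
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(
                question=question, reference=reference,
                guidance=guidance, candidate=candidate),
        }],
        temperature=0,  # deterministic grading
    )
    # Parse the score from the first line; a production harness would
    # validate the judge's output format instead of assuming it.
    first_line = response.choices[0].message.content.strip().splitlines()[0]
    return int(first_line)
```

In practice such judge scores would be validated against human expert grades, as the paper does, before being used to rank models.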

LEXam Dataset

Overview

Figure: The LEXam data generation pipeline.

Dataset Statistics

Figure: Statistics of the LEXam dataset.
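As a quick way to reproduce headline counts such as the per-language and per-type totals above, the sketch below loads the dataset with the Hugging Face `datasets` library. The dataset identifier, split name, and column names (`language`, `question_type`) are assumptions about how the release is organized; check the official release for the actual names.

```python
# Sketch: tally LEXam questions by language and question type.
# The dataset ID, split, and column names are hypothetical placeholders.
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("LEXam-Benchmark/LEXam", split="test")  # hypothetical ID/split

by_language = Counter(dataset["language"])      # e.g. English vs. German
by_type = Counter(dataset["question_type"])     # open-ended vs. multiple-choice

print(f"total questions in this split: {len(dataset)}")
print("by language:", dict(by_language))
print("by question type:", dict(by_type))
```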

Dataset Examples

Experiment Results

BibTeX


@article{fan2025lexam,
  title={LEXam: Benchmarking Legal Reasoning on 340 Law Exams},
  author={Fan, Yu and Ni, Jingwei and Merane, Jakob and Salimbeni, Etienne and Tian, Yang and Hermstrüwer, Yoan and Huang, Yinya and Akhtar, Mubashara and Geering, Florian and Dreyer, Oliver and Brunner, Daniel and Leippold, Markus and Sachan, Mrinmaya and Stremitzer, Alexander and Engel, Christoph and Ash, Elliott and Niklaus, Joel},
  journal={arXiv preprint arXiv:2505.12864},
  year={2025}
}