Jurisprudence is an open-source project that automates the collection and distribution of French legal decisions. It leverages the Judilibre API provided by the Cour de Cassation to:
- Fetch rulings from major French courts (Cour de Cassation, Cour d'Appel, Tribunal Judiciaire)
- Process and convert the data into easily accessible formats
- Publish and version updated datasets on Hugging Face every few days
It aims to democratize access to legal information, enabling researchers, legal professionals, and the public to easily access and analyze French court decisions. Whether you're conducting legal research, developing AI models, or simply interested in French jurisprudence, this project provides a valuable, open resource for exploring the French legal landscape.
Jurisdiction | Jurisprudences | Oldest | Latest | Tokens | JSONL (gzipped) | Parquet |
---|---|---|---|---|---|---|
Cour d'Appel | 398,207 | 1996-03-25 | 2024-10-29 | 1,989,416,125 | Download (1.74 GB) | Download (2.91 GB) |
Tribunal Judiciaire | 86,266 | 2023-12-14 | 2024-10-29 | 304,283,113 | Download (275.60 MB) | Download (456.91 MB) |
Cour de Cassation | 537,471 | 1860-08-01 | 2024-10-25 | 1,107,915,336 | Download (932.26 MB) | Download (1.58 GB) |
Total | 1,021,944 | 1860-08-01 | 2024-10-29 | 3,401,614,574 | 2.92 GB | 4.93 GB |
Latest update date: 2024-11-04
Token counts are computed with the GPT-4 tiktoken tokenizer on the `text` column.
The up-to-date jurisprudence dataset is available at https://huggingface.co/datasets/antoinejeannot/jurisprudence in gzipped JSONL and Parquet formats.
This allows you to easily fetch, query, process and index all decisions in the blink of an eye!
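If you prefer to avoid third-party dependencies, the gzipped JSONL files can be read with the standard library alone. A minimal sketch, assuming you have already downloaded a file such as `cour_de_cassation.jsonl.gz` locally (the helper name `iter_jsonl_gz` is illustrative, not part of the project):

```python
import gzip
import json

def iter_jsonl_gz(path):
    """Yield one decision dict per line from a gzipped JSONL file."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

# Usage (after downloading the file from Hugging Face):
# for decision in iter_jsonl_gz("cour_de_cassation.jsonl.gz"):
#     process(decision)
```

Streaming line by line keeps memory usage flat even on the multi-gigabyte files.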
```python
# pip install datasets
from datasets import load_dataset

dataset = load_dataset("antoinejeannot/jurisprudence")
dataset.shape
# >> {'tribunal_judiciaire': (58986, 33),
#     'cour_d_appel': (378392, 33),
#     'cour_de_cassation': (534258, 33)}

# alternatively, you can load each jurisdiction separately
cour_d_appel = load_dataset("antoinejeannot/jurisprudence", "cour_d_appel")
tribunal_judiciaire = load_dataset("antoinejeannot/jurisprudence", "tribunal_judiciaire")
cour_de_cassation = load_dataset("antoinejeannot/jurisprudence", "cour_de_cassation")
```
Leveraging the datasets library allows you to easily feed data into PyTorch, TensorFlow, JAX, etc.
For analysis, polars, pandas or DuckDB are common choices and work just as well:
```python
url = "https://huggingface.co/datasets/antoinejeannot/jurisprudence/resolve/main/cour_de_cassation.parquet"  # or tribunal_judiciaire.parquet, cour_d_appel.parquet

# pip install polars
import polars as pl
df = pl.scan_parquet(url)  # lazy scan: rows are fetched only when the query runs

# pip install pandas
import pandas as pd
df = pd.read_parquet(url)

# pip install duckdb
import duckdb
table = duckdb.read_parquet(url)
```
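As a sketch of the kind of analysis this enables, here on a tiny in-memory stand-in for the real Parquet file, using only the `text` column documented above (the sample rows are invented placeholders, not real decisions):

```python
import pandas as pd

# Stand-in for pd.read_parquet(url); same shape as the real `text` column
df = pd.DataFrame({"text": [
    "Attendu que la cour d'appel a retenu que ...",
    "La Cour de cassation casse et annule l'arret ...",
]})

# Rough size statistics over the full text of each decision
df["n_chars"] = df["text"].str.len()
print(df["n_chars"].describe())
```

The same pattern scales to the full dataset once `df` is loaded from the Parquet URL.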
If you use this code in your research, please use the following BibTeX entry:
```bibtex
@misc{antoinejeannot2024,
  author = {Jeannot Antoine and {Cour de Cassation}},
  title = {Jurisprudence},
  year = {2024},
  howpublished = {\url{https://github.com/antoinejeannot/jurisprudence}},
  note = {Data source: API Judilibre, \url{https://www.data.gouv.fr/en/datasets/api-judilibre/}}
}
```
This project relies on the Judilibre API provided by the Cour de Cassation, which is made available under the Open License 2.0 (Licence Ouverte 2.0).
It scans the API every 3 days at midnight UTC and exports the data to Hugging Face in the formats above, without any transformation other than format conversion.