FairVis is a visual analytics system that allows users to audit their classification models for intersectional bias. Users can generate subgroups of their data and investigate if a model is underperforming for certain populations.
- Try a live demo!
- Read the full paper.
- Cite this work and more.
FairVis: Visual Analytics for Discovering Intersectional Bias in Machine Learning
Ángel Alexander Cabrera, Will Epperson, Fred Hohman, Minsuk Kahng, Jamie Morgenstern, Duen Horng (Polo) Chau
IEEE Conference on Visual Analytics Science and Technology (VAST). 2019.
In this example we show how FairVis can be used on the COMPAS dataset to find significant disparities in false positive rates between African American and Caucasian defendants that are not supported by base rates. The primary components of the system are the following:
A. View distributions of the dataset's features and generate subgroups.
B. Visualize subgroup performance in relation to selected metrics.
C. Compare selected subgroups and view details.
D. Find suggested underperforming subgroups and similar groups.
For more details about the system and its use cases, see the IEEE VAST paper.
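As a rough, FairVis-independent illustration of the false positive rate comparison in this example, the snippet below computes per-subgroup false positive rates on a COMPAS-style table. The file path and column names (`race`, `two_year_recid`, `prediction`) are assumptions about the data layout, not FairVis requirements.

```python
# Illustrative only: false positive rate for each racial subgroup.
# File path and column names are assumptions, not part of FairVis.
import pandas as pd

df = pd.read_csv("compas.csv")  # hypothetical COMPAS-style CSV

# Among defendants who did not reoffend (actual negatives), the mean of a
# 0/1 prediction is that subgroup's false positive rate.
negatives = df[df["two_year_recid"] == 0]
print(negatives.groupby("race")["prediction"].mean())
```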
Clone the repository:

`git clone https://github.com/poloclub/FairVis.git`

Then initialize the React project by running:

`npm install`

Run the server with:

`npm start`
1. Run a model on your data and create a new file with the last two columns being the output class (between 0 and 1) of the model and the ground truth labels (0 or 1). Note that only binary classification is currently supported. Examples of models in Jupyter Notebook format can be found in `./models`, and a minimal sketch of this step appears after this list.
2. Run the `preprocess.py` script on your classified data, e.g. `python3 preprocess.py my-data-with-classes.csv`. Additional options for the helper function can be found using `python3 preprocess.py -h`.
3. Save the processed file to `./src/data/`.
4. Import the file in the `src/components/Welcome.js` component.
5. Add a new row to the table in `Welcome.js` around line 140 in the form of the other datasets.
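The following is a minimal, hypothetical sketch of step 1. The input file, label column, and scikit-learn classifier are assumptions for illustration and are not part of FairVis; any binary classifier works as long as the last two columns of the output file are the model's score and the ground truth label.

```python
# Hypothetical example of preparing a file for preprocess.py; the file names,
# the "label" column, and the choice of classifier are assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("my-data.csv")                     # your raw dataset
X, y = df.drop(columns=["label"]), df["label"]      # assumed label column
                                                    # (features assumed numeric)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Append the two required columns *last*: the model's output score
# (between 0 and 1), then the ground truth label (0 or 1).
out = X.copy()
out["model_output"] = model.predict_proba(X)[:, 1]
out["ground_truth"] = y.values

out.to_csv("my-data-with-classes.csv", index=False)
```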
| Name | Affiliation |
|---|---|
| Ángel Alexander Cabrera | Georgia Tech |
| Will Epperson | Georgia Tech |
| Fred Hohman | Georgia Tech |
| Minsuk Kahng | Georgia Tech |
| Jamie Morgenstern | Georgia Tech |
| Duen Horng (Polo) Chau | Georgia Tech |
@inproceedings{cabrera2019fairvis,
title={FairVis: Visual Analytics for Discovering Intersectional Bias in Machine Learning},
  author={Cabrera, {\'A}ngel Alexander and Epperson, Will and Hohman, Fred and Kahng, Minsuk and Morgenstern, Jamie and Chau, Duen Horng},
booktitle={2019 IEEE Conference on Visual Analytics Science and Technology (VAST)},
pages={46-56},
year={2019},
publisher={IEEE},
doi={10.1109/VAST47406.2019.8986948},
url={https://cabreraalex.com/#/paper/fairvis}
}
MIT License. See `LICENSE.md`.