A collection of research materials on explainable AI/ML
CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
A collection of counterfactual explanation algorithms.
A list of research papers on explainable machine learning.
Personal coach to help you obtain desired AI decisions!
This is the official repository of the paper "CounterNet: End-to-End Training of Counterfactual Aware Predictions".
This project implements the paper "Robustness Implies Fairness in Causal Algorithmic Recourse" in R.
Recourse Explanation Library in JAX
Python implementation of the work "The Importance of Time in Causal Algorithmic Recourse".
Code associated with "Recourse For Humans", presented at the Participatory Approaches to Machine Learning workshop at ICML 2020.
Counterfactual Explanations with Probabilistic Guarantees on their Robustness to Model Change -- to appear at KDD'25
Banned from a site or organization? Account suspended? Censored? Why?
This is the official repository of the paper "RoCourseNet: Distributionally Robust Training of a Prediction Aware Recourse Model".