# Tutorials

This folder contains DIANNA tutorial notebooks. To install the dependencies for the tutorials, run (in the main dianna folder):

```bash
pip install .[notebooks]
```

🠊 For a general demonstration of DIANNA, click on the logo Logo_ER10 or run it in Colab: Open In Colab.

🠊 For tutorials on how to convert a Keras, PyTorch, Scikit-learn or TensorFlow model to ONNX, please see the conversion tutorials.
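
As a quick illustration, here is a minimal PyTorch-to-ONNX sketch (the toy model, input shape and file name are placeholders; the conversion tutorials cover the Keras, Scikit-learn and TensorFlow cases as well):

```python
import torch

# Toy stand-in model; replace with your own trained network.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 2))
model.eval()

# Export traces the model with one example input of the expected shape.
dummy_input = torch.randn(1, 1, 28, 28)
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=11)
```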

🠊 For specific XAI methods (explainers):

- Click on the explainer names to watch explanatory videos for the respective method.
- Click on the logos for direct access to a tutorial notebook. Run the tutorials directly in Google Colab by clicking on the Colab buttons.

## Datasets and Tasks

### Illustrative (Simple)

| Data modality | Dataset | Task | Logo |
|---|---|---|---|
| Images | Binary MNIST | Binary digit classification | mnist_zero_and_one_half_size |
| | Simple Geometric (circles and triangles) | Binary shape classification | SimpleGeometric Logo |
| | ImageNet | $1000$ classes natural images classification | ImageNet_autocrop |
| Text | Stanford Sentiment Treebank | Positive or negative movie review sentiment classification | nlp-logo_half_size |
| Time series | Coffee dataset | Binary classification of Robusta and Arabica coffee beans | Coffee Logo |
| | Weather dataset | Binary classification (warm/cold season) of temperature time series | Weather Logo |
| Tabular | Penguin dataset | $3$ penguin species (Adélie, Chinstrap, Gentoo) classification | Penguin Logo |
| | Weather dataset | Next-day sunshine hours prediction (regression) | Weather Logo |
### Scientific use-cases

| Data modality | Dataset | Task | Logo |
|---|---|---|---|
| Images | Simple Scientific (LeafSnap30) | $30$ tree species leaves classification | LeafSnap30 Logo |
| Text | EU-law statements | Regulatory or non-regulatory classification | nlp-logo_half_size |
| Time series | Fast Radio Burst (FRB) dataset (not publicly available) | Binary classification of FRB time series data: noise or a real FRB | FRB logo |
| Tabular | Land atmosphere dataset | Prediction of "latent heat flux" (regression). The random forest model is used as an emulator to replace the physical model STEMMUS_SCOPE to predict global maps of latent heat flux. | Atmosphere Logo |

## Models

The ONNX models used in the tutorials are available at dianna/models, or linked from their respective tutorial notebooks.

## Summary of all Tutorials

All tutorials can be accessed by clicking on the dataset & task logo in the tables below.

Explainer outputs for models trained on the datasets & tasks included in the dashboard are marked with the Streamlit Logo.

### Illustrative (Simple)

| Modality \ Method | RISE | LIME | KernelSHAP |
|---|---|---|---|
| Images | mnist_zero_and_one_half_size or Open In Colab Streamlit Logo | Streamlit Logo | mnist_zero_and_one_half_size or Open In Colab Streamlit Logo |
| | ImageNet_autocrop or Open In Colab | | SimpleGeometric Logo or Open In Colab |
| Text | nlp-logo_half_size or Open In Colab Streamlit Logo | nlp-logo_half_size or Open In Colab Streamlit Logo | |
| Time series | Weather Logo or Open In Colab Streamlit Logo | Weather Logo or Open In Colab Streamlit Logo | |
| | | Coffee Logo or Open In Colab | |
| Tabular | Penguin Logo or Open In Colab Streamlit Logo | Penguin Logo or Open In Colab Streamlit Logo | Penguin Logo or Open In Colab Streamlit Logo |
| | Streamlit Logo | Weather Logo or Open In Colab Streamlit Logo | Weather Logo or Open In Colab Streamlit Logo |

To learn more about how we approach the masking for time-series data, please read our Masking time-series for XAI blog post.

### Scientific use-cases

| Modality \ Method | RISE | LIME | KernelSHAP |
|---|---|---|---|
| Images | | LeafSnap30 Logo or Open In Colab | |
| Text | nlp-logo_half_size or Open In Colab Streamlit Logo | | |
| Time series | FRB logo or Open In Colab Streamlit Logo | | |
| Tabular | | | Atmosphere Logo or Open In Colab |

## IMPORTANT: Hyperparameters

### Settings per explainer

The XAI methods (explainers) are sensitive to the choice of their hyperparameters! This master's thesis researches this sensitivity and draws useful conclusions. The tables below give the default hyperparameters used in DIANNA for each explainer, as well as the choices made in some tutorials, together with their data modality (i - images, txt - text, ts - time series and tab - tabular). The main conclusions (🠊) from the thesis (on images and text) about the effect of the hyperparameters are also listed.

#### RISE

| Hyperparameter | Default value | ImageNet_autocrop (i) | mnist_zero_and_one_half_size (i) | nlp-logo_half_size (txt) | Weather Logo (ts) | FRB logo (ts) |
|---|---|---|---|---|---|---|
| $n_{masks}$ | $1000$ | default | $5000$ | default | $10000$ | $5000$ |
| $p_{keep}$ | optimized (i, txt), $0.5$ (ts) | $0.1$ | $0.1$ | default | $0.1$ | $0.1$ |
| $n_{features}$ | $8$ | $6$ | default | default | default | $16$ |

🠊 The most crucial parameter is $p_{keep}$. Lower values of $p_{keep}$ lead to more sensitive explanations (observed for both images and text). Easier classification tasks usually require a lower $p_{keep}$, as this causes more perturbation in the input and therefore a more distinct signal in the model predictions.

🠊 The feature resolution $n_{features}$ exhibited an optimum at a value of $6$. Higher values can offer a finer-grained result but require (far) more $n_{masks}$. This also depends on the scale of the phenomena in the input data that we want to take into account in the explanation.

🠊 Larger $n_{masks}$ will return more consistent results at the cost of computation time. If two identical runs yield (very) different results, these likely contain a lot of (or even mostly) noise, and a higher value for $n_{masks}$ should be used instead.
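
As a rough sketch of how these hyperparameters are passed in practice, the snippet below calls dianna.explain_image with RISE settings like those in the table (the model path and input are placeholders, and the keyword names n_masks, p_keep and feature_res are assumptions to verify against the DIANNA documentation):

```python
import numpy as np
import dianna

# Placeholder input; use a real image matching your model's input shape.
input_image = np.random.rand(28, 28, 1).astype(np.float32)

relevances = dianna.explain_image(
    "mnist_model.onnx",  # hypothetical path to an ONNX model
    input_image,
    method="RISE",
    labels=[0, 1],       # class indices to explain
    n_masks=5000,        # larger -> more consistent results, at higher cost
    p_keep=0.1,          # lower -> stronger perturbation, more distinct signal
    feature_res=6,       # mask resolution; n_features in the table above
)
```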

#### LIME

| Hyperparameter | Default value | LeafSnap30 Logo (i) | Weather Logo (ts) | Coffee Logo (ts) | nlp-logo_half_size (txt) |
|---|---|---|---|---|---|
| $n_{samples}$ | $5000$ | $1000$ | $10000$ | $500$ | $2000$ |
| Kernel width | $25$ | default | default | default | default |
| $n_{features}$ | $10$ | $30$ | default | default | $999$ |

🠊 The most crucial parameter is the kernel width: low values cause high sensitivity; however, that observation depended on the evaluation metric.
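
In the same hedged spirit, a sketch for LIME on a time series (placeholders throughout; the keyword name num_samples is an assumption based on the $n_{samples}$ row, to verify against the DIANNA documentation):

```python
import numpy as np
import dianna

# Placeholder univariate time series; use real data matching your model.
timeseries = np.random.rand(100, 1).astype(np.float32)

relevances = dianna.explain_timeseries(
    "weather_model.onnx",  # hypothetical path to an ONNX model
    timeseries,
    method="LIME",
    labels=[0, 1],
    num_samples=10000,     # n_samples from the table above
)
```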

#### KernelSHAP

| Hyperparameter | Default value | mnist_zero_and_one_half_size (i) | SimpleGeometric Logo (i) | Atmosphere Logo (tab) |
|---|---|---|---|---|
| $n_{samples}$ | auto/int | $1000$ | $2000$ | $136588$ |
| $n_{segments}$ | $100$ | $200$ | $200$ | default |
| $\sigma$ | $0$ | default | default | default |

🠊 The most crucial parameter is the number of super-pixels $n_{segments}$. Higher values led to higher sensitivity; however, that observation depended on the evaluation metric.

🠊 Regularization had only a marginal detrimental effect; the best results were obtained using no regularization (no smoothing, $\sigma = 0$) or least squares regression.
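
Analogously, a hedged KernelSHAP sketch for images (placeholders throughout; the keyword names nsamples, n_segments and sigma are assumptions based on the table, to verify against the DIANNA documentation):

```python
import numpy as np
import dianna

# Placeholder image; use a real input matching your model.
input_image = np.random.rand(64, 64, 3).astype(np.float32)

shap_values = dianna.explain_image(
    "geometric_model.onnx",  # hypothetical path to an ONNX model
    input_image,
    method="KernelSHAP",
    labels=[0, 1],
    nsamples=2000,           # n_samples from the table above
    n_segments=200,          # number of super-pixels
    sigma=0,                 # no smoothing, per the conclusion above
)
```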