This is the official source code for [LINe: Out-of-Distribution Detection by Leveraging Important Neurons], published at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2023.
Last update: 23/06/13. We updated README.md to clarify usage information.
Please download ImageNet-1k and place the training and validation data in `./datasets/ILSVRC-2012/train` and `./datasets/ILSVRC-2012/val`, respectively.
We have curated 4 OOD datasets from iNaturalist, SUN, Places, and Textures, and de-duplicated concepts that overlap with ImageNet-1k.
For iNaturalist, SUN, and Places, we have sampled 10,000 images from the selected concepts for each dataset, which can be downloaded via the following links:
```shell
wget http://pages.cs.wisc.edu/~huangrui/imagenet_ood_dataset/iNaturalist.tar.gz
wget http://pages.cs.wisc.edu/~huangrui/imagenet_ood_dataset/SUN.tar.gz
wget http://pages.cs.wisc.edu/~huangrui/imagenet_ood_dataset/Places.tar.gz
```
For Textures, we use the entire dataset, which can be downloaded from its original website.
Please put all downloaded OOD datasets into `./datasets/`.
The downloading process will start immediately upon running.
We provide links and instructions to download each dataset:
- SVHN: download it and place it in `./datasets/ood_datasets/svhn`. Then run `python select_svhn_data.py` to generate the test subset.
- Textures: download it and place it in `./datasets/ood_datasets/dtd`.
- Places365: download it and place it in `./datasets/ood_datasets/places365/test_subset`. We randomly sample 10,000 images from the original test dataset.
- LSUN-C: download it and place it in `./datasets/ood_datasets/LSUN`.
- LSUN-R: download it and place it in `./datasets/ood_datasets/LSUN_resize`.
- iSUN: download it and place it in `./datasets/ood_datasets/iSUN`.
For example, run the following commands in the root directory to download LSUN-C:
```shell
cd datasets/ood_datasets
wget https://www.dropbox.com/s/fhtsw1m3qxlwj6h/LSUN.tar.gz
tar -xvzf LSUN.tar.gz
```
For CIFAR, the model we used in the paper is already in the checkpoints folder.
For ImageNet, the model we used in the paper is the pre-trained ResNet-50 provided by PyTorch; it will be downloaded automatically upon running.
It is tested under Ubuntu Linux 20.04 with Python 3.8, and requires some packages to be installed:
LINe requires a precomputation step to calculate the Shapley value approximation. Run `./precompute.py`.
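Conceptually, this precomputation scores how much each penultimate-layer neuron contributes to the predicted class; a common first-order (Taylor) approximation of the Shapley value scores neuron j as activation_j × ∂logit/∂activation_j. Below is a minimal NumPy sketch of that idea on a toy two-layer network; the shapes, names, and scoring rule are illustrative assumptions, not the repository's actual `precompute.py`:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy network: a feature layer with ReLU, then a linear classifier head.
W_feat = rng.standard_normal((16, 8)); b_feat = rng.standard_normal(16)
W_head = rng.standard_normal((10, 16)); b_head = rng.standard_normal(10)

x = rng.standard_normal((4, 8))                      # small batch of inputs
a = np.maximum(W_feat @ x.T + b_feat[:, None], 0).T  # ReLU activations, (4, 16)
logits = a @ W_head.T + b_head                       # (4, 10)
pred = logits.argmax(axis=1)                         # predicted class per sample

# For a linear head, d(logit_c)/d(a_j) is simply W_head[c, j],
# so the first-order contribution of neuron j is a_j * W_head[c, j].
grad = W_head[pred]                                  # (4, 16)
contribution = (a * grad).mean(axis=0)               # per-neuron importance
topk = contribution.argsort()[::-1][:5]              # indices of top-5 neurons
```

Masking all but the top-k neurons by such a score is the kind of "important neuron" selection that LINe's pruning builds on.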
Run `./demo-imagenet.sh`.
Run `./demo-cifar.sh`.
This codebase is adapted from DICE (Sun et al., ECCV 2022): https://github.com/deeplearning-wisc/dice