This is the implementation of "Beating Backdoor Attack at Its Own Game" (ICCV-23). [arXiv]
The defense framework injects a non-adversarial backdoor to suppress the effectiveness of backdoor attacks.
pip install -r requirements.txt
Run the following command for a quick demonstration.
bash quick_demo.sh badnets
We provide demonstrations for the "badnets" and "blend" attacks. The script generates a poisoned dataset saved under datasets/cifar10/ and trains a model with NAB on it. Detected samples and pseudo labels can be found in isolation/ and pseudo_label/, respectively.
All datasets should be organized as a dictionary saved under ./CIFAR10/${attack}/:
{"data": FloatTensor, "labels": LongTensor, "true_labels": LongTensor, "backdoor": BoolTensor, "target": int}
You can obtain a formatted CIFAR-10 dataset with scripts/create_cifar10.sh and poison it with scripts/poison.py:
bash scripts/create_cifar10.sh
python scripts/poison.py \
--data cifar10 --attack badnets \
--ratio 0.1 --target 0
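For intuition, a BadNets-style poisoning step typically stamps a small trigger patch onto a fraction of the training images and flips their labels to the target class. The sketch below illustrates this idea under those assumptions; it is not the exact logic of scripts/poison.py.

```python
import torch

def poison_badnets(data, labels, ratio=0.1, target=0, patch_size=3):
    """Sketch of BadNets-style poisoning: stamp a white patch in the
    bottom-right corner of a random subset and relabel it to the target class."""
    n = data.size(0)
    idx = torch.randperm(n)[: int(n * ratio)]
    backdoor = torch.zeros(n, dtype=torch.bool)
    backdoor[idx] = True
    data, labels = data.clone(), labels.clone()
    data[idx, :, -patch_size:, -patch_size:] = 1.0  # trigger patch
    labels[idx] = target                            # flip to the attack target
    return data, labels, backdoor
```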
We provide the implementation of LGA here:
python backdoor_detection_lga.py --attack badnets10
The results are stored under isolation/. You can also replace LGA with other methods (a rough isolation sketch follows the list below):
- SPECTRE
- Label-Noise Learning (DBD)
- Any other detection method that can isolate a small set of suspected samples.
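Whatever method is used, the detection step only needs to output a small set of suspected sample indices. Below is a rough loss-based sketch in the spirit of LGA, assuming a partially trained model and a dataloader that also yields sample indices; it is not the exact logic of backdoor_detection_lga.py.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def isolate_low_loss(model, loader, isolation_ratio=0.01, device="cuda"):
    """Sketch: rank training samples by loss and flag the lowest-loss ones,
    since backdoored samples tend to be fitted abnormally fast."""
    model.eval()
    losses, indices = [], []
    for x, y, idx in loader:  # assumes the loader also yields dataset indices
        loss = F.cross_entropy(model(x.to(device)), y.to(device), reduction="none")
        losses.append(loss.cpu())
        indices.append(idx)
    losses, indices = torch.cat(losses), torch.cat(indices)
    k = int(isolation_ratio * len(losses))
    return indices[torch.argsort(losses)[:k]]  # suspected sample indices
```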
We provide the implementation of VD:
python scripts/create_clean_lite.py
python pseudo_label_vd.py --attack badnets10
If you also experiment with a defense method based on self-supervised learning, such as DBD, we recommend the Nearest-Center (NC) approach from our paper for higher pseudo-label quality.
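For reference, Nearest-Center pseudo-labeling can be sketched as follows, assuming a feature extractor (e.g. from self-supervised pretraining) and a small trusted set for building class centers; this is illustrative and not the code of pseudo_label_vd.py.

```python
import torch

@torch.no_grad()
def nearest_center_pseudo_labels(feats_clean, labels_clean, feats_suspect, num_classes=10):
    """Sketch of Nearest-Center (NC) pseudo-labeling: build per-class feature
    centers from trusted samples and assign each suspected sample to the
    nearest center."""
    centers = torch.stack([
        feats_clean[labels_clean == c].mean(dim=0) for c in range(num_classes)
    ])                                              # [num_classes, feat_dim]
    dists = torch.cdist(feats_suspect, centers)     # [num_suspect, num_classes]
    return dists.argmin(dim=1)                      # pseudo labels
```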
NAB is a data preprocessing framework. To avoid extra storage overhead, we provide an on-the-fly implementation where detected samples are processed during each training update.
python train_nab.py \
--attack badnets10 \
--isolation ${detection_results} \
--pseudo-label ${pseudo_labels}
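Conceptually, the on-the-fly processing boils down to stamping the defensive (non-adversarial) trigger on detected samples and swapping in their pseudo labels at batch time. The sketch below assumes a top-left patch trigger, a set of isolated indices, and a pseudo-label lookup; the exact trigger, formats, and hooks in train_nab.py may differ.

```python
import torch

def apply_nab_batch(images, labels, indices, isolated_set, pseudo_labels, patch_size=3):
    """Sketch: for samples flagged by detection, stamp the defensive trigger and
    replace the label with its pseudo label before the training update."""
    images, labels = images.clone(), labels.clone()
    for i, idx in enumerate(indices.tolist()):
        if idx in isolated_set:                            # detected as suspicious
            images[i, :, :patch_size, :patch_size] = 0.0   # defensive trigger (assumed top-left patch)
            labels[i] = pseudo_labels[idx]                 # relabel with the pseudo label
    return images, labels
```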
You can augment NAB with a simple test data filtering technique:
python evaluate_filter.py \
--attack badnets10 --checkpoint ${checkpoint}
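Roughly, the filtering stamps the same defensive trigger on test inputs and flags samples whose prediction changes, since a flip suggests the original input carried an attack trigger. This is a sketch under that assumption; see evaluate_filter.py for the actual criterion.

```python
import torch

@torch.no_grad()
def filter_test_batch(model, images, patch_size=3):
    """Sketch: compare predictions with and without the defensive trigger and
    flag inputs whose prediction flips as likely backdoored."""
    stamped = images.clone()
    stamped[:, :, :patch_size, :patch_size] = 0.0   # defensive trigger (assumed, as in training)
    pred_plain = model(images).argmax(dim=1)
    pred_stamped = model(stamped).argmax(dim=1)
    return pred_stamped, pred_plain != pred_stamped  # predictions, suspicious mask
```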
Reference results: NAB with LGA and NC under the BadNets attack.
Please consider citing our paper if you find our research or this codebase helpful:
@inproceedings{liu2023beating,
title={Beating Backdoor Attack at Its Own Game},
author={Liu, Min and Sangiovanni-Vincentelli, Alberto and Yue, Xiangyu},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={4620--4629},
year={2023}
}