Code and model for "Adversarial Laser Beam: Effective Physical-World Attack to DNNs in a Blink" (CVPR 2021)
Natural phenomena can act as adversarial attackers: for example, a blinding glare once resulted in a fatal crash of a Tesla self-driving car. What if a beam of light could adversarially attack a DNN? Going further, what about using a beam of light, specifically a laser beam, as the weapon to perform attacks?
In this work, we demonstrate a simple yet striking attack that uses nothing more than a laser beam.
To this end, we propose a novel attack method called Adversarial Laser Beam (AdvLB), which manipulates a laser beam's physical parameters to perform adversarial attacks.
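To make the idea concrete, here is a minimal sketch (our illustration, not the authors' code) that additively renders a straight beam of light onto an image. The parameter names (`angle_deg`, `intercept`, `width`, `color`, `alpha`) are our assumptions about the kind of physical parameters AdvLB controls, e.g., the beam's layout, width, wavelength (mapped to an RGB color), and intensity:

```python
# Minimal sketch (not the authors' code): additively blend a straight
# "laser beam" into an image. Parameter names are illustrative assumptions.
import numpy as np

def render_laser_beam(image, angle_deg, intercept, width, color, alpha):
    """image: HxWx3 float array in [0, 1]. Returns a perturbed copy."""
    h, w_img = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w_img]
    t = np.deg2rad(angle_deg)
    # Perpendicular distance of each pixel from the beam's center line,
    # parameterized by an angle and an intercept.
    dist = np.abs(xs * np.sin(t) - ys * np.cos(t) + intercept)
    # Brightness falls off linearly with distance from the center line.
    beam = np.clip(1.0 - dist / width, 0.0, 1.0)[..., None] * np.asarray(color)
    return np.clip(image + alpha * beam, 0.0, 1.0)

# Example: a greenish beam (roughly a 532 nm laser) across a random image.
img = np.random.rand(224, 224, 3)
adv = render_laser_beam(img, angle_deg=45, intercept=10.0, width=8.0,
                        color=(0.1, 1.0, 0.2), alpha=0.7)
```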
- CUDA 10.2

Clone the repository, set up the conda environment, and run the test script:

```bash
git clone https://github.com/RjDuan/AdvLB
cd AdvLB
conda env create -f environment.yaml
conda activate advlb_env
python test.py --model resnet50 --dataset your_dataset
```
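For context, here is a hedged sketch of the kind of evaluation loop `test.py` might implement (hypothetical; the actual script may differ). It reuses the `render_laser_beam` helper from the sketch above and runs a simple random search over beam parameters per image, counting how often the model's prediction flips (the paper itself uses a greedy search strategy):

```python
# Hypothetical evaluation loop (not the repo's actual test.py): random
# search over laser-beam parameters per image, counting prediction flips.
import numpy as np
import torch
from torchvision import models, transforms
from torchvision.datasets import ImageFolder

model = models.resnet50(pretrained=True).eval()
to_tensor = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
dataset = ImageFolder("your_dataset", transform=to_tensor)  # ImageNet-style folder assumed

fooled = total = 0
with torch.no_grad():
    for x, _ in torch.utils.data.DataLoader(dataset, batch_size=1):
        clean_pred = model(normalize(x[0])[None]).argmax(1).item()
        img = x[0].permute(1, 2, 0).numpy()        # CHW tensor -> HWC array in [0, 1]
        for _ in range(100):                       # random-search budget per image
            adv = render_laser_beam(
                img,
                angle_deg=np.random.uniform(0, 180),
                intercept=np.random.uniform(-224, 224),
                width=np.random.uniform(2, 20),
                color=(0.1, 1.0, 0.2),             # greenish beam, ~532 nm
                alpha=np.random.uniform(0.3, 1.0))
            adv_t = torch.from_numpy(adv).permute(2, 0, 1).float()
            if model(normalize(adv_t)[None]).argmax(1).item() != clean_pred:
                fooled += 1                        # prediction flipped: attack succeeded
                break
        total += 1
print(f"Attack success rate: {fooled / total:.2%}")
```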
Besides revealing the potential threat posed by AdvLB, we also analyze the cause of the errors it induces and suggest an effective defense against the laser beam attack.
Similar to adversarial training, we progressively improve robustness by injecting laser beams as perturbations into the training data; see the paper for training details. Results are summarized below (a minimal augmentation sketch follows the table):

| Model | Std. acc. (%) | Attack succ. rate (%) |
|---|---|---|
| ResNet50 (org) | 78.19 | 95.10 |
| ResNet50 (adv. trained) | 78.40 | 77.20 |
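As noted above, here is a minimal sketch of the laser-beam data augmentation; it reflects our reading of the defense, not the authors' training code, and reuses the hypothetical `render_laser_beam` helper from the first sketch:

```python
# Hedged sketch of laser-beam data augmentation for defense training
# (our reading of the paper, not the authors' code).
import numpy as np
import torch

def laser_augment(batch, p=0.5):
    """batch: NxCxHxW float tensor in [0, 1]. Perturbs each image with prob p."""
    out = batch.clone()
    for i in range(out.size(0)):
        if np.random.rand() < p:
            img = out[i].permute(1, 2, 0).numpy()      # CHW -> HWC
            img = render_laser_beam(                   # from the first sketch
                img,
                angle_deg=np.random.uniform(0, 180),
                intercept=np.random.uniform(-img.shape[1], img.shape[1]),
                width=np.random.uniform(2, 20),
                color=np.random.rand(3),               # random beam color
                alpha=np.random.uniform(0.3, 1.0))
            out[i] = torch.from_numpy(img).permute(2, 0, 1).float()
    return out

# Inside a standard training loop:
#   inputs = laser_augment(inputs)
#   loss = criterion(model(inputs), targets)
```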
The weights of the adversarially trained ResNet50 can be downloaded here.

Test the defended model:

```bash
python test.py --model df_resnet50 --dataset your_dataset
```

The dataset we used in the paper can be downloaded here.
Questions are welcome via ranjieduan@gmail.com.
- The defense part was completed by Xiaofeng Mao.
If you find this work useful, please cite:

```bibtex
@inproceedings{duan2021adversarial,
  title={Adversarial Laser Beam: Effective Physical-World Attack to DNNs in a Blink},
  author={Duan, Ranjie and Mao, Xiaofeng and Qin, A Kai and Chen, Yuefeng and Ye, Shaokai and He, Yuan and Yang, Yun},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={16062--16071},
  year={2021}
}
```