- Team Name: ClarifyAI (Team ID: Qual-230517)
- Rank 4

Check out the summary of our work in this report.
- Python >= 3.8.5
- PyTorch >= 1.11
- CUDA >= 11.3
- Other required packages listed in `requirements.txt`
# git clone this repository
git clone https://github.com/shoryasethia/RobustSIRR.git
cd RobustSIRR
# create new anaconda env
conda create -n sirr python=3.8 -y
conda activate sirr
# install python dependencies by pip
pip install -r requirements.txt
🌟 Download the pre-trained RobustSIRR models from [Pre-trained_RobustSIRR_BaiduYunDisk (pwd:sirr), Google Drive] to the `checkpoints` folder.
- 7,643 images of size 224 × 224 cropped from the Pascal VOC dataset (image IDs are provided in VOC2012_224_train_png.txt; crop the center 224 × 224 region to reproduce our results)
- 90 (89) real-world training images from the Berkeley real dataset
❗ Place the processed VOC2012 and real datasets in the `datasets` folder, and name them `VOC2012` and `real89`, respectively.
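The center crop mentioned above can be computed as follows. This is a minimal sketch, not code from the repo: the helper name is ours, and the file names in the comments are only illustrative.

```python
# Minimal sketch: compute the centered 224 x 224 crop box for a VOC image.
# The helper name is ours; pass the returned box to Pillow's Image.crop().

def center_crop_box(width, height, size=224):
    """Return (left, top, right, bottom) of the centered size x size region."""
    left = (width - size) // 2
    top = (height - size) // 2
    return (left, top, left + size, top + size)

# Example with a typical VOC image size (500 x 375):
box = center_crop_box(500, 375)  # (138, 75, 362, 299)
# With Pillow (not imported here), e.g.:
#   Image.open("some_voc_image.jpg").crop(box).save("some_voc_image.png")
```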
🌟 For convenience, you can directly download the prepared training datasets from [VOC2012_For_RobustSIRR_BaiduYunDisk (pwd:sirr), Google Drive] and [real89_For_RobustSIRR_BaiduYunDisk (pwd:sirr), Google Drive].
- 20 real testing images from the Berkeley real dataset
- Three sub-datasets, namely ‘Objects’, ‘Postcard’, and ‘Wild’, from the SIR2 dataset
- 20 testing images from the Nature dataset
❗ Place the processed datasets in the `datasets` folder, and name them `real20`, `SIR2`, and `nature20`, respectively.
🌟 For convenience, you can directly download the prepared testing datasets from [TestingDataset_For_RobustSIRR_BaiduYunDisk (pwd:sirr), Google Drive].
The hierarchical structure of all datasets is illustrated in the following diagram.
datasets
├── nature20
│   ├── blended
│   └── transmission_layer
├── real20
│   ├── blended
│   ├── real_test.txt
│   └── transmission_layer
├── real89
│   ├── blended
│   └── transmission_layer
├── SIR2
│   ├── PostcardDataset
│   │   ├── blended
│   │   ├── reflection
│   │   └── transmission_layer
│   ├── SolidObjectDataset
│   │   ├── blended
│   │   ├── reflection
│   │   └── transmission_layer
│   └── WildSceneDataset
│       ├── blended
│       ├── reflection
│       └── transmission_layer
└── VOC2012
    ├── blended
    ├── JPEGImages
    ├── reflection_layer
    ├── reflection_mask_layer
    ├── transmission_layer
    └── VOC_results_list.json
Note:
- `transmission_layer` is the ground truth (GT), `blended` is the input, and `reflection`/`reflection_layer` is the reflection component
- For the SIR^2 dataset, we only standardize the folder structure
- For adv. training:
# To Be Released
- For clean images training:
# ours_cvpr
CUDA_VISIBLE_DEVICES=0 python train.py --name ours --gpu_id 0 --no-verbose --display_id -1 --batchSize 4
# ours_wo_aid
CUDA_VISIBLE_DEVICES=0 python train.py --name ours_wo_aid --gpu_id 0 --no-verbose --display_id -1 --batchSize 4 --wo_aid
# ours_wo_aff
CUDA_VISIBLE_DEVICES=0 python train.py --name ours_wo_aff --gpu_id 0 --no-verbose --display_id -1 --batchSize 4 --wo_aff
# ours_wo_scm
CUDA_VISIBLE_DEVICES=0 python train.py --name ours_wo_scm --gpu_id 0 --no-verbose --display_id -1 --batchSize 4 --wo_scm
Note:
- Check `options/robustsirr/train_options.py` for more training options.
CUDA_VISIBLE_DEVICES=0 python test.py --name ours_cvpr --hyper --gpu_ids 0 -r --no-verbose --save_gt --save_attack --save_results
# To be released due to confidentiality concerns.
# Alternatively, you can refer to https://github.com/yuyi-sd/Robust_Rain_Removal
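Until the attack code is released, the general idea follows the standard PGD (projected gradient descent) formulation used for adversarial training of restoration networks, as in the Robust_Rain_Removal repository referenced above. Below is a toy, pure-Python sketch of an L∞ PGD attack on a hand-differentiable stand-in model; every name here is ours, and none of it comes from the RobustSIRR code.

```python
# Toy sketch of an L_inf PGD attack: ascend the loss along the gradient sign,
# then project the perturbation back into the eps-ball after every step.
# A 1-D linear "model" with a hand-derived gradient stands in for the network.

def pgd_attack(x, y, w=2.0, eps=0.1, alpha=0.02, steps=10):
    """Maximize the MSE of model(x) = w * x against target y by perturbing x
    within an L_inf ball of radius eps."""
    delta = 0.0
    for _ in range(steps):
        pred = w * (x + delta)
        grad = 2.0 * (pred - y) * w              # d(loss)/d(delta), by hand
        step = alpha if grad >= 0 else -alpha    # alpha * sign(gradient)
        delta = max(-eps, min(eps, delta + step))  # project into the ball
    return delta

x, y = 1.0, 2.0                          # clean input and ground-truth target
delta = pgd_attack(x, y)
clean_loss = (2.0 * x - y) ** 2          # loss on the clean input
adv_loss = (2.0 * (x + delta) - y) ** 2  # loss on the attacked input
```

In the real setting, `x` is an image tensor, the gradient comes from autograd, and the loss is the restoration objective (e.g. MSE or LPIPS against the transmission layer), but the ascend-and-project loop is the same.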
☝️ Comparison of the PSNR values with respect to perturbation levels
☝️ Comparison of different training strategies on three benchmark datasets. ‘w/’ and ‘w/o adv.’ denote training with and without adversarial images. MSE and LPIPS denote the corresponding attacks over full regions. ↓ and ↑ indicate performance degradation and improvement relative to the original prediction on clean inputs.