# UranusDet


## Abstract

This is a TensorFlow-based rotation detection benchmark, also called UranusDet, developed by YangXue.

Papers and code related to remote sensing/aerial image detection: DOTA-DOAI.


## Latest Performance

More results and trained models are available in MODEL_ZOO.md.

### DOTA1.0 (Task1)

| Model | Neck | Backbone | Training/test dataset | mAP | Model Link | Anchor | Angle Pred. | Reg. Loss | Angle Range | Data Augmentation | Configs |
|---|---|---|---|---|---|---|---|---|---|---|---|
| RetinaNet-H | FPN | ResNet50_v1d 600->800 | DOTA1.0 trainval/test | 64.17 | Baidu Drive (j5l0) | H | Reg. | smooth L1 | 180 | × | cfgs_res50_dota_v15.py |
| RetinaNet-H | FPN | ResNet50_v1d 600->800 | DOTA1.0 trainval/test | 65.73 | Baidu Drive (jum2) | H | Reg. | smooth L1 | 90 | × | cfgs_res50_dota_v4.py |
| RSDet | FPN | ResNet50_v1d 600->800 | DOTA1.0 trainval/test | 67.27 | Baidu Drive (6nt5) | H | Reg. | modulated loss | - | × | cfgs_res50_dota_rsdet_v2.py |
| CSL | FPN | ResNet50_v1d 600->800 | DOTA1.0 trainval/test | 67.38 | Baidu Drive (g3wt) | H | Cls.: Gaussian (r=1, w=10) | smooth L1 | 180 | × | cfgs_res50_dota_v45.py |
| DCL | FPN | ResNet50_v1d 600->800 | DOTA1.0 trainval/test | 67.39 | Baidu Drive (p9tu) | H | Cls.: BCL (w=180/256) | smooth L1 | 180 | × | cfgs_res50_dota_dcl_v5.py |
| R3Det | FPN | ResNet50_v1d 600->800 | DOTA1.0 trainval/test | 70.66 | Baidu Drive (30lt) | H->R | Reg. | smooth L1 | 90 | × | cfgs_res50_dota_r3det_v1.py |
| R3Det-DCL | FPN | ResNet50_v1d 600->800 | DOTA1.0 trainval/test | 71.21 | Baidu Drive (jueq) | H->R | Cls.: BCL (w=180/256) | iou-smooth L1 | 90->180 | × | cfgs_res50_dota_r3det_dcl_v1.py |
| R2CNN (Faster-RCNN) | FPN | ResNet50_v1d 600->800 | DOTA1.0 trainval/test | 72.27 | Baidu Drive (wt2b) | H->R | Reg. | smooth L1 | 90 | × | cfgs_res50_dota_v1.py |

## My Development Environment

Docker image: `docker pull yangxue2docker/yx-tf-det:tensorflow1.13.1-cuda10-gpu-py3`

  1. python 3.5 (Anaconda recommended)
  2. cuda 10.0
  3. opencv (cv2)
  4. tfplot 0.2.0 (optional)
  5. tensorflow-gpu 1.13
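
A quick sanity check that the TensorFlow/CUDA pairing above is wired up correctly (a minimal sketch using the TF 1.x API):

```
# Verify the TF 1.13 / CUDA 10 environment before compiling the repo.
import tensorflow as tf

print(tf.__version__)              # expect 1.13.x
print(tf.test.is_gpu_available())  # True if the GPU and CUDA 10 are visible
```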

## Download Model

### Pretrained weights

  1. Please download the resnet50_v1, resnet101_v1, resnet152_v1, efficientnet, mobilenet_v2, and darknet53 (Baidu Drive (1jg2), Google Drive) models pre-trained on ImageNet, and put them in $PATH_ROOT/dataloader/pretrained_weights.
  2. (Recommended for this repo) Alternatively, you can use better backbones (resnet_v1d); refer to gluon2TF.

### Trained weights

  1. Please download the models trained by this project and put them in $PATH_ROOT/output/pretained_weights.

## Compile

```
cd $PATH_ROOT/libs/utils/cython_utils
python setup.py build_ext --inplace    # or: make

cd $PATH_ROOT/libs/utils/
python setup.py build_ext --inplace
```

## Train

  1. If you want to train on your own dataset, please note:

    (1) Select the detector and dataset you want to use, and mark them as #DETECTOR and #DATASET (e.g. #DETECTOR=retinanet and #DATASET=DOTA)
    (2) Modify the parameters (such as CLASS_NUM, DATASET_NAME, and VERSION) in $PATH_ROOT/libs/configs/#DATASET/#DETECTOR/cfgs_xxx.py (see the sketch below)
    (3) Copy $PATH_ROOT/libs/configs/#DATASET/#DETECTOR/cfgs_xxx.py to $PATH_ROOT/libs/configs/cfgs.py
    (4) Add the category information to $PATH_ROOT/libs/label_name_dict/label_dict.py (see the sketch below)
    (5) Add data_name to $PATH_ROOT/data/io/read_tfrecord.py
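
    For orientation, here is a minimal sketch of what steps (2) and (4) might look like. CLASS_NUM, DATASET_NAME, and VERSION come from the config file above; everything else (the values, the example names, the dict structure) is an assumption for illustration, not the repo's actual code:

    # libs/configs/cfgs.py (illustrative values only)
    DATASET_NAME = 'DOTA'               # must match an entry in label_dict.py
    VERSION = 'retinanet_dota_example'  # hypothetical experiment tag
    CLASS_NUM = 15                      # DOTA1.0 has 15 categories

    # libs/label_name_dict/label_dict.py (assumed structure)
    # Map class names to integer ids, reserving 0 for the background class.
    class_names = ['plane', 'ship', 'storage-tank']  # truncated example
    NAME_LABEL_MAP = {'back_ground': 0}
    for i, name in enumerate(class_names):
        NAME_LABEL_MAP[name] = i + 1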
    
  2. Make tfrecords
    If the images are very large (as in the DOTA dataset), they need to be cropped first. Take DOTA as an example:

    cd $PATH_ROOT/dataloader/dataset/DOTA
    python data_crop.py
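
    Cropping here follows the usual sliding-window idea: each large image is cut into overlapping patches so that objects lying on a cut line still appear whole in at least one patch. A minimal sketch of that idea (the window size, overlap, and function itself are illustrative assumptions, not data_crop.py's actual interface):

    # Illustrative sliding-window cropping, not the repo's implementation.
    def crop_image(img, win=800, overlap=200):
        """Yield (x_off, y_off, patch) tiles covering an HxWxC array."""
        h, w = img.shape[:2]
        stride = win - overlap
        for y in range(0, max(h - overlap, 1), stride):
            for x in range(0, max(w - overlap, 1), stride):
                # Edge tiles may be smaller than win; they are still emitted.
                yield x, y, img[y:y + win, x:x + win]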
    

    If the images do not need to be cropped, just convert the annotation files into XML format, referring to example.xml (a hedged conversion sketch follows the command below).

    cd $PATH_ROOT/dataloader/dataset/  
    python convert_data_to_tfrecord.py --VOC_dir='/PATH/TO/DOTA/' 
                                       --xml_dir='labeltxt'
                                       --image_dir='images'
                                       --save_name='train' 
                                       --img_format='.png' 
                                       --dataset='DOTA'
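
    If your labels are in some other format, a conversion along these lines may help. This is a hypothetical sketch of writing a VOC-style file with quadrilateral boxes; check example.xml for the exact tag names this repo expects:

    # Hypothetical VOC-style XML writer; verify the tag names against
    # example.xml before relying on it.
    import xml.etree.ElementTree as ET

    def write_xml(filename, width, height, boxes, out_path):
        """boxes: list of (name, x0, y0, x1, y1, x2, y2, x3, y3) quads."""
        root = ET.Element('annotation')
        ET.SubElement(root, 'filename').text = filename
        size = ET.SubElement(root, 'size')
        ET.SubElement(size, 'width').text = str(width)
        ET.SubElement(size, 'height').text = str(height)
        for name, *pts in boxes:
            obj = ET.SubElement(root, 'object')
            ET.SubElement(obj, 'name').text = name
            box = ET.SubElement(obj, 'bndbox')
            for tag, val in zip(('x0', 'y0', 'x1', 'y1',
                                 'x2', 'y2', 'x3', 'y3'), pts):
                ET.SubElement(box, tag).text = str(val)
        ET.ElementTree(root).write(out_path)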
    
  3. Start training

    cd $PATH_ROOT/tools/#DETECTOR
    python train.py
    

## Test

  1. For large-scale images, take the DOTA dataset as an example (the output files and visualizations are written to $PATH_ROOT/tools/#DETECTOR/test_dota/VERSION):

    cd $PATH_ROOT/tools/#DETECTOR
    python test_dota_ms.py --test_dir='/PATH/TO/IMAGES/'  
                           --gpus=0,1,2,3,4,5,6,7  
                           -ms (multi-scale testing, optional)
                           -s (visualization, optional)
    

    Note: to make resuming from a breakpoint convenient, the result file is opened in 'a+' (append) mode. If a model with the same #VERSION needs to be tested again, the original test results must be deleted first.
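
    Each run therefore appends detections rather than overwriting them, so rerunning the same #VERSION would mix old and new results. A minimal sketch of the pattern (the file name is illustrative):

    # 'a+' lets an interrupted test resume where it stopped, but rerunning
    # the same VERSION appends duplicate lines -- delete the previous
    # result file before a fresh run.
    with open('test_dota/VERSION/det_results.txt', 'a+') as f:
        f.write('P0001 0.92 104 220 341 208 350 260 110 273\n')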

  2. For small-scale images, take the HRSC2016 dataset as an example:

    cd $PATH_ROOT/tools/#DETECTOR
    python test_hrsc2016_ms.py --test_dir='/PATH/TO/IMAGES/'  
                               --gpu=0
                               --image_ext='bmp'
                               --test_annotation_path='/PATH/TO/ANNOTATIONS'
                               -s (visualization, optional)
    

## Tensorboard

    cd $PATH_ROOT/output/summary
    tensorboard --logdir=.


## Citation

If you find our code useful for your research, please consider citing it.

@article{yang2020dense,
    title={Dense Label Encoding for Boundary Discontinuity Free Rotation Detection},
    author={Yang, Xue and Hou, Liping and Zhou, Yue and Wang, Wentao and Yan, Junchi},
    journal={arXiv preprint arXiv:2011.09670},
    year={2020}
}

@inproceedings{yang2020arbitrary,
    title={Arbitrary-Oriented Object Detection with Circular Smooth Label},
    author={Yang, Xue and Yan, Junchi},
    booktitle={European Conference on Computer Vision (ECCV)},
    year={2020},
    organization={Springer}
}

@article{yang2019r3det,
    title={R3Det: Refined Single-Stage Detector with Feature Refinement for Rotating Object},
    author={Yang, Xue and Liu, Qingqing and Yan, Junchi and Li, Ang and Zhang, Zhiqiang and Yu, Gang},
    journal={arXiv preprint arXiv:1908.05612},
    year={2019}
}

@article{qian2019learning,
    title={Learning modulated loss for rotated object detection},
    author={Qian, Wen and Yang, Xue and Peng, Silong and Guo, Yue and Yan, Junchi},
    journal={arXiv preprint arXiv:1911.08299},
    year={2019}
}

@article{yang2020scrdet++,
    title={SCRDet++: Detecting Small, Cluttered and Rotated Objects via Instance-Level Feature Denoising and Rotation Loss Smoothing},
    author={Yang, Xue and Yan, Junchi and Yang, Xiaokang and Tang, Jin and Liao, Wenglong and He, Tao},
    journal={arXiv preprint arXiv:2004.13316},
    year={2020}
}

@inproceedings{yang2019scrdet,
    title={SCRDet: Towards more robust detection for small, cluttered and rotated objects},
    author={Yang, Xue and Yang, Jirui and Yan, Junchi and Zhang, Yue and Zhang, Tengfei and Guo, Zhi and Sun, Xian and Fu, Kun},
    booktitle={Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
    pages={8232--8241},
    year={2019}
}

## Reference

1. https://github.com/endernewton/tf-faster-rcnn
2. https://github.com/zengarden/light_head_rcnn
3. https://github.com/tensorflow/models/tree/master/research/object_detection
4. https://github.com/fizyr/keras-retinanet