PyTorch training code and model reimplementation for object detection as described in Liu et al. (2015), SSD: Single Shot MultiBox Detector. This is currently a work in progress; suggestions and contributions are very welcome.
- Support exporting to `TorchScript` model. Jul. 22, 2020.
- Support exporting to `onnx`, and doing inference using `onnxruntime`. Jul. 25, 2020.
- Support doing inference using the `libtorch` C++ interface. Sep. 18, 2020.
- Add more features ...
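As a rough illustration of the export workflow, the sketch below scripts a torchvision-style SSDlite detector to TorchScript, exports it to ONNX, and opens the ONNX file with onnxruntime. The torchvision `ssdlite320_mobilenet_v3_large` builder (available in torchvision >= 0.10) is used purely as a stand-in; demonet's own model entry points and export scripts may differ.

```python
# Hedged sketch: TorchScript / ONNX export of a torchvision-style detector.
# The torchvision builder below is a stand-in for demonet's own models.
import torch
import onnxruntime as ort
from torchvision.models.detection import ssdlite320_mobilenet_v3_large

model = ssdlite320_mobilenet_v3_large(pretrained=True)
model.eval()

# TorchScript: torchvision detection models are scriptable.
scripted = torch.jit.script(model)
scripted.save("ssdlite320.torchscript.pt")

# ONNX: detection models take a list of 3xHxW tensors; opset >= 11 is needed.
dummy = [torch.rand(3, 320, 320)]
torch.onnx.export(model, dummy, "ssdlite320.onnx", opset_version=11)

# Sanity check with onnxruntime: open the exported graph and inspect its inputs.
sess = ort.InferenceSession("ssdlite320.onnx")
print([i.name for i in sess.get_inputs()])
```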
There are no extra compiled components in DEMONET and the package dependencies are minimal, so the code is very simple to use. We provide instructions for installing the dependencies via conda. First, clone the repository locally:
git clone https://github.com/zhiqwang/demonet.git
Then, install PyTorch 1.6+ and torchvision 0.7+:
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
Install pycocotools (for evaluation on COCO) and scipy (for training):
conda install cython scipy
pip install -U 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
That's it; you should now be able to train and evaluate detection models.
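A quick way to double-check the environment is a small sanity script (just an illustration, not part of the repo) that imports the core dependencies and prints their versions:

```python
# Quick environment check: verifies that the core dependencies import
# and reports the installed versions.
import scipy
import torch
import torchvision
from pycocotools.coco import COCO  # noqa: F401  (needed for COCO evaluation)

print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("scipy:", scipy.__version__)
print("CUDA available:", torch.cuda.is_available())
```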
Training with COCO and PASCAL VOC dataset formats is supported (chosen with the parameter `--dataset-file [coco/voc]`). With the COCO format, we expect the directory structure to be the following:
.
└── path/to/data-path/
├── annotations # annotation json files
└── images # root path of images
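For illustration, a COCO-format layout like the one above can be read with torchvision's `CocoDetection`; this is only a sketch of the expected format, not demonet's actual dataset class, and the json / image sub-folder names below are placeholders:

```python
# Hedged sketch: reading a COCO-format layout with torchvision's CocoDetection.
# The split names (train2017 / instances_train2017.json) are placeholders.
from torchvision.datasets import CocoDetection
import torchvision.transforms as T

root = "path/to/data-path/images/train2017"
ann_file = "path/to/data-path/annotations/instances_train2017.json"

dataset = CocoDetection(root, ann_file, transform=T.ToTensor())
image, targets = dataset[0]   # targets is a list of COCO annotation dicts
print(image.shape, len(targets))
```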
When you are using the PASCAL VOC format, we expect the directory structure to be the following:
.
└── path/to/data-path/
└── VOCdevkit
├── VOC2007
└── VOC2012
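Likewise, the VOC layout above matches what torchvision's `VOCDetection` expects, where `root` is the directory containing `VOCdevkit`; again, this is an illustrative sketch rather than demonet's own dataset wrapper:

```python
# Hedged sketch: reading the VOC layout with torchvision's VOCDetection.
# root is the directory that contains VOCdevkit/.
from torchvision.datasets import VOCDetection
import torchvision.transforms as T

dataset = VOCDetection("path/to/data-path", year="2007",
                       image_set="train", transform=T.ToTensor())
image, target = dataset[0]   # target is the parsed VOC XML annotation (a dict)
print(image.shape, target["annotation"]["filename"])
```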
For example, to evaluate ssdlite320_mobilenet_v3_large with pretrained weights on COCO using two GPUs, run:
CUDA_VISIBLE_DEVICES=5,6 python -m torch.distributed.launch --nproc_per_node=2 --use_env train.py --data-path 'data-bin/mscoco/coco2017/' --dataset coco --model ssdlite320_mobilenet_v3_large --pretrained --test-only
- This repo borrows the architecture design and part of the code from DETR and torchvision.
- The implementation of `ssd_lite_mobilenet_v2` borrows code from qfgaohao's pytorch-ssd and lufficc's SSD.