CenterNet3D-PyTorch


An unofficial PyTorch implementation of the paper: CenterNet3D: An Anchor free Object Detector for Autonomous Driving


1. Features

2. Getting Started

2.1. Requirements

pip install -U -r requirements.txt
  • For the mayavi library, please refer to the installation instructions on its official website.

  • To build the CenterNet3D model, I have used the spconv library. Please follow the instructions from that repo to install it. I also wrote notes for the installation here

2.2. Data Preparation

Download the 3D KITTI detection dataset from here.

The downloaded data includes:

  • Velodyne point clouds (29 GB)
  • Training labels of object data set (5 MB)
  • Camera calibration matrices of object data set (16 MB)
  • Left color images of object data set (12 GB)

Please make sure that you organize the source code and dataset directories as shown in the Folder structure section below.
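Before training, it can save time to verify that the dataset was unpacked into the expected layout. The helper below is an illustrative sketch, not part of this repo; the folder names follow the Folder structure section of this README.

```python
from pathlib import Path

# Expected KITTI sub-directories, per the Folder structure section.
REQUIRED_SUBDIRS = {
    "training": ["image_2", "calib", "label_2", "velodyne"],
    "testing": ["image_2", "calib", "velodyne"],
}

def missing_kitti_dirs(dataset_root):
    """Return the expected sub-directories that are absent under dataset_root/kitti/."""
    root = Path(dataset_root) / "kitti"
    missing = []
    for split, subdirs in REQUIRED_SUBDIRS.items():
        for sub in subdirs:
            p = root / split / sub
            if not p.is_dir():
                missing.append(str(p))
    return missing
```

Running `missing_kitti_dirs("./dataset")` should return an empty list once all four training and three testing folders are in place.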

2.3. CenterNet3D architecture

architecture

2.4. How to run

2.4.1. Visualize the dataset

cd src/data_process
  • To visualize 3D point clouds with 3D boxes, let's execute:
python kitti_dataset.py

An example of the KITTI dataset:

example

2.4.2. Inference

python test.py --gpu_idx 0 --peak_thresh 0.2
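The --peak_thresh flag sets the minimum center-heatmap score for a detection to be kept. CenterNet-style detectors extract detections by keeping only cells that are local maxima within a 3x3 neighborhood and above this threshold. The numpy sketch below illustrates that idea; it is not the repo's exact decoding code (test.py implements the same idea with a max-pooling layer on the GPU):

```python
import numpy as np

def extract_peaks(heatmap, peak_thresh=0.2):
    """Keep cells that are local maxima (3x3 neighborhood) above peak_thresh."""
    h, w = heatmap.shape
    # Pad with -inf so border cells compare only against real neighbors.
    padded = np.pad(heatmap, 1, mode="constant", constant_values=-np.inf)
    peaks = []
    for y in range(h):
        for x in range(w):
            v = heatmap[y, x]
            if v < peak_thresh:
                continue
            window = padded[y:y + 3, x:x + 3]  # 3x3 neighborhood around (y, x)
            if v >= window.max():  # (y, x) is the local maximum
                peaks.append((y, x, float(v)))
    return peaks
```

Raising --peak_thresh trades recall for precision: fewer low-confidence centers survive the cut.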

2.4.4. Training

2.4.4.1. Single machine, single gpu
python train.py --gpu_idx 0 --batch_size <N> --num_workers <N>...
2.4.4.2. Multi-processing Distributed Data Parallel Training

Use the nccl backend for multi-processing distributed training, since it currently provides the best distributed training performance.

  • Single machine (node), multiple GPUs
python train.py --dist-url 'tcp://127.0.0.1:29500' --dist-backend 'nccl' --multiprocessing-distributed --world-size 1 --rank 0
  • Two machines (two nodes), multiple GPUs

First machine

python train.py --dist-url 'tcp://IP_OF_NODE1:FREEPORT' --dist-backend 'nccl' --multiprocessing-distributed --world-size 2 --rank 0

Second machine

python train.py --dist-url 'tcp://IP_OF_NODE2:FREEPORT' --dist-backend 'nccl' --multiprocessing-distributed --world-size 2 --rank 1
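For reference, in PyTorch's standard multiprocessing-distributed convention each GPU runs one process, and the launcher derives the global rank and world size from the per-node values. The sketch below follows PyTorch's official distributed examples; it is assumed, not verified, that train.py here uses the same arithmetic:

```python
def global_rank(node_rank, gpu_idx, ngpus_per_node):
    """Global process rank when each GPU gets one process
    (PyTorch multiprocessing-distributed convention)."""
    return node_rank * ngpus_per_node + gpu_idx

def global_world_size(num_nodes, ngpus_per_node):
    """Total number of processes across all nodes."""
    return num_nodes * ngpus_per_node
```

So with two nodes of 4 GPUs each, the run has 8 processes total, and GPU 3 on the second machine (--rank 1) becomes global rank 7.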

To reproduce the results, you can run the bash shell script

./train.sh

Tensorboard

  • To track the training progress, launch TensorBoard from the run's log folder:
cd logs/<saved_fn>/tensorboard/
tensorboard --logdir=./

Contact

If you think this work is useful, please give me a star!
If you find any errors or have any suggestions, please contact me (Email: nguyenmaudung93.kstn@gmail.com).
Thank you!

Citation

@article{CenterNet3D,
  author = {Guojun Wang and Bin Tian and Yunfeng Ai and Tong Xu and Long Chen and Dongpu Cao},
  title = {CenterNet3D: An Anchor free Object Detector for Autonomous Driving},
  year = {2020},
  journal = {arXiv},
}
@misc{CenterNet3D-PyTorch,
  author =       {Nguyen Mau Dung},
  title =        {{CenterNet3D-PyTorch: PyTorch Implementation of the CenterNet3D paper}},
  howpublished = {\url{https://github.com/maudzung/CenterNet3D-PyTorch}},
  year =         {2020}
}

References

[1] CenterNet: Objects as Points paper, PyTorch Implementation
[2] VoxelNet: PyTorch Implementation

Folder structure

${ROOT}
├── checkpoints/
│   └── centernet3d.pth
├── dataset/
│   └── kitti/
│       ├── ImageSets/
│       │   ├── test.txt
│       │   ├── train.txt
│       │   └── val.txt
│       ├── training/
│       │   ├── image_2/ (left color camera)
│       │   ├── calib/
│       │   ├── label_2/
│       │   └── velodyne/
│       ├── testing/
│       │   ├── image_2/ (left color camera)
│       │   ├── calib/
│       │   └── velodyne/
│       └── classes_names.txt
├── src/
│   ├── config/
│   │   ├── train_config.py
│   │   └── kitti_config.py
│   ├── data_process/
│   │   ├── kitti_dataloader.py
│   │   ├── kitti_dataset.py
│   │   └── kitti_data_utils.py
│   ├── models/
│   │   ├── centernet3d.py
│   │   ├── deform_conv_v2.py
│   │   └── model_utils.py
│   ├── utils/
│   │   ├── evaluation_utils.py
│   │   ├── logger.py
│   │   ├── misc.py
│   │   ├── torch_utils.py
│   │   └── train_utils.py
│   ├── evaluate.py
│   ├── test.py
│   ├── train.py
│   └── train.sh
├── README.md
└── requirements.txt