INF573 Course Project: Unified Multi-class Object Detection, Lane Line Regression and Drivable Area Detection
- Team Member
- Demo Video
- Getting Started
- Training and Evaluation
- 3D map generation Demo
- Autoware Integration (New !!)
UNIFIED OBJECT DETECTION, LANE REGRESSION AND DRIVABLE AREA SEGMENTATION
INF573-15s.mp4
Our work is developed from HybridNets; this project's code is hosted at https://github.com/NicolasHHH/Unified_Drivable_Segmentation.git
The project was developed with Python >= 3.7 and PyTorch >= 1.10.
git clone https://github.com/NicolasHHH/Unified_Drivable_Segmentation.git
cd Unified_Drivable_Segmentation
pip install -r requirements.txt
# Download weights (cars only)
curl --create-dirs -L -o weights/hybridnets.pth https://github.com/datvuthanh/HybridNets/releases/download/v1.0/hybridnets.pth
# Image inference
python hybridnets_test.py -w weights/hybridnets.pth --source demo/image --output demo_result
# Video inference
python hybridnets_test_videos.py -w weights/hybridnets.pth --source demo/video --output demo_result
# Result is saved in a new folder called demo_result
Download weights from Google Drive: https://drive.google.com/drive/folders/1kA16TJUVpswy6cb7EUVqN58J8ubLcytv?usp=sharing
Put them under ./weights/
# Image inference
python hybridnets_test.py -w weights/xxx.pth --project bdd100k_person_car --source demo/image --output demo_result
# images of size 1280x720 are recommended
# Video inference
python hybridnets_test_videos.py -w weights/xxx.pth --project bdd100k_person_car --source demo/video --output demo_result
Update your dataset paths in projects/your_project_name.yml
For BDD100K, the dataset root should contain the folders imgs, det_annot, da_seg_annot and ll_seg_annot (a quick layout check is sketched below).
For KITTI Odometry, a tiny portion of data (10 frames) is provided in the ./sample folder.
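As a quick sanity check on the layout above, here is a minimal Python sketch (not part of the repository; the dataset root path is a hypothetical example and should match the paths in your project .yml):

```python
from pathlib import Path

# Hypothetical dataset root; point this at the same location as your project .yml
DATASET_ROOT = Path("datasets/bdd100k")

# Folders expected for BDD100K, as listed above
EXPECTED = ["imgs", "det_annot", "da_seg_annot", "ll_seg_annot"]

missing = [name for name in EXPECTED if not (DATASET_ROOT / name).is_dir()]
if missing:
    print(f"Missing folders under {DATASET_ROOT}: {missing}")
else:
    print("BDD100K folder layout looks complete.")
```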
1) Edit or create a new project configuration, using bdd100k.yml as a template; augmentation parameters are defined there as well.
python train.py -p bdd100k # config filename
-c 3 # compound coefficient of the EfficientNet backbone
-n 4 # num_workers
-b 6 # batch size (should fit in < 12 GB of GPU memory)
-w path/to/weight # use 'last' to resume training from previous session
--freeze_det # freeze detection head, others: --freeze_backbone, --freeze_seg
--lr 1e-5 # learning rate
--num_epochs 200
Please check `python train.py --help` for the full list of arguments.
python val.py -w checkpoints/weight.pth
**Troubleshooting: validation process got killed!**
- Train on a high-RAM instance (RAM as in main memory, not GPU VRAM). For reference, we can only validate the combined `car` class with 64 GB of RAM.
- Train with `python train.py --cal_map False` to skip metric calculation during validation.
- Convert the point clouds from `.bin` to `.pcd` using `kitti_bin_pcd.ipynb` (a minimal sketch of the conversion is given after this list).
- Colorize the point clouds using `pcd_rgb.ipynb`.
- Visualize the result using the last block of `pcd_rgb.ipynb`.
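The conversion itself lives in the notebook; as a rough, stand-alone sketch of the same idea (assuming the standard KITTI Velodyne binary format of float32 x, y, z, intensity and that `open3d` is installed; the file name is just an example from the ./sample folder):

```python
import numpy as np
import open3d as o3d

# Read one KITTI Velodyne scan: N x 4 float32 values (x, y, z, intensity)
points = np.fromfile("sample/000000.bin", dtype=np.float32).reshape(-1, 4)

# Keep only the xyz coordinates for the .pcd output
cloud = o3d.geometry.PointCloud()
cloud.points = o3d.utility.Vector3dVector(points[:, :3])

# Write the converted cloud next to the input file
o3d.io.write_point_cloud("sample/000000.pcd", cloud)
```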
ROS Melodic + Ubuntu 18.04 + CUDA 11.3
This is the currently known way to make both `import rospy` and `import torch` work in the same environment.
First, download the installer from the official Anaconda website and install it from the command line:
sh Anaconda3-2022.10-Linux-x86_64.sh
# follow the prompts to complete the installation
In `~/.bashrc`, comment out the block that activates the Anaconda environment by default, and add ROS's default Python 2 path to the file:
# add the following line
export PYTHONPATH=$PYTHONPATH:/opt/ros/melodic/lib/python2.7/dist-packages
# comment out this entire block
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
#__conda_setup="$('/home/hty/anaconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
#if [ $? -eq 0 ]; then
# eval "$__conda_setup"
#else
# if [ -f "/home/hty/anaconda3/etc/profile.d/conda.sh" ]; then
# . "/home/hty/anaconda3/etc/profile.d/conda.sh"
# else
# export PATH="/home/hty/anaconda3/bin:$PATH"
# fi
#fi
#unset __conda_setup
# <<< conda initialize <<<
This ensures that the default Python path is ROS's Python.
Enter the Anaconda prompt from a terminal; `~/anaconda3/bin/activate` is the default path set during the installation in the first step.
xxx $: source ~/anaconda3/bin/activate
# on success the prompt looks like this
(base) xxx $:
# exit
conda deactivate
Create a dedicated virtual environment:
(base) xxx $: conda create -n rostorch python=3.9
Install PyTorch; find the matching version in the official release archive:
conda install pytorch torchvision cudatoolkit=11.3 -c pytorch
Install the ROS dependencies and other model-related libraries via pip:
pip install netifaces rospkg
Test the setup:
conda activate rostorch # enter the virtual environment
python # start Python; the version should be 3.9.x
>>> import torch
>>> import rospy
>>> torch.cuda.is_available()
True
The environment setup is now complete.
source ~/anaconda3/bin/activate
conda activate rostorch
# roscore must already be running, and hybridnets.pth downloaded (see "1. Default: car only ..." above)
python hybridnets_ros.py
Download weights from Google Drive: https://drive.google.com/drive/folders/1kA16TJUVpswy6cb7EUVqN58J8ubLcytv?usp=sharing
Put them under ./weights/
# roscore must already be running
python hybridnets_ros.py -w weights/xxx.pth --project bdd100k_person_car
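For reference, the sketch below shows the bare rospy + torch glue that a node like `hybridnets_ros.py` relies on. It is an illustration only, not the actual script: the topic name, image encoding, and checkpoint handling are assumptions, and the inference/publishing step is left out.

```python
import numpy as np
import rospy
import torch
from sensor_msgs.msg import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
# Assumed checkpoint path; the real node builds the HybridNets model and loads these weights into it
checkpoint = torch.load("weights/hybridnets.pth", map_location=device)

def on_image(msg):
    # Interpret the raw sensor_msgs/Image buffer as HxWx3 uint8 (assumes rgb8/bgr8 encoding)
    frame = np.frombuffer(msg.data, dtype=np.uint8).reshape(msg.height, msg.width, -1)
    tensor = torch.from_numpy(frame.copy()).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    rospy.loginfo("received %dx%d frame, tensor shape %s", msg.width, msg.height, tuple(tensor.shape))
    # ... run the network on `tensor` and publish detections / segmentation here ...

rospy.init_node("hybridnets_demo")
rospy.Subscriber("/camera/image_raw", Image, on_image, queue_size=1)
rospy.spin()
```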