
3D-Point-Clouds

SOTA methods, code, papers, and datasets for 3D point clouds (point cloud object detection & segmentation).

At the suggestion of fellow students, I created a Knowledge Planet community, 【自动驾驶感知(PCL/ROS+DL)】 (Autonomous Driving Perception: PCL/ROS + DL). It focuses on autonomous driving perception, covering both traditional methods (the PCL point cloud library, ROS) and deep learning methods (object detection + semantic segmentation). It also touches on Apollo, Autoware (ROS 2 based), BEV perception, 3D reconstruction, SLAM (visual + LiDAR), model compression (distillation, pruning, quantization, etc.), autonomous driving simulation, dataset annotation & data closed loop, and other full-stack autonomous driving topics. Scan the QR code below to join, and let's reach the summit of autonomous driving together!

[QR code]

Point cloud processing methods fall mainly into two categories: traditional methods and deep-learning-based methods.

@双愚. If you fork or star this repo, please credit the source.

TODO

Table of Contents

0 Object detection frameworks (pcdet + mmdetection3d + det3d + paddle3d)

[2022-09, done] Write-up on the object detection frameworks (pcdet + mmdetection3d + det3d + paddle3d)

Code annotation notes (a minimal inference sketch follows the list):

  1. pcdet:https://github.com/HuangCongQing/pcdet-note
  2. mmdetection3d:https://github.com/HuangCongQing/mmdetection3d-note
  3. det3d: TODO
  4. paddle3d: TODO
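
As a quick orientation, the sketch below shows one way to invoke one of these frameworks (here OpenPCDet, i.e. pcdet) for single-frame inference through its demo script. It assumes OpenPCDet is installed and the command is run from its tools/ directory; the config name, checkpoint file, and data path are placeholders to replace with your own.

```python
# Sketch: run OpenPCDet's demo script on a single point cloud via subprocess.
# Assumes OpenPCDet (pcdet) is installed and this runs from its tools/ directory;
# the config, checkpoint, and data paths are placeholders.
import subprocess

cmd = [
    "python", "demo.py",
    "--cfg_file", "cfgs/kitti_models/pointpillar.yaml",  # model config (placeholder)
    "--ckpt", "pointpillar.pth",                          # pretrained weights (placeholder)
    "--data_path", "000000.bin",                          # KITTI-format LiDAR sweep (placeholder)
]
subprocess.run(cmd, check=True)
```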

1 Papers (code)

2 Datasets

Survey of autonomous-driving-related datasets [with download links] (being updated)

Basic dataset processing: handling dataset annotation files (see the KITTI label-parsing sketch after this subsection)

Download scripts for some of the datasets: https://github.com/HuangCongQing/download_3D_dataset
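
As a concrete example of annotation-file handling, here is a minimal sketch that parses a KITTI object label file (15 whitespace-separated fields per object: class, truncation, occlusion, alpha, 2D box, 3D dimensions, 3D location, rotation_y). The file path in the example is a placeholder.

```python
# Minimal KITTI object label parser (sketch; the path below is a placeholder).
from dataclasses import dataclass
from typing import List

@dataclass
class KittiObject:
    cls: str            # object class, e.g. 'Car', 'Pedestrian', 'Cyclist'
    truncation: float   # 0 (not truncated) .. 1 (fully truncated)
    occlusion: int      # 0/1/2/3 occlusion state
    alpha: float        # observation angle in [-pi, pi]
    bbox: List[float]   # 2D image box: left, top, right, bottom (pixels)
    dims: List[float]   # 3D box height, width, length (meters)
    loc: List[float]    # 3D location x, y, z in camera coordinates (meters)
    ry: float           # rotation around the camera Y axis in [-pi, pi]

def load_kitti_labels(path: str) -> List[KittiObject]:
    objects = []
    with open(path) as f:
        for line in f:
            v = line.split()
            if not v or v[0] == 'DontCare':
                continue
            objects.append(KittiObject(
                cls=v[0],
                truncation=float(v[1]),
                occlusion=int(float(v[2])),
                alpha=float(v[3]),
                bbox=[float(x) for x in v[4:8]],
                dims=[float(x) for x in v[8:11]],
                loc=[float(x) for x in v[11:14]],
                ry=float(v[14]),
            ))
    return objects

if __name__ == '__main__':
    # Placeholder path to one KITTI training label file.
    for obj in load_kitti_labels('kitti/training/label_2/000000.txt'):
        print(obj.cls, obj.loc, obj.dims, obj.ry)
```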

3 Point cloud visualization

Point cloud visualization notes and code: https://github.com/HuangCongQing/Point-Clouds-Visualization

There are many libraries for 3D point cloud visualization; your options include (an Open3D sketch follows this list):

  • PCL point cloud visualization [C++]
  • ROS topic visualization [C++] [Python]
  • Open3D [Python]
  • Mayavi [Python]
  • matplotlib [Python]
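
For example, a minimal Open3D sketch (assuming the open3d package is installed; the commented file path is a placeholder):

```python
# Minimal Open3D visualization sketch.
import numpy as np
import open3d as o3d

# Either load an existing point cloud file (.pcd / .ply) ...
# pcd = o3d.io.read_point_cloud("your_cloud.pcd")  # placeholder path

# ... or build one from an Nx3 numpy array of XYZ points.
points = np.random.uniform(-1.0, 1.0, size=(2048, 3))
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)

# Open an interactive viewer window.
o3d.visualization.draw_geometries([pcd])
```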

4 Point cloud data annotation

Summary of data annotation tools: https://github.com/HuangCongQing/data-labeling-tools

Papers (code)

3D_Object_Detection

  • One-stage
  • Two-stage

One-stage

VoxelNet, SECOND, PointPillars, HVNet, DOPS, Point-GNN, SA-SSD, 3D-VID, 3DSSD

  • VoxelNet
  • SECOND
  • PointPillars
  • HVNet
  • DOPS
  • Point-GNN
  • SA-SSD
  • 3D-VID
  • 3DSSD

Two-stage

F-PointNet, F-ConvNet, Point-RCNN, Part-A^2, PV-RCNN, Fast Point RCNN, TANet

  • F-PointNet
  • F-ConvNet
  • Point-RCNN
  • Part-A^2
  • PV-RCNN
  • Fast Point RCNN
  • TANet

3D_Semantic_Segmentation

PointNet was proposed to learn per-point features with shared MLPs and a global feature with a symmetric pooling function. Building on PointNet, a series of point-based networks have been proposed.

Point-based methods: these can be roughly divided into pointwise MLP methods, point convolution methods, RNN-based methods, and graph-based methods (a minimal PointNet-style sketch follows).
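
To make the shared-MLP + symmetric-pooling idea concrete, below is a minimal PointNet-style segmentation encoder sketch in PyTorch. The layer widths and wiring are illustrative simplifications, not the exact configuration of the PointNet paper.

```python
# Minimal PointNet-style encoder: per-point shared MLP + symmetric max pooling.
# Layer widths are illustrative assumptions, not an official configuration.
import torch
import torch.nn as nn

class TinyPointNetSeg(nn.Module):
    def __init__(self, num_classes: int = 13):
        super().__init__()
        # 1x1 convolutions act as an MLP shared across all points.
        self.point_mlp = nn.Sequential(nn.Conv1d(3, 64, 1), nn.ReLU())
        self.global_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.ReLU(),
        )
        # Per-point head that consumes [global feature, per-point feature].
        self.head = nn.Sequential(
            nn.Conv1d(1024 + 64, 256, 1), nn.ReLU(),
            nn.Conv1d(256, num_classes, 1),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        x = xyz.transpose(1, 2)                 # (B, N, 3) -> (B, 3, N)
        point_feat = self.point_mlp(x)          # (B, 64, N): per-point features
        global_feat = self.global_mlp(x).max(dim=2, keepdim=True).values  # (B, 1024, 1)
        global_feat = global_feat.expand(-1, -1, x.shape[2])              # tile to every point
        return self.head(torch.cat([global_feat, point_feat], dim=1))     # (B, C, N) logits

if __name__ == "__main__":
    model = TinyPointNetSeg()
    print(model(torch.randn(2, 2048, 3)).shape)  # torch.Size([2, 13, 2048])
```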

1 pointwise MLP methods

PointNet++, PointSIFT, PointWeb, ShellNet, RandLA-Net

  • PointNet++
  • PointSIFT
  • PointWeb
  • ShellNet
  • RandLA-Net

2 point convolution methods

PointCNN, PCCN, A-CNN, ConvPoint, PointConv, KPConv, DPC, InterpCNN

  • PointCNN
  • PCCN
  • A-CNN
  • ConvPoint
  • PointConv
  • KPConv
  • DPC
  • InterpCNN

3 RNN-based methods

G+RCU, RSNet, 3P-RNN, DAR-Net

  • G+RCU
  • RSNet
  • 3P-RNN
  • DAR-Net

4 graph-based methods

DGCNN, SPG, SSP+SPG, PyramNet, GACNet, PAG, HDGCN, HPEIN, SPH3D-GCN, DPAM

  • DGCNN
  • SPG
  • SSP+SPG
  • PyramNet
  • GACNet
  • PAG
  • HDGCN
  • HPEIN
  • SPH3D-GCN
  • DPAM

3D_Instance_Segmentation

Datasets

Dataset downloads

Graviti hosts more than 400 high-quality computer vision datasets covering autonomous driving, smart retail, robotics, and other AI application areas. A couple of examples are given in this article: https://bbs.cvmart.net/topics/3346

Summary of datasets

https://github.com/Yochengliu/awesome-point-cloud-analysis#---datasets

  • [KITTI] The KITTI Vision Benchmark Suite. [det.] (commonly used)
  • [ModelNet] The Princeton ModelNet . [cls.]
  • [ShapeNet] A collaborative dataset between researchers at Princeton, Stanford and TTIC. [seg.]
  • [PartNet] The PartNet dataset provides fine grained part annotation of objects in ShapeNetCore. [seg.]
  • [PartNet] PartNet benchmark from Nanjing University and National University of Defense Technology. [seg.]
  • [S3DIS] The Stanford Large-Scale 3D Indoor Spaces Dataset. [seg.] (commonly used)
  • [ScanNet] Richly-annotated 3D Reconstructions of Indoor Scenes. [cls. seg.]
  • [Stanford 3D] The Stanford 3D Scanning Repository. [reg.]
  • [UWA Dataset] . [cls. seg. reg.]
  • [Princeton Shape Benchmark] The Princeton Shape Benchmark.
  • [SYDNEY URBAN OBJECTS DATASET] This dataset contains a variety of common urban road objects scanned with a Velodyne HDL-64E LIDAR, collected in the CBD of Sydney, Australia. There are 631 individual scans of objects across classes of vehicles, pedestrians, signs and trees. [cls. match.]
  • [ASL Datasets Repository(ETH)] This site is dedicated to providing datasets for the Robotics community with the aim to facilitate result evaluations and comparisons. [cls. match. reg. det.]
  • [Large-Scale Point Cloud Classification Benchmark(ETH)] This benchmark closes the gap and provides a large labelled 3D point cloud data set of natural scenes with over 4 billion points in total. [cls.]
  • [Robotic 3D Scan Repository] The Canadian Planetary Emulation Terrain 3D Mapping Dataset is a collection of three-dimensional laser scans gathered at two unique planetary analogue rover test facilities in Canada.
  • [Radish] The Robotics Data Set Repository (Radish for short) provides a collection of standard robotics data sets.
  • [IQmulus & TerraMobilita Contest] The database contains 3D MLS data from a dense urban environment in Paris (France), composed of 300 million points. The acquisition was made in January 2013. [cls. seg. det.]
  • [Oakland 3-D Point Cloud Dataset] This repository contains labeled 3-D point cloud laser data collected from a moving platform in an urban environment.
  • [Robotic 3D Scan Repository] This repository provides 3D point clouds from robotic experiments, log files of robot runs and standard 3D data sets for the robotics community.
  • [Ford Campus Vision and Lidar Data Set] The dataset is collected by an autonomous ground vehicle testbed, based upon a modified Ford F-250 pickup truck.
  • [The Stanford Track Collection] This dataset contains about 14,000 labeled tracks of objects as observed in natural street scenes by a Velodyne HDL-64E S2 LIDAR.
  • [PASCAL3D+] Beyond PASCAL: A Benchmark for 3D Object Detection in the Wild. [pos. det.]
  • [3D MNIST] The aim of this dataset is to provide a simple way to get started with 3D computer vision problems such as 3D shape recognition. [cls.]
  • [WAD] [ApolloScape] The datasets are provided by Baidu Inc. [tra. seg. det.]
  • [nuScenes] The nuScenes dataset is a large-scale autonomous driving dataset. (previously used)
  • [PreSIL] Depth information, semantic segmentation (images), point-wise segmentation (point clouds), ground point labels (point clouds), and detailed annotations for all vehicles and people. [paper] [det. aut.]
  • [3D Match] Keypoint Matching Benchmark, Geometric Registration Benchmark, RGB-D Reconstruction Datasets. [reg. rec. oth.]
  • [BLVD] (a) 3D detection, (b) 4D tracking, (c) 5D interactive event recognition and (d) 5D intention prediction. [ICRA 2019 paper] [det. tra. aut. oth.]
  • [PedX] 3D Pose Estimation of Pedestrians, more than 5,000 pairs of high-resolution (12MP) stereo images and LiDAR data along with providing 2D and 3D labels of pedestrians. [ICRA 2019 paper] [pos. aut.]
  • [H3D] Full-surround 3D multi-object detection and tracking dataset. [ICRA 2019 paper] [det. tra. aut.]
  • [Argoverse BY ARGO AI] Two public datasets (3D Tracking and Motion Forecasting) supported by highly detailed maps to test, experiment, and teach self-driving vehicles how to understand the world around them. [CVPR 2019 paper] [tra. aut.]
  • [Matterport3D] RGB-D: 10,800 panoramic views from 194,400 RGB-D images. Annotations: surface reconstructions, camera poses, and 2D and 3D semantic segmentations. Keypoint matching, view overlap prediction, normal prediction from color, semantic segmentation, and scene classification. [3DV 2017 paper] [code] [blog]
  • [SynthCity] SynthCity is a 367.9M point synthetic full colour Mobile Laser Scanning point cloud. Nine categories. [seg. aut.]
  • [Lyft Level 5] Include high quality, human-labelled 3D bounding boxes of traffic agents, an underlying HD spatial semantic map. [det. seg. aut.]
  • [SemanticKITTI] Sequential Semantic Segmentation, 28 classes, for autonomous driving. All sequences of KITTI odometry labeled. [ICCV 2019 paper] [seg. oth. aut.] (commonly used)
  • [NPM3D] The Paris-Lille-3D has been produced by a Mobile Laser System (MLS) in two different cities in France (Paris and Lille). [seg.]
  • [The Waymo Open Dataset] The Waymo Open Dataset is comprised of high resolution sensor data collected by Waymo self-driving cars in a wide variety of conditions. [det.]
  • [A*3D: An Autonomous Driving Dataset in Challenging Environments] A*3D: An Autonomous Driving Dataset in Challenging Environments. [det.]
  • [PointDA-10 Dataset] Domain Adaptation for point clouds.
  • [Oxford Robotcar] The dataset captures many different combinations of weather, traffic and pedestrians. [cls. det. rec.]

Commonly used segmentation datasets

  • [S3DIS] The Stanford Large-Scale 3D Indoor Spaces Dataset. [seg.] [commonly used]
  • [SemanticKITTI] Sequential Semantic Segmentation, 28 classes, for autonomous driving. All sequences of KITTI odometry labeled. [ICCV 2019 paper] [seg. oth. aut.] [commonly used]
  • Semantic3D

Commonly used classification datasets

TODO

Commonly used object detection datasets

  • [KITTI] The KITTI Vision Benchmark Suite. [det.] (commonly used; a minimal point cloud loading sketch follows this list)
  • [nuScenes] The nuScenes dataset is a large-scale autonomous driving dataset. (previously used)
  • [The Waymo Open Dataset] The Waymo Open Dataset is comprised of high resolution sensor data collected by Waymo self-driving cars in a wide variety of conditions. [det.]
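
For KITTI-style detection data, each LiDAR sweep is stored as a flat binary file of float32 values, four per point (x, y, z, reflectance). A minimal loading sketch, with a placeholder path:

```python
# Load a KITTI velodyne .bin sweep: flat float32 array, 4 values per point.
import numpy as np

def load_kitti_bin(path: str) -> np.ndarray:
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)  # x, y, z, reflectance

if __name__ == "__main__":
    # Placeholder path into the KITTI object detection training split.
    pts = load_kitti_bin("kitti/training/velodyne/000000.bin")
    print(pts.shape, pts[:, :3].min(axis=0), pts[:, :3].max(axis=0))
```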

References

License

Copyright (c) 双愚. All rights reserved.

Licensed under the MIT License.


WeChat official account: 【双愚】 (huang_chongqing), on research, technology, and reflections on life. Feel free to follow.

[QR code]

Recommended past posts:

  1. This article offers no career advice, yet it can help you for a lifetime
  2. Let's talk about job interviews for university students
  3. Liu Zhiyuan (Tsinghua University): where do good research methods come from

As mentioned above, I created the Knowledge Planet community 【自动驾驶感知(PCL/ROS+DL)】, which focuses on autonomous driving perception: traditional methods (the PCL point cloud library, ROS) and deep learning methods (object detection + semantic segmentation), as well as Apollo, Autoware (ROS 2 based), BEV perception, 3D reconstruction, SLAM (visual + LiDAR), model compression (distillation, pruning, quantization, etc.), autonomous driving simulation, dataset annotation & data closed loop, and other full-stack autonomous driving topics. Scan the QR code to join, and let's reach the summit of autonomous driving together! [QR code]