📍PIN-SLAM: LiDAR SLAM Using a Point-Based Implicit Neural Representation for Achieving Global Map Consistency

Yue Pan · Xingguang Zhong · Louis Wiesmann · Thorbjörn Posewsky · Jens Behley · Cyrill Stachniss

University of Bonn

Preprint | Video

TL;DR: PIN-SLAM is a full-fledged implicit neural LiDAR SLAM system including odometry, loop closure detection, and globally consistent mapping.

Figure: Globally consistent point-based implicit neural (PIN) map built with PIN-SLAM in Bonn. The high-fidelity mesh can be reconstructed from the neural point map.


Figure: Comparison of (a) the inconsistent mesh with duplicated structures reconstructed by PIN LiDAR odometry, and (b) the globally consistent mesh reconstructed by PIN-SLAM.


Demo videos:
  • Globally Consistent Mapping: demo_kitti00.mp4
  • Various Scenarios: demo_lidar_9scenes.mp4
  • RGB-D SLAM Extension: demo_replica_rgbd.mp4
Table of Contents
  1. Abstract
  2. Installation
  3. How to run PIN-SLAM
  4. Visualizer instructions
  5. Contact
  6. Related projects

Abstract

Accurate and robust localization and mapping are essential components for most autonomous robots. In this paper, we propose a SLAM system for building globally consistent maps, called PIN-SLAM, that is based on an elastic and compact point-based implicit neural map representation. Taking range measurements as input, our approach alternates between incremental learning of the local implicit signed distance field and pose estimation given the current local map, using a correspondence-free, point-to-implicit model registration. Our implicit map is based on sparse optimizable neural points, which are inherently elastic and deformable with the global pose adjustment when closing a loop. Loops are also detected using the neural point features. Extensive experiments validate that PIN-SLAM is robust to various environments and versatile to different range sensors such as LiDAR and RGB-D cameras. PIN-SLAM achieves pose estimation accuracy better than or on par with state-of-the-art LiDAR odometry or SLAM systems and outperforms recent neural implicit SLAM approaches, while maintaining a more consistent and highly compact implicit map that can be reconstructed into accurate and complete meshes. Finally, thanks to voxel hashing for efficient neural point indexing and the fast implicit map-based registration without closest point association, PIN-SLAM can run at the sensor frame rate on a moderate GPU.

Installation

Platform requirements

  • Ubuntu OS (tested on 20.04)

  • GPU (recommended) or CPU only (runs much slower)

  • GPU memory: more than 6 GB recommended

1. Set up conda environment

conda create --name pin python=3.8
conda activate pin

2. Install the key requirement, PyTorch

conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.7 -c pytorch -c nvidia 

The exact command depends on your CUDA version; check the official PyTorch installation instructions for the command matching your setup.
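As a quick sanity check that PyTorch was installed with working CUDA support (a generic PyTorch check, not specific to PIN-SLAM), you can run:

python -c "import torch; print(torch.cuda.is_available(), torch.version.cuda)"

It should print True followed by the CUDA version if your GPU is usable.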

3. Install other dependencies

pip install open3d==0.17 scikit-image gtsam wandb tqdm rich roma natsort pyquaternion pypose evo laspy rospkg 

Note that rospkg is optional. You can install it if you would like to use PIN-SLAM with ROS.

Run PIN-SLAM

Clone the repository

git clone git@github.com:PRBonn/PIN_SLAM.git
cd PIN_SLAM

Sanity test

For a sanity test, do the following to download an example part (first 100 frames) of the KITTI dataset (seq 00):

sh ./scripts/download_kitti_example.sh

And then run:

python pin_slam.py ./config/lidar_slam/run_demo.yaml

Use run_demo_cpu.yaml if you don't have a GPU on your computer. Use run_demo_no_vis.yaml if you are running on a server without an X server.
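Assuming these config files live in the same folder as run_demo.yaml, the variants are launched in the same way, only with a different config file:

python pin_slam.py ./config/lidar_slam/run_demo_cpu.yaml      # CPU-only machine
python pin_slam.py ./config/lidar_slam/run_demo_no_vis.yaml   # headless server without an X server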

You can visualize the SLAM process in the PIN-SLAM visualizer and check the results in the ./experiments folder.

Run on your datasets

For an arbitrary data sequence, you can run:

python pin_slam.py path_to_your_config_file.yaml

Generally speaking, you only need to edit pc_path in the config file, which is the path to the folder containing the point cloud files (.bin, .ply, .pcd, or .las format) for each frame. For a ROS bag, you can use ./scripts/rosbag2ply.py to extract the point clouds to .ply format.

For pose estimation evaluation, you may also provide pose_path, the path to the reference pose file, and optionally calib_path, the path to the extrinsic calibration file. Note that the pose file and calibration file should be in the KITTI odometry data format.
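As a concrete illustration, a sequence could be organized as sketched below; the folder and file names are only hypothetical placeholders, what matters are the config entries they correspond to:

my_sequence/
  frames/      # pc_path: one point cloud file per frame, e.g. 000000.ply, 000001.ply, ...
  poses.txt    # pose_path (optional): reference poses in KITTI odometry format
  calib.txt    # calib_path (optional): extrinsic calibration in KITTI format

In the KITTI odometry format, each line of the pose file contains 12 space-separated values, i.e. the row-major upper 3x4 part [R | t] of the homogeneous pose matrix (an identity pose would be 1 0 0 0 0 1 0 0 0 0 1 0).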

The SLAM results and logs will be output in the output_root folder specified in the config file.

The training logs can be monitored online via Weights & Biases if you turn on the wandb_vis_on option in the config file. If it's your first time using Weights & Biases, you will be asked to register and log in to your wandb account.
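If you have not used wandb on the machine before, you can also authenticate once from the terminal with the standard wandb CLI (installed together with the wandb package), which will prompt for the API key from your account page:

wandb login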

ROS 1 Support

If you are not using PIN-SLAM as a part of a ROS package, you can avoid the catkin stuff and simply run:

python pin_slam_ros.py [path_to_your_config_file] [point_cloud_topic_name] [(optional)point_timestamp_field_name]

For example:

python pin_slam_ros.py ./config/lidar_slam/run_ros_general.yaml /os_cloud_node/points time

After playing the ROS bag or launching the sensor, you can visualize the results in RViz by running:

rviz -d ./config/pin_slam_ros.rviz 

You may use the ROS services save_results and save_mesh to save the results and the mesh in the output_root folder.
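With standard ROS 1 tooling you can call these services from another terminal; the exact service paths depend on the node's namespace, so check them first (the paths below are only an assumption):

rosservice list | grep save      # find the full service names
rosservice call /save_results    # adjust the path to what rosservice list reports
rosservice call /save_mesh       # adjust the path to what rosservice list reports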

The process will stop and the results and logs will be saved in the output_root folder if no new messages are received for more than 30 seconds.

If you are running without a powerful GPU, PIN-SLAM may not keep up with the sensor frame rate. In that case, play the rosbag at a lower rate so that PIN-SLAM can process all frames properly.
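For example, the standard rosbag tool can replay a bag at half speed (the bag file name is a placeholder):

rosbag play --rate 0.5 your_data.bag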

You can also put pin_slam_ros.py into a ROS package for rosrun or roslaunch.

Inspect the results after SLAM

After the SLAM process, you can reconstruct a mesh from the PIN map within an arbitrary bounding box and at an arbitrary resolution by running:

python vis_pin_map.py [path/to/your/result/folder] [marching_cubes_resolution_m] [(cropped)_map_file.ply] [output_mesh_file.ply] [mc_nn_threshold]

The bounding box of (cropped)_map_file.ply is used as the bounding box for mesh reconstruction. mc_nn_threshold controls the trade-off between completeness and accuracy: a smaller value (for example 8) leads to a more complete mesh with more hallucinated artifacts, while a larger value (for example 15) leads to a less complete but more accurate mesh.

For example, for the case of the sanity test, run:

python vis_pin_map.py ./experiments/sanity_test_* 0.2 neural_points.ply mesh_20cm.ply
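If the resulting mesh has too many holes or too many artifacts, you can rerun the same command with an explicit mc_nn_threshold, using the example values mentioned above (the output mesh file names here are arbitrary):

python vis_pin_map.py ./experiments/sanity_test_* 0.2 neural_points.ply mesh_20cm_complete.ply 8    # more complete, more artifacts
python vis_pin_map.py ./experiments/sanity_test_* 0.2 neural_points.ply mesh_20cm_accurate.ply 15   # more accurate, less complete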

Visualizer Instructions

We provide a PIN-SLAM visualizer based on lidar-visualizer to monitor the SLAM process.

The keyboard callbacks are listed below.

Button Function
Space pause/resume
ESC/Q exit
G switch between the global/local map visualization
E switch between the ego/map viewpoint
F toggle on/off the current point cloud visualization
M toggle on/off the mesh visualization
A toggle on/off the current frame axis & sensor model visualization
P toggle on/off the neural points map visualization
D toggle on/off the training data pool visualization
I toggle on/off the SDF horizontal slice visualization
T toggle on/off the PIN-SLAM trajectory visualization
Y toggle on/off the ground truth trajectory visualization
U toggle on/off the PIN odometry trajectory visualization
R re-center the view point
Z 3D screenshot, save the currently visualized entities in the log folder
B toggle on/off back face rendering
W toggle on/off mesh wireframe
Ctrl+9 set mesh color as normal direction
5 switch between point cloud for mapping and for registration (with point-wise weight)
7 switch between black and white background
/ switch among different neural point color modes, 0: geometric feature, 1: color feature, 2: timestamp, 3: stability, 4: random
< decrease mesh nearest neighbor threshold (more complete and more artifacts)
> increase mesh nearest neighbor threshold (less complete but more accurate)
[/] decrease/increase mesh marching cubes voxel size
↑/↓ move up/down the horizontal SDF slice
+/- increase/decrease point size

Contact

If you have any questions, please contact the authors.

Related Projects

SHINE-Mapping: Large-Scale 3D Mapping Using Sparse Hierarchical Implicit Neural Representations

LocNDF: Neural Distance Field Mapping for Robot Localization

KISS-ICP: A LiDAR odometry pipeline that just works
