RGB-D Scene Reconstruction & Segmentation with Neural Network trained on synthetic database


PandaPush_Depth_Reconstruction 💻 🤖 📹 🦾 📦

Project on the reconstruction of a scene observed by a RealSense depth camera into a PyBullet simulation through supervised segmentation. The segmentation is based on a Mask R-CNN neural network (machine learning) trained on a synthetic database generated for this project. The project was realised for the purpose of trajectory planning of a Panda robot performing non-prehensile manipulation (push tasks) on the reconstructed 3D objects.

The project was applied to the detection of unicolored cuboids and cylinders, but was programmed so that it can easily be adapted to other types of objects. To detect new objects, launch the synthetic database generator with the new information (geometrical properties or simplified CAD models), then launch the training program. The main program itself does not need to be modified to handle other object types, though it can be extended with new functionalities. Another camera can also be used by adapting the camera class.

Deployment

🎬 To see a video presenting how to deploy the different programs: CLICK HERE 🎬

  1. Download the neural network weights, stored on Google Drive because their size (250 MB) exceeds GitHub's 100 MB file limit:
cd PandaPush_Depth_Reconstruction/model_free_detection
wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1HYM2qZfUeh4nNsfIYNdz92wCEfzZJ6Xp' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1HYM2qZfUeh4nNsfIYNdz92wCEfzZJ6Xp" -O mask_rcnn_cubecyl2.h5 && rm -rf /tmp/cookies.txt
  2. Either record a livestream:
cd PandaPush_Depth_Reconstruction/model_free_detection
python3 main.py

Or run the program on an existing recording: place the file "filename.bag" in the folder "model_free_detection/data/recordings" and run:

cd PandaPush_Depth_Reconstruction/model_free_detection
python3 main.py filename.bag

Methodology

Run-time steps

These steps correspond to the main program, which is the only file that has to be launched to run the program as-is (i.e. to detect cuboids and cylinders). For ease of use, most of the code is stored in the main.py file:

📹 Data Acquisition - acquire and save RGB-D data (.ply and .png) from the camera

🧮 Treatment - isolate objects from the background, segment them, acquire their pose and convert them from point clouds (.ply) to meshes (.stl)

🖥️ Simulation - load the Panda robot and the objects into a PyBullet simulation

🦾 Robot Control - control the mechanical arm to interact with the objects
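
The four run-time steps above can be sketched as a single orchestration loop. This is an illustrative, pure-Python sketch: the function names (acquire, treat, simulate, control) and their stubbed return values are made up for the example and are not the actual API of main.py.

```python
def acquire(camera_id: int) -> dict:
    """Data Acquisition: grab one RGB-D frame from the camera (stubbed here)."""
    return {"rgb": f"frame_{camera_id}.png", "depth": f"frame_{camera_id}.ply"}

def treat(frame: dict) -> list:
    """Treatment: segment objects and convert point clouds to meshes (stubbed)."""
    return [{"mesh": frame["depth"].replace(".ply", ".stl"), "pose": (0.0, 0.0, 0.0)}]

def simulate(objects: list) -> str:
    """Simulation: load the Panda robot and the meshes into PyBullet (stubbed)."""
    return f"simulation with {len(objects)} object(s)"

def control(simulation: str) -> str:
    """Robot Control: plan and execute push trajectories (stubbed)."""
    return f"pushing in {simulation}"

def run_pipeline() -> str:
    # The real main.py chains the same four stages in this order.
    frame = acquire(camera_id=0)
    objects = treat(frame)
    sim = simulate(objects)
    return control(sim)
```

Each stage consumes the previous stage's output, which is why a single launch of main.py suffices to go from camera frames to robot pushes.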

Pre-design steps

To adapt the project to detect other objects or to improve performance, the segmentation model can be modified by changing the synthetic database and retraining. If the project is used as-is, these files are not needed, as the model has already been trained and integrated into the main program (run-time steps).

🗄️ Synthetic Database Generation - generation of random cuboids/cylinders and generation of random scenes in PyBullet

🌱 Neural Network Training - training of the neural network with the synthetic database generated
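
The database generation step boils down to sampling random object parameters and recording a class label per object. A minimal sketch of such a sampler, assuming illustrative parameter ranges and label ids (the actual generator's values may differ):

```python
import random

SHAPES = {"cuboid": 0, "cylinder": 1}  # label ids, assumed for the example

def sample_object(rng: random.Random) -> dict:
    """Sample one random unicolored cuboid or cylinder."""
    shape = rng.choice(sorted(SHAPES))
    if shape == "cuboid":
        dims = {"half_extents": [rng.uniform(0.02, 0.06) for _ in range(3)]}
    else:
        dims = {"radius": rng.uniform(0.02, 0.05), "height": rng.uniform(0.04, 0.12)}
    return {
        "shape": shape,
        "label": SHAPES[shape],
        "color": [rng.random() for _ in range(3)],   # unicolored objects
        "yaw": rng.uniform(0.0, 3.141592653589793),  # random top-view orientation
        **dims,
    }

def sample_scene(n_objects: int, seed: int = 0) -> list:
    """Sample a reproducible random scene; objects would then be spawned in PyBullet."""
    rng = random.Random(seed)
    return [sample_object(rng) for _ in range(n_objects)]
```

Seeding the generator makes scenes reproducible, which is convenient when regenerating the database after changing noise or positioning parameters.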

Functionalities

  • Synthetic database generation
  • Mask R-CNN training
  • Data acquisition (recording, colored ply, RGB-D png)
  • Segmentation (top-view)
  • Segmentation (random-view)
  • Meshing (3D models)
  • PyBullet simulation initialization
  • CAD Probabilistic Update (Gaussian Processes)
  • Scene update
  • Robot control in the simulation
  • Trajectory planning
  • Physical robot control
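
As an illustration of the geometry behind the acquisition and segmentation functionalities, depth pixels are back-projected to 3D points with the pinhole camera model. The intrinsics below are made up for the example; in practice they would come from the RealSense stream profile:

```python
# Assumed pinhole intrinsics for a 640x480 depth image (illustrative values).
FX, FY = 600.0, 600.0  # focal lengths in pixels
CX, CY = 320.0, 240.0  # principal point

def deproject(u: int, v: int, depth_m: float) -> tuple:
    """Map pixel (u, v) with a depth in metres to a 3D point in the camera frame."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)
```

Applying this to every valid depth pixel yields the point cloud (.ply) that the treatment step then segments and meshes.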

Results

🏆 To see a video presenting the results: CLICK HERE 🏆

Pre-design (Synthetic Database and Neural Network Training)

Experimental Data

Updates

2022/11/20 - meshconverter.py : mesh converter from 3D model (.stl) to point cloud (.ply) and vice versa
2022/11/20 - PybulletSimulation.py : implementation of the 3D objects in a PyBullet simulation

Topic update - use of a RealSense camera instead of the ZED2

2022/11/26 - Code : record and playback of RealSense camera videos (.bag)
2022/11/26 - no_tensorflow tensorflow : example code to use the RealSense camera with and without TensorFlow
2022/11/27 - Code : saving as .ply and .png (color and depth)
2022/11/29 - Code : saving of colored point clouds (.ply) from recordings (.bag)

2022/12/03 - treatment.py : object separation (pattern-based, no machine learning) from a top view, based on the colored point cloud
2022/12/05 - edgedetection.py : edge detection and shape reconstruction by extrusion

2022/12/10 - framechange.py edgedetection.py : improvement of the edge detection by a change of frame of the point cloud (centering and aligning the face's normal to the z axis)
2022/12/11 - plytomesh.py : automatic conversion from point cloud to mesh with Open3D
2022/12/11 - PybulletSimulation.py : implementation of the mesh in a PyBullet simulation

2023/02/11 - Synthetic data : point-cloud generation of random cubes (with color and depth noise) + stable positioning + selection of the points viewed by the camera + labeling
2023/02/16 - main.py : code cleaning (assembly of the code into functions so that everything runs with a single launch)
2023/02/18 - main.py : resolution of back-face and colorization issues in the PyBullet simulation

2023/02/19 - main.py : camera positioning and saving of RGB, depth and segmentation images (.png) from PyBullet

2023/02/23 - ML_training_PPDR.ipynb : test on Colaboratory of sample codes for an existing Mask R-CNN TF2 implementation
2023/02/23 - Synthetic data : extraction of label and depth+RGB information (.png) from the PyBullet simulation during the generation of random scenes with random cubes

2023/03/03 - ML_training_PPDR.ipynb : adaptation of the Mask R-CNN sample code to fit the generated database : features in .png format and labels in .txt format
2023/03/21 - Training in Colaboratory for the detection of cube and cylinder shapes based on depth or RGB images
2023/03/25 - Installation of TensorFlow 2.5 on the Jetson kit and launch of the training-data-generation code (alternative to Colaboratory)
2023/04/07 - Synthetic data : adaptation of the synthetic database and scene generation to be more similar to experimental data (aligned cubes/cylinders in top view with random orientation)

2023/04/09 - main.py : generation of images (depth or RGB) from point clouds (.ply to .png) for application of the Mask R-CNN in the main code

2023/04/10 - main.py : Addition of the controlled Panda robot in the PyBullet simulation


Packages

General

  • Python 3.8.10
  • ROS Noetic
  • matplotlib

Pointcloud

  • Open3D 0.16.0
  • pymeshlab

Computer Vision (CV)

  • opencv (cv2) 4.4.0
  • pyrealsense2 2.53.1
  • Pillow (PIL) 22.3.1

Machine Learning

  • scikit-learn 1.2.1
  • tensorflow 2.5.0
  • keras 2.5.0

Simulation

  • pybullet 3.2.5
  • (ffmpeg : for recording)




External Resources