This repository contains a Robot Operating System (ROS) perception pipeline for identifying and classifying objects in a noisy tabletop environment, using point-cloud data from an RGB-D sensor. The identified objects are then picked, sorted, and relocated into bins by a PR2 robot. The project uses ROS with Python and was built for Project 3 of the Udacity Robotics NanoDegree Program.
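At a high level, the pipeline filters the noisy RGB-D cloud, removes the table plane with RANSAC, clusters the remaining points into individual objects, and classifies each cluster with a trained SVM. The snippet below is a minimal sketch of the filtering and segmentation stages, assuming python-pcl; the parameter values and the input file name tabletop.pcd are illustrative assumptions, not the exact settings used in perception.py.

```python
# Minimal sketch of the filtering / segmentation stages, assuming python-pcl.
# Parameter values and 'tabletop.pcd' are illustrative, not the project's exact settings.
import pcl

cloud = pcl.load_XYZRGB('tabletop.pcd')            # noisy RGB-D point cloud

# 1. Statistical outlier removal to suppress sensor noise
outlier = cloud.make_statistical_outlier_filter()
outlier.set_mean_k(50)
outlier.set_std_dev_mul_thresh(1.0)
cloud = outlier.filter()

# 2. Voxel-grid downsampling to reduce the point count
vox = cloud.make_voxel_grid_filter()
vox.set_leaf_size(0.01, 0.01, 0.01)
cloud = vox.filter()

# 3. Pass-through filter to keep only the tabletop region along z
passthrough = cloud.make_passthrough_filter()
passthrough.set_filter_field_name('z')
passthrough.set_filter_limits(0.6, 1.1)
cloud = passthrough.filter()

# 4. RANSAC plane segmentation to separate the table from the objects
seg = cloud.make_segmenter()
seg.set_model_type(pcl.SACMODEL_PLANE)
seg.set_method_type(pcl.SAC_RANSAC)
seg.set_distance_threshold(0.01)
inliers, _ = seg.segment()
cloud_objects = cloud.extract(inliers, negative=True)   # everything except the table

# Euclidean clustering and SVM classification of each cluster follow from here.
```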
For a detailed explanation of the entire perception pipeline implementation, go to this Document.
- Ubuntu OS; at the time of writing, ROS only runs on Ubuntu.
- Python 2. Installation instructions can be found here.
- Robot Operating System (ROS) Kinetic. Installation instructions can be found here.
This project uses two external repositories. One for training the object classification model and a second for implementing the PR2 tabletop environment for pick and place.
This step is not required and can be skipped if the pre-trained model.sav is used. Otherwise, download and set up the Udacity Perception Exercises repository. If ROS is installed, follow the setup instructions outlined in the repository's README.
Download and set up the Udacity Perception Project repository. If ROS is installed, follow the setup instructions outlined in the repository's README.
Clone this repository:
$ git clone https://github.com/vi-ku/Perception-Project.git
If you wish to train the model and have followed the steps in the Training repository, copy the files in the /sensor_stick folder into the ~/catkin_ws/src/sensor_stick folder:
$ cd <this cloned repository path>/Perception-Project
$ cp -R sensor_stick/scripts/* ~/catkin_ws/src/sensor_stick/scripts
$ cp sensor_stick/src/sensor_stick/features.py ~/catkin_ws/src/sensor_stick/src/sensor_stick
Below is the confusion matrix of the trained model with normalised features.
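For reference, a normalised confusion matrix like this can be produced with scikit-learn roughly as follows. This is a hedged sketch, assuming training_set.sav is a pickled list of [feature, label] pairs; the project's own training script may use different settings.

```python
# Sketch of training a linear SVM and computing a row-normalised confusion matrix.
# Assumes training_set.sav is a pickled list of [feature, label] pairs (assumption).
import pickle
import numpy as np
from sklearn import svm
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix
from sklearn.preprocessing import LabelEncoder

training_set = pickle.load(open('training_set.sav', 'rb'))
X = np.array([item[0] for item in training_set])
y = LabelEncoder().fit_transform([item[1] for item in training_set])

clf = svm.SVC(kernel='linear')
y_pred = cross_val_predict(clf, X, y, cv=5)

cm = confusion_matrix(y, y_pred).astype(np.float64)
cm_normalised = cm / cm.sum(axis=1, keepdims=True)   # each row sums to 1
print(cm_normalised)
```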
If you wish to use the pre-trained model and have followed the steps in the Test Environment repository, copy the files model.sav and perception.py into the ~/catkin_ws/src/Perception-Project/pr2_robot/scripts folder:
$ cd <this cloned repository path>/Perception-Project
$ cp pr2_robot/scripts/model.sav pr2_robot/scripts/perception.py ~/catkin_ws/src/Perception-Project/pr2_robot/scripts
Now install missing dependencies using rosdep install:
$ cd ~/catkin_ws
$ rosdep install --from-paths src --ignore-src --rosdistro=kinetic -y
Build the project:
$ cd ~/catkin_ws
$ catkin_make
Add the following to your .bashrc file:
export GAZEBO_MODEL_PATH=~/catkin_ws/src/Perception-Project/pr2_robot/models:$GAZEBO_MODEL_PATH
If you haven’t already, the following line can be added to your .bashrc so that all new terminals source the workspace automatically:
source ~/catkin_ws/devel/setup.bash
In a terminal window, type the following:
$ cd ~/catkin_ws
$ roslaunch sensor_stick training.launch
You should arrive at a result similar to the one below.
In a new terminal, run the capture_features.py script to capture and save features for each of the objects in the environment.
$ cd ~/catkin_ws
$ rosrun sensor_stick capture_features.py
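Each captured sample is a feature vector built from colour histograms and surface-normal histograms of the object's point cloud. The sketch below shows that style of feature with NumPy; the bin counts, value ranges, and function names are illustrative assumptions rather than the exact code in sensor_stick/features.py.

```python
# Sketch of histogram features in the style of sensor_stick/features.py.
# Bin counts, ranges, and function names are illustrative assumptions.
import numpy as np

def colour_histogram(hsv_points, nbins=32):
    """hsv_points: (N, 3) array of HSV values in [0, 256)."""
    hists = [np.histogram(hsv_points[:, i], bins=nbins, range=(0, 256))[0]
             for i in range(3)]
    features = np.concatenate(hists).astype(np.float64)
    return features / np.sum(features)            # normalise to unit sum

def normal_histogram(normals, nbins=32):
    """normals: (N, 3) array of surface-normal components in [-1, 1]."""
    hists = [np.histogram(normals[:, i], bins=nbins, range=(-1, 1))[0]
             for i in range(3)]
    features = np.concatenate(hists).astype(np.float64)
    return features / np.sum(features)

# The final feature vector is the concatenation of both histograms:
# feature = np.concatenate((colour_histogram(hsv), normal_histogram(normals)))
```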
When it finishes running, you should have a training_set.sav file in ~/catkin_ws containing the features and labels for the dataset. Copy this file to the ~/catkin_ws/src/Perception-Project/pr2_robot/scripts folder and rename it to model.sav:
$ cd ~/catkin_ws
$ cp training_set.sav ~/catkin_ws/src/Perception-Project/pr2_robot/scripts/model.sav
In a terminal window, type the following:
$ cd ~/catkin_ws
$ roslaunch pr2_robot pick_place_project.launch
You should arrive at a result similar to the one below.
Once Gazebo and RViz are up and running, open a new terminal window and type:
$ cd ~/catkin_ws/src/Perception-Project/pr2_robot/scripts
$ rosrun pr2_robot perception.py
You should arrive at a result similar to the one below.
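Structurally, perception.py is a ROS node that subscribes to the PR2's RGB-D point-cloud topic, runs the pipeline in a callback, and publishes the detections. The skeleton below sketches only that plumbing; the callback body is a placeholder, and the topic name /pr2/world/points is an assumption about the simulation setup.

```python
#!/usr/bin/env python
# Minimal skeleton of a perception node; the callback body is a placeholder.
# The topic name /pr2/world/points is an assumption about the simulation setup.
import rospy
from sensor_msgs.msg import PointCloud2

def pcl_callback(pcl_msg):
    # Filtering, segmentation, clustering and classification would run here.
    rospy.loginfo('Received a cloud with %d bytes of data', len(pcl_msg.data))

if __name__ == '__main__':
    rospy.init_node('perception_sketch', anonymous=True)
    rospy.Subscriber('/pr2/world/points', PointCloud2, pcl_callback, queue_size=1)
    rospy.spin()
```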
Proceed through the project by pressing the ‘Next’ button in the RViz window when a prompt appears in your active terminal.
The project ends when the robot has successfully picked and placed all objects into their respective dropboxes (though sometimes the robot gets excited and throws objects across the room!).
Given a cluttered tabletop scenario, the perception pipeline will identify target objects from a so-called “Pick-List” in a particular order, pick up those objects and place them into their corresponding dropbox.
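The pick list and dropbox assignments are loaded onto the ROS parameter server by the project's launch and YAML files. The snippet below is a hedged sketch of matching detections against that list; the parameter names /object_list and /dropbox and their field layout are assumptions based on the project's configuration, and detected_objects stands in for the output of the perception pipeline.

```python
# Sketch of matching detected objects against the pick list on the parameter server.
# The /object_list and /dropbox parameter layouts are assumptions about the config.
import rospy

rospy.init_node('pick_list_sketch', anonymous=True)

detected_objects = []   # placeholder: would come from the perception pipeline

object_list_param = rospy.get_param('/object_list')   # e.g. [{'name': ..., 'group': ...}, ...]
dropbox_param = rospy.get_param('/dropbox')           # e.g. [{'name': ..., 'group': ..., 'position': ...}, ...]

group_to_box = {box['group']: box for box in dropbox_param}

for entry in object_list_param:
    matches = [obj for obj in detected_objects if obj.label == entry['name']]
    if not matches:
        rospy.logwarn('Pick-list object %s was not detected', entry['name'])
        continue
    box = group_to_box[entry['group']]
    rospy.loginfo('Place %s into the %s dropbox at %s',
                  entry['name'], box['name'], box['position'])
```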
- Fork it!
- Create your feature branch:
git checkout -b my-new-feature
- Commit your changes:
git commit -am 'Add some feature'
- Push to the branch:
git push origin my-new-feature
- Submit a pull request.
This project is licensed under the MIT License - see the LICENSE.md file for details.