
3. Simulation of robot using ROS and Gazebo


Recap

Our first part covered a basic understanding of the Robot Operating System (ROS): its architecture, features, and integration with external libraries. Part two went through the Gazebo simulator, URDF, creating a robot model, adding sensors with Gazebo plugins, and finally teleoperating the robot. Basic knowledge of ROS and the Gazebo simulator is required to proceed with this article. In this article, we are going to simulate our robot by providing capabilities like mapping and localization. Before we move ahead, it is important to understand the concept of SLAM.

What is SLAM?

SLAM stands for Simultaneous Localization and Mapping. It is the name given to the computational problem underlying robot autonomy and navigation. It is not a piece of code or software; rather, it is a family of algorithms that try to solve this problem. The problem statement of SLAM is to estimate the location of a robot (or any entity) within a map, given a series of sensor observations and control inputs. The algorithms used are based on mathematical approximations; some examples are the particle filter, the extended Kalman filter, and GraphSLAM.

How does SLAM work?

To understand how SLAM works, let us consider a simple example. Suppose we are moving from point A to point B. To successfully reach point B, we must know the directions. To get the directions, the human brain processes landmarks (buildings, a pole, etc.) previously captured by our eyes and responds through our legs to navigate. Similarly, in SLAM we use sensors for observations, algorithms for processing, and controls for actuation.

SLAM Requirements

Now let us come back to our robot. To incorporate capabilities like mapping and localization, we need to use one of the SLAM algorithms. In order to use them, we need to fulfill the requirements given below.

  • The first requirement of any SLAM algorithm is a range-sensing device. Range sensors help the robot sense the world around it. The most commonly used range sensor for SLAM is LIDAR (light detection and ranging). LIDARs are somewhat expensive, but reliable compared to low-cost range sensors. An alternative to LIDAR is to use a depth camera and convert the depth image into a laser scan (a minimal sketch of this conversion is given after this list).

  • The second requirement is to have enough landmarks. Moreover, the landmarks should be distinguishable and unique across different scenes.

  • The last requirement is to have an actuation system that can control the robot.
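
To make the depth-camera alternative concrete, here is a minimal sketch of how one row of a depth image can be turned into laser-scan-like ranges. The function name and the camera intrinsics (fx, cx) are hypothetical; in a real setup you would read the intrinsics from the camera_info topic, and packages such as depthimage_to_laserscan perform this conversion for you.

```python
import numpy as np

# Minimal sketch: convert one row of a depth image (in metres) into planar
# ranges, the same idea used when substituting a depth camera for a LIDAR.
# fx (focal length) and cx (principal point) are hypothetical values; read
# them from the camera_info topic in practice.
def depth_row_to_ranges(depth_row, fx, cx):
    cols = np.arange(len(depth_row))
    angles = np.arctan2(cols - cx, fx)    # bearing of each pixel column
    ranges = depth_row / np.cos(angles)   # range along each bearing
    return angles, ranges

# Example: a flat wall 2 m in front of a 640-pixel-wide depth camera.
angles, ranges = depth_row_to_ranges(np.full(640, 2.0), fx=525.0, cx=319.5)
```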

Examples of SLAM packages

ROS provides many packages for deploying SLAM algorithms; a few popular ones are described below.

Gmapping: This is an open-source 2D mapping package suitable for planar environments. Gmapping relies heavily on laser data and odometry. It is based on a Rao-Blackwellized particle filter algorithm.

Hector Mapping: This is an open-source 2D mapping package suitable for structured and unstructured environments. Hector mapping relies only on laser data and does not depend on odometry, so odometry errors are avoided. It is based on a fast scan-matching approach.

RTAB-Map: Stands for Real-Time Appearance-Based Mapping. It is an open-source RGB-D, stereo, and LIDAR graph-based SLAM approach built around an incremental appearance-based loop-closure detector. RTAB-Map supports various mapping configurations using LIDAR, a depth camera, an IMU, and odometry. It is suitable for structured and unstructured environments.

AMCL: Stands for Adaptive Monte Carlo Localization. It is an open-source probabilistic localization system for a robot moving in a 2D environment. Internally it works as follows: given a map, it uses a particle filter to track the pose of the robot.

We can utilize any of the above-mentioned packages for our robot. In this article, we will use RTAB-Map for mapping and AMCL for localization.
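
To give a feel for the particle-filter idea behind AMCL (and Gmapping), here is a toy one-dimensional localization step. It illustrates the general predict-weight-resample cycle only; it is not AMCL's actual implementation, and the corridor/wall numbers are made up for the example.

```python
import numpy as np

# Toy 1-D Monte Carlo localization step: predict with the motion command,
# weight each particle by how well it explains a range measurement to a known
# wall, then resample. Purely illustrative of the idea behind AMCL.
def mcl_step(particles, control, measured_range, wall_x, noise=0.05):
    # Predict: move every particle by the commanded distance plus motion noise.
    particles = particles + control + np.random.normal(0.0, noise, particles.shape)
    # Update: weight by the likelihood of the measured range to the wall.
    expected_range = wall_x - particles
    weights = np.exp(-0.5 * ((measured_range - expected_range) / noise) ** 2)
    weights /= weights.sum()
    # Resample: keep particles in proportion to their weights.
    keep = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[keep]

particles = np.random.uniform(0.0, 5.0, 500)   # initial belief: anywhere in a 5 m corridor
particles = mcl_step(particles, control=0.1, measured_range=3.0, wall_x=4.0)
```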

Mapping

If you recall, in the previous article we added a camera and a Hokuyo laser to our robot. RTAB-Map can use both the depth data from the camera and the scan data from the LIDAR, which is why we have opted for it in this article. All we need to do is install the RTAB-Map ROS package and use the example launch file. For the data to be picked up, the topics must be remapped correctly.
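
As a quick sanity check of the remappings, the sketch below asks the ROS master whether the inputs RTAB-Map typically consumes are actually being published. The topic names listed are assumptions; match them to whatever your URDF and Gazebo plugins publish.

```python
#!/usr/bin/env python
# Check that the topics RTAB-Map is expected to consume are being published
# before starting the mapping session. Topic names below are assumptions.
import rospy
import rostopic

EXPECTED = ['/camera/rgb/image_raw', '/camera/depth/image_raw',
            '/camera/rgb/camera_info', '/scan', '/odom']

if __name__ == '__main__':
    rospy.init_node('check_rtabmap_inputs')
    for topic in EXPECTED:
        msg_class, real_name, _ = rostopic.get_topic_class(topic)
        if msg_class is None:
            rospy.logwarn('%s is not being published -- fix the remapping', topic)
        else:
            rospy.loginfo('%s OK (%s)', real_name, msg_class.__name__)
```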

Our first goal is to map the world shown in the figure. The robot starts beside the white box. We manually move the robot with the keyboard teleoperation package provided by ROS in order to completely map the surroundings.
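
If you prefer scripting over the keyboard, a minimal driving node might look like the sketch below. /cmd_vel is the conventional topic for a differential-drive plugin, but it is an assumption here; remap it to whatever your robot's controller listens on.

```python
#!/usr/bin/env python
# Minimal scripted "teleoperation": drive forward with a gentle turn for a few
# seconds while the mapper runs, then stop.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('simple_drive')
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
rate = rospy.Rate(10)

cmd = Twist()
cmd.linear.x = 0.2      # m/s forward
cmd.angular.z = 0.1     # rad/s gentle turn

start = rospy.Time.now()
while not rospy.is_shutdown() and (rospy.Time.now() - start).to_sec() < 5.0:
    pub.publish(cmd)
    rate.sleep()

pub.publish(Twist())    # zero command stops the robot
```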

Localization

Once the mapping is completed, the next step is to localize the robot. Initially, the robot is not localized. To localize it, we need to provide a 2D pose estimate in RViz or drive the robot through one complete rotation. After successful localization, the laser data aligns correctly with the map. To navigate, we provide a 2D nav goal in RViz to move the robot toward the desired location.
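
The RViz buttons can also be reproduced programmatically. The sketch below publishes an initial pose for AMCL and then sends a navigation goal through move_base; the frame name, topic names, and goal coordinates follow common ROS conventions but are assumptions chosen for illustration.

```python
#!/usr/bin/env python
# Programmatic equivalent of RViz's "2D Pose Estimate" and "2D Nav Goal".
# Frame/topic names and coordinates are assumptions for illustration.
import rospy
import actionlib
from geometry_msgs.msg import PoseWithCovarianceStamped
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('localize_and_go')

# Give AMCL an initial guess of where the robot is on the map.
init_pub = rospy.Publisher('/initialpose', PoseWithCovarianceStamped,
                           queue_size=1, latch=True)
init = PoseWithCovarianceStamped()
init.header.frame_id = 'map'
init.pose.pose.position.x = 0.0
init.pose.pose.orientation.w = 1.0
rospy.sleep(1.0)
init_pub.publish(init)

# Send a navigation goal, equivalent to the "2D Nav Goal" button.
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()
goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.pose.position.x = 2.0
goal.target_pose.pose.orientation.w = 1.0
client.send_goal(goal)
client.wait_for_result()
```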

Results and discussion

The results obtained are illustrated below. They show a 3D map and the robot moving toward its desired goal. The map created is satisfactory, with a good level of detail preserved. The second figure illustrates the coordinate transform frames of the robot with respect to the global map frame. For more details regarding the code and the workflow, please refer to the repository link given below.

Repository link: https://github.com/KPIT-OpenSource/KBot

What’s Next?

In our next article, we will discuss self-driving cars and try to build a prototype of a real autonomous robot that can navigate in an indoor environment. Readers are advised to try out the other SLAM packages, like Hector Mapping and Gmapping, in order to thoroughly understand the process.
