
Implementation



After finalizing the design, we began implementing our code. Some aspects of the project were well covered by the design, while others turned out to behave a little differently than expected during implementation. Below we describe each portion of our implementation in detail, in approximately the order in which it is used in the end-to-end pipeline.

ROS and Gazebo Environment

We used an Ubuntu 16.04 virtual machine with ROS Kinetic and Gazebo 7. We used a prebuilt 6-DoF KUKA KR5 Gazebo URDF model from this repo. We got our table tennis table CAD model from here and our table tennis paddle CAD model from here. All the robot URDFs are in the kuka_kr5/kr5_description package and our Gazebo model and world files can be found in the kuka_kr5/kuka_kr5_gazebo package.

[Image: the KUKA KR5 arm in our Gazebo environment]

Ball Detection

The first component in our feedback pipeline is ball perception. Since every downstream component depends on the reported position of the ball, it was critical, especially in the early stages of the project, to have the ball detector working accurately so that any problems could be diagnosed more easily.

The ping pong ball is a bright orange object, which we realized would let us use something simple and elegant like HSV filtering to detect it. During the design phase we planned the environment for the highest-fidelity results, but one difficulty we did not account for was that our orange-colored KUKA KR5 arm could occasionally interfere with software tuned to detect a similarly orange-colored ball. We toned down or replaced the similar colors on the KUKA KR5 arm to prevent false detections.

[Image: close-up of the ping pong ball]

Up close, the ping pong ball is observed to be a few shades of orange.

The input to the ball detector is a pair of images from the top and side cameras. HSV filtering is performed on these images to isolate blobs of pixels whose color resembles that of the ping pong ball. The blob's top-right and bottom-left coordinates are then averaged to approximate its center, and the image coordinates are converted into PointStamped rostopic messages by linear interpolation over the camera's image pixels, calibrated relative to the camera's pose in the world frame. The tuned HSV range used to filter the orange ping pong ball is [0, 100, 100] to [50, 255, 255] (lower and upper bounds respectively).

The ball detection pipeline is implemented in ball_detection/src/ball_tracker.py.
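As a rough sketch of the detection step (assuming OpenCV; this is not the exact ball_tracker.py code), the HSV filter and center computation could look like:

```python
# Minimal sketch of HSV-based ball detection; the detect_ball helper
# and the input handling are illustrative, not the real implementation.
import cv2
import numpy as np

LOWER_ORANGE = np.array([0, 100, 100])    # tuned lower HSV bound
UPPER_ORANGE = np.array([50, 255, 255])   # tuned upper HSV bound

def detect_ball(bgr_image):
    """Return the (u, v) pixel center of the ball, or None if not found."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_ORANGE, UPPER_ORANGE)
    # [-2] indexing works for both OpenCV 3 and 4 return signatures.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)   # largest orange blob
    x, y, w, h = cv2.boundingRect(blob)
    # Average opposite corners of the bounding box to get the center.
    return (x + w / 2.0, y + h / 2.0)
```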

[Image: stages of the HSV filtering pipeline]

The HSV pipeline with the results highlighted by drawing a code-generated circle around the detected ball.

Ball Velocity Filtering

Now that we have an estimate of the ball's pose in world coordinates, we need to estimate the ball's velocity along its trajectory. At the same time, we can filter our position estimate by incorporating the dynamics of the falling ball. We tried two different methods to solve this. These filters are implemented in ball_detection/src/velocity_filter.py. We also needed a custom message to pass information about the ball between our topics; its definition is at ball_detection/msg/PosVelTimed.msg.
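For illustration, publishing such a message might look like the following; the field and topic names here are assumptions, since only the message's purpose (position, velocity, and a time stamp) is fixed:

```python
# Hypothetical usage of the custom PosVelTimed message; the actual
# field and topic names in the repo may differ.
import rospy
from geometry_msgs.msg import Point
from ball_detection.msg import PosVelTimed

rospy.init_node('velocity_filter_example')
pub = rospy.Publisher('/ball_detection/ball_state', PosVelTimed, queue_size=1)

msg = PosVelTimed()
msg.header.stamp = rospy.Time.now()        # time stamp of the estimate
msg.pos = Point(x=0.5, y=2.0, z=1.2)       # filtered ball position (m)
msg.vel = Point(x=0.0, y=-3.0, z=1.0)      # filtered ball velocity (m/s)
pub.publish(msg)
```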

Exponential Moving Average

We first implemented an exponential moving average (EMA) filter for both the predicted position and predicted velocity. An EMA filter computes $\hat{x}_t = \alpha z_t + (1 - \alpha) f(\hat{x}_{t-1})$, where $\hat{x}_t$ is the estimate of the current value, $f$ is the dynamics function for our system, $z_t$ is our new observation, and $\alpha$ is a mixing constant. We used standard ballistic equations for the ball's dynamics, so our estimated position becomes $\hat{p}_t = \alpha p_t + (1 - \alpha)\left(\hat{p}_{t-1} + \hat{v}_{t-1}\Delta t + \frac{1}{2}\vec{g}\,\Delta t^2\right)$ and our estimated velocity becomes $\hat{v}_t = \alpha v_t + (1 - \alpha)\left(\hat{v}_{t-1} + \vec{g}\,\Delta t\right)$. The mixing constant is a hyperparameter that determines how much to rely on the new observation versus the dynamics prediction; we chose $\alpha = [0.15, 0.15, 0.25]$ for the x, y, and z directions respectively. This filter worked very well, largely because the pose from the ball detection node was already a close estimate of the ground-truth pose. An example trajectory is shown below.
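As a concrete sketch (not the exact velocity_filter.py code), one EMA step under these ballistic dynamics could look like:

```python
# Illustrative EMA step blending a ballistic prediction with a new
# observation; per-axis mixing constants as described above.
import numpy as np

G = np.array([0.0, 0.0, -9.81])          # gravity (m/s^2)
ALPHA = np.array([0.15, 0.15, 0.25])     # observation weight per axis

def ema_update(p_est, v_est, p_obs, v_obs, dt):
    """One filter step: predict with ballistics, then mix in the observation."""
    p_pred = p_est + v_est * dt + 0.5 * G * dt**2   # predicted position
    v_pred = v_est + G * dt                         # predicted velocity
    p_new = ALPHA * p_obs + (1 - ALPHA) * p_pred
    v_new = ALPHA * v_obs + (1 - ALPHA) * v_pred
    return p_new, v_new
```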

Kalman Filter

Though our EMA filter was good, we wanted to try a Kalman filter since it is an industry-standard estimation algorithm. It acts very similarly to an EMA filter, except that the mixing constant is dynamically adjusted as a function of the measured variance of our state variables. Importantly, a Kalman filter assumes a linear state-space model, which we have, and that the sensor noise is Gaussian white noise, which is a reasonable assumption. We use the equations from a wonderful EECS 126 note.

[Image: Kalman filter predict and update equations]

We use [position, velocity] as our state vector and find the appropriate dynamics matrices after discretization; these encode the same ballistic dynamics as in the previous section, written in state-space form:

$$x_{t+1} = \begin{bmatrix} I & \Delta t\, I \\ 0 & I \end{bmatrix} x_t + \begin{bmatrix} \frac{1}{2}\Delta t^2\, I \\ \Delta t\, I \end{bmatrix} \vec{g}$$

Finally, we chose covariance matrices for the process noise and the measurement noise. However, the Kalman filter does not perform any better than the EMA filter, which we attribute to our original pose estimate (in blue below) already being very close to the ground truth (in teal).
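A minimal predict/update sketch of this filter (with an assumed frame rate and illustrative noise covariances, not our tuned values) might be:

```python
# Sketch of a position-measurement Kalman filter over [position, velocity].
import numpy as np

dt = 1.0 / 30.0                                  # assumed camera frame period (s)
g = np.array([0.0, 0.0, -9.81])

# Discrete constant-acceleration model; gravity enters as a known input.
A = np.block([[np.eye(3), dt * np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])
b = np.concatenate([0.5 * dt**2 * g, dt * g])
H = np.hstack([np.eye(3), np.zeros((3, 3))])     # we measure position only
Q = 1e-4 * np.eye(6)                             # illustrative process noise
R = 1e-2 * np.eye(3)                             # illustrative measurement noise

def kalman_step(x, P, z):
    """One predict + update cycle given a position measurement z."""
    x_pred = A @ x + b                           # predict with ballistic dynamics
    P_pred = A @ P @ A.T + Q
    S = H @ P_pred @ H.T + R                     # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(6) - K @ H) @ P_pred
    return x_new, P_new
```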

[Plot: raw detections (blue), filtered estimates, and ground-truth trajectory (teal)]

Ball Trajectory Prediction

The purpose of our trajectory estimation function is to estimate where the ping pong ball will meet the xz-plane in front of the robot, as in our original design.

The input to this portion of the software is the estimated ball positions and velocities along with time stamps, all from the filtering node above. To predict the trajectory of the ball, we use a simple ballistic model that considers only gravity and the coefficient of restitution for the bounce. The spin and aerodynamics of the ball are not included, as they are difficult to implement in both the simulator and the prediction algorithm.

The model simulates the trajectory forward and extrapolates the points to where the ball meets the xz-plane. The output is the predicted position on the plane, along with the ball's velocity and the predicted arrival time stamp. This code is located in ball_detection/predict.py.
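A simplified sketch of this forward simulation is below; the table height, restitution coefficient, plane location, and the assumption that the ball travels in the -y direction are all illustrative, not values taken from predict.py:

```python
# Illustrative ballistic rollout with a table bounce, stepped until the
# ball crosses the robot's hitting plane (an xz-plane at fixed y).
import numpy as np

G = -9.81           # gravity (m/s^2), z-up world frame
COR = 0.9           # assumed coefficient of restitution for the bounce
TABLE_Z = 0.76      # assumed table surface height (m)

def predict_plane_crossing(p, v, y_plane=0.0, dt=0.002, max_t=3.0):
    """Return (position, velocity, time) when the ball reaches y_plane."""
    p, v = np.array(p, float), np.array(v, float)
    t = 0.0
    while t < max_t:
        v[2] += G * dt
        p += v * dt
        t += dt
        # Bounce off the table: flip and damp the vertical velocity.
        if p[2] <= TABLE_Z and v[2] < 0:
            v[2] = -COR * v[2]
        # Crossed the hitting plane? (assumes ball moves in -y)
        if p[1] <= y_plane:
            return p, v, t
    return None   # ball never reached the plane within max_t
```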

We used a Jupyter notebook to test all our preliminary methods for the ball tracking pipeline, and also to replay ROS Bag files to generate nice visualizations like below. The notebook is at experiment_data/ball_path_test.ipynb.

[Plot: predicted trajectory and end position on the hitting plane]

Predicted trajectory and end position using filtered results from our vision-based ball detection node.

Arm Controller

Now that we know where the ball is going to be, we need to move the arm there. We use the standard ros_control framework to simulate the hardware interface on the robot. We use the PositionJointInterface along with a PID controller with manually tuned gains to set the position of any joint on the robot. However, controlling in joint space alone is not very useful, so we need a way to work in Cartesian space.
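For context, commanding a single joint through such a ros_control position controller might look like this; the namespace and controller names are hypothetical and depend on the controllers.yaml configuration:

```python
# Hypothetical joint-space command via a ros_control position controller.
import rospy
from std_msgs.msg import Float64

rospy.init_node('joint_command_example')
pub = rospy.Publisher('/kr5/joint_a1_position_controller/command',
                      Float64, queue_size=1)
rospy.sleep(1.0)           # give the publisher time to connect
pub.publish(Float64(0.5))  # command joint A1 to 0.5 rad
```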

To do this, we use the MoveIt framework to plan in Cartesian space. MoveIt combines an inverse kinematics (IK) solver with a motion-planning algorithm to create trajectories in Cartesian space and then convert them to joint space, all while obeying scene constraints. We used the MoveIt Setup Assistant to generate the boilerplate MoveIt files for our KR5 robot with the tool frame set to the table tennis paddle; the generated package is located in kuka_kr5/kuka_moveit_config. We found that the TRAC-IK solver is significantly better than the default KDL solver, and in fact necessary for real-time actuation. We also chose the RRTConnect planning algorithm and added the table tennis table as a box obstacle so that the paddle never hits the table during a trajectory.

Finally, we hooked MoveIt up to ros_control by creating a JointTrajectoryController, which the MoveGroup backend uses to actuate the motors given a planned trajectory. This is driven through the MoveGroupCommander Python API in kuka_kr5/planning/src/robot_controller.py, where we wrote a move_to_goal function that brings the table tennis racquet to any feasible world coordinate. One important detail: to get around MoveIt's internal speed limits and increase the trajectory speed, we scale the velocity of each waypoint by a user-demanded motor speed.
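A sketch of what such a helper might look like is below (the group name, fixed orientation, and function signature are assumptions, and it targets the Kinetic-era MoveGroupCommander API; the real robot_controller.py may differ):

```python
# Sketch of a move_to_goal-style helper: plan to a Cartesian pose, then
# retime the trajectory by scaling waypoint velocities.
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

group = moveit_commander.MoveGroupCommander('manipulator')  # assumed group name

def move_to_goal(x, y, z, speed_scale=1.0):
    pose = Pose()
    pose.position.x, pose.position.y, pose.position.z = x, y, z
    pose.orientation.w = 1.0          # placeholder paddle orientation
    group.set_pose_target(pose)
    plan = group.plan()               # returns a RobotTrajectory on Kinetic
    # Retime: compress time stamps and scale velocities/accelerations to
    # run faster than MoveIt's default limits.
    for point in plan.joint_trajectory.points:
        t = point.time_from_start.to_sec() / speed_scale
        point.time_from_start = rospy.Duration(t)
        point.velocities = [v * speed_scale for v in point.velocities]
        point.accelerations = [a * speed_scale**2 for a in point.accelerations]
    group.execute(plan, wait=True)
```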

Arm Trajectory Planner

Finally, we use the move_to_goal function to plan a trajectory directly to the intercept position given by our trajectory prediction node. We execute this trajectory as soon as a good estimate of the ball state is published, and the paddle then waits at the position where the ball will cross the xz-plane. We next calculate a second arm trajectory, the hit-back trajectory, to actually hit the ball to the other side.

We first calculate the hit-back orientation of the paddle using a heuristic based on the ball's predicted position and velocity; the code for this is located in kuka_kr5/planning/src/paddle_angle_dummy.py. The hit-back trajectory is a simple straight line normal to the outward face of the paddle, with its distance and velocity also tuned from the predicted state of the ball. A major challenge was timing the swing so the racquet makes solid contact with the ball. To do this, we use the trajectory duration produced by the MoveIt planner for the hit-back trajectory and ensure its execution finishes 0.1 seconds after the predicted hit time; this keeps the racquet in solid contact with the ball throughout the swing and returns the ball in over 80% of scenarios. The final controls pipeline is located at kuka_kr5/planning/src/ball_controller.py, which incorporates both robot_controller.py and paddle_angle_dummy.py.
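The timing logic reduces to starting the swing so that it finishes just after the predicted contact. A simplified sketch (function and argument names are assumptions, not the real ball_controller.py interface):

```python
# Sketch of hit-back timing: start the swing so that it completes
# 0.1 s after the predicted contact time.
import rospy

HIT_OFFSET = rospy.Duration(0.1)   # finish swinging 0.1 s after contact

def execute_hit(group, hit_plan, predicted_hit_time):
    """Delay execution so the planned swing ends just after the hit."""
    duration = hit_plan.joint_trajectory.points[-1].time_from_start
    start_time = predicted_hit_time + HIT_OFFSET - duration
    # Wait until it is time to swing, then execute the planned trajectory.
    while rospy.Time.now() < start_time:
        rospy.sleep(0.001)
    group.execute(hit_plan, wait=True)
```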

Ball Spawner

To actually spawn ping pong balls for the robot to hit, we have a ball spawner utility that uses a Gazebo plugin to give our ball model an initial velocity. The spawner calls the gazebo_ros spawn_urdf_model service. The ball's starting position and velocity can be adjusted manually from the CLI, or randomized. This randomness lets us thoroughly test our robotics pipeline against a variety of ball configurations. The file is located at kuka_kr5/kuka_kr5_gazebo/src/spawn_ball.py.
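A sketch of the spawning call is below; the URDF path, the velocity placeholder in the template, and the randomization ranges are assumptions, while the SpawnModel service itself is standard gazebo_ros:

```python
# Sketch of spawning a ball with a randomized initial state via the
# gazebo_ros SpawnModel service.
import random
import rospy
from gazebo_msgs.srv import SpawnModel
from geometry_msgs.msg import Pose, Point

rospy.init_node('spawn_ball_example')
rospy.wait_for_service('/gazebo/spawn_urdf_model')
spawn = rospy.ServiceProxy('/gazebo/spawn_urdf_model', SpawnModel)

with open('ball.urdf') as f:
    template = f.read()
# Assume the velocity plugin reads its initial velocity from the URDF,
# so we substitute a randomized value into the template before spawning.
vx, vy, vz = 0.0, -random.uniform(2, 4), random.uniform(1, 2)
urdf = template.replace('VELOCITY_PLACEHOLDER', '%f %f %f' % (vx, vy, vz))

pose = Pose(position=Point(x=0.0, y=2.5, z=1.0))   # illustrative start pose
spawn(model_name='ping_pong_ball', model_xml=urdf, robot_namespace='',
      initial_pose=pose, reference_frame='world')
```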

[GIF: the robot returning spawned balls]

Pictured: the robot hard at work reacting to randomly-generated balls.

Joy-Con Spawner

Our original goal also included letting a human serve the ball by flicking a Joy-Con and having the robot return it. However, we ran out of time to finish it and struggled with the Joy-Con's noisy acceleration sensors, which produced poor velocity estimates.

Next: Results and Conclusion →