wrong spatial alignment for visual inertial poses (T_W_E) and laser poses (T_B_H) #82
The problem is that 2D lasers provide constraints only on x-y motion and yaw, which makes the optimization problem degenerate so it does not converge. To solve this you would have to change the initial estimate and pass appropriate priors to the optimization algorithm.
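The degeneracy can be seen directly in the hand-eye equation A·X = X·B: if every relative motion B is planar (yaw plus x-y translation), then shifting the extrinsic X along the rotation axis leaves the equation satisfied, so that component is unobservable. A minimal numpy sketch (all transforms are made up for illustration, not taken from the attached data):

```python
import numpy as np

def planar_motion(yaw, tx, ty):
    """Homogeneous transform for planar motion: yaw about z, translation in x-y."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = [tx, ty, 0]
    return T

def z_shift(d):
    """Pure translation of d along z."""
    T = np.eye(4)
    T[2, 3] = d
    return T

# Hypothetical ground-truth extrinsic X between the two sensors.
cx, sx = np.cos(0.3), np.sin(0.3)
X = np.eye(4)
X[:3, :3] = [[1, 0, 0], [0, cx, -sx], [0, sx, cx]]
X[:3, 3] = [0.1, -0.2, 0.3]

B = planar_motion(0.5, 1.0, 0.2)   # planar relative motion of the "hand"
A = X @ B @ np.linalg.inv(X)       # corresponding motion of the "eye"

# Any z-shifted extrinsic satisfies the same hand-eye equation A X = X B,
# because a planar B commutes with a translation along its rotation axis.
X_shifted = X @ z_shift(5.0)
print(np.allclose(A @ X_shifted, X_shifted @ B))  # True: z-offset unobservable
```

With only planar motions, arbitrarily many extrinsics explain the data equally well, which is exactly why the optimizer cannot converge without extra information.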
Do you mean the dual-quaternion-based hand-eye calibration requires the sensor to rotate about at least two axes? As you suggested, the initial estimates in calibration.json were updated to the values below:
Then the following command was executed:
The resulting calibration_optimized.json has the following content:
These values look reasonable. For this test, the optimization algorithm indeed handles the degenerate case well.
Yes, that sounds reasonable. The idea of hand-eye calibration is to calibrate two fully 6-DoF trajectories, which means it requires motion in all degrees of freedom. For the degenerate case, the first problem is indeed coming up with an initial estimate; the optimizer could then theoretically handle the rest. Did it in the end converge to a visually correct-looking result? Guaranteeing that it works would require giving the optimizer some knowledge in the form of priors (for example, an initial height estimate with a covariance). This does not mean that you could not obtain a result in some cases. But since, without full 6-DoF movement, some of the variables are not fully constrained, there could be multiple solutions (or even infinitely many), so you have no clear guarantees.
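A height prior with a covariance can be folded into a least-squares problem as an extra whitened residual row. The following is a toy linear illustration of that idea, not the repository's actual optimizer; all numbers are invented:

```python
import numpy as np

# Toy setup: the data rows constrain x, y, yaw but say nothing about z
# (as with planar lidar motion), so the plain system is rank-deficient.
# A whitened prior row (z - z_prior) / z_sigma makes it full rank.
z_prior, z_sigma = 0.30, 0.05           # assumed height estimate, 5 cm std-dev

meas = np.array([[0.11, -0.19, 0.49],
                 [0.09, -0.21, 0.51],
                 [0.10, -0.20, 0.50]])  # noisy (x, y, yaw) observations

# Jacobian and targets for unknowns p = (x, y, yaw, z).
J_data = np.tile(np.eye(3, 4), (len(meas), 1))    # z column is all zeros
r_data = meas.ravel()
J_prior = np.array([[0.0, 0.0, 0.0, 1.0 / z_sigma]])
r_prior = np.array([z_prior / z_sigma])

J = np.vstack([J_data, J_prior])
r = np.concatenate([r_data, r_prior])
p, *_ = np.linalg.lstsq(J, r, rcond=None)
print(np.round(p, 3))   # x, y, yaw averaged from the data; z pinned by the prior
```

A smaller z_sigma (more confident prior) pulls z harder toward z_prior; in a real nonlinear estimator the same term would simply appear as one more weighted residual.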
I don't think the alignment is correct. They are both on the same z-plane, but the thin red and green lines between the frames, which represent the trajectories, do not overlap. I still see about a 90-degree rotation offset between the aligned trajectories.
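A rough initial rotation and translation between two trajectories can be recovered with Horn's closed-form rigid alignment, given time-synchronized corresponding positions. This is a generic sketch of that method, not code from this repository:

```python
import numpy as np

def horn_align(P, Q):
    """Closed-form rigid alignment (Horn / Umeyama without scale):
    find R, t such that Q ≈ R @ P + t for corresponding 3xN point sets."""
    mp = P.mean(axis=1, keepdims=True)
    mq = Q.mean(axis=1, keepdims=True)
    H = (Q - mq) @ (P - mp).T                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflection
    R = U @ D @ Vt
    t = mq - R @ mp
    return R, t

# Check on synthetic data (hypothetical trajectory positions).
rng = np.random.default_rng(1)
P = rng.standard_normal((3, 50))
ang = 0.7
R_true = np.array([[np.cos(ang), -np.sin(ang), 0],
                   [np.sin(ang),  np.cos(ang), 0],
                   [0, 0, 1]])
t_true = np.array([[0.3], [-0.1], [0.5]])
R, t = horn_align(P, R_true @ P + t_true)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

The reflection fix via the sign of det(U·Vt) matters for near-planar trajectories, where the smallest singular value of the cross-covariance is close to zero.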
I appreciate you pointing out my misinterpretation of the figure. I used Horn's method, as implemented in https://vision.in.tum.de/data/datasets/rgbd-dataset/tools, to get a rough initial value for T_H_E, implemented a correlation approach to estimate the initial time offset, and finally updated calibration.json to:
With the updated calibration.json, after running the batch_estimator, calibration_optimized.json looks like:
The results are visualized as follows. I believe these results look fine. FYI, the log for batch_estimator is also attached. I also have one comment about the initial time alignment in this hand_eye_calibration repository.
Dear author,
Thank you for open-sourcing the hand-eye calibration algorithm.
Following the instructions in README.md, I built the hand-eye calibration package on Ubuntu 16.04 with ROS Kinetic. The data was collected with a lidar (laser range finder), a camera, and an IMU, all three rigidly mounted to a moving platform.
The poses of the camera (eye) expressed in a world frame (the camera frame at the start), T_W_E, were obtained by a visual-inertial odometry method with loop closure, and the poses of the lidar (hand) expressed in a base frame (the lidar frame at the start), T_B_H, were computed with a 2D lidar package. The data files are in the input directory of the attached zip file.
mydata.zip
In the end, the following command was executed.
And the resulting log, figures, and data files are in the output directory of the attached zip file.
It appears that the temporal calibration is done well. However, the spatial calibration is wildly wrong.
Can you please look into the issue? Am I missing certain points?