OpenCV methods return unexpected results when calibrating the simulated RIWMPBOT #994
Hi @Kazadhum , I am going crazy thinking about these. I am considering the hypothesis that it's the real results that do not make sense. Can you tell me how to run these? I would need the real dataset as well |
The command I used for the real data:
If you want to test with different patterns and methods, change the corresponding arguments. I'll send you the dataset via e-mail right away! |
I think so! Seems like I didn't catch these when I filtered the dataset... |
For testing (delete collections 27 and 57 from dataset_filtered.json before doing this)....
Alternatively, instead of copying the TFs, we can perform a sanity check by calibrating using OpenCV with the ATOM-calibrated dataset as input (there is no need to split into train and test here, since the purpose is only a sanity check). |
From this we should also remove collection 34. I created dataset_filtered2.json without collections 27, 34 and 57. It's attached here. |
Hi @Kazadhum , I'm not sure I agree with all the commands. Here are my suggestions for changes, including using dataset_filtered2.json. |
Hi @miguelriemoliveira! That command doesn't work for me. This is what I ran instead:

```bash
clear && export DATASET_PATH="$ATOM_DATASETS/riwmpbot_real/diogo_tests" \
&& \
rosrun atom_calibration calibrate \
  -json $DATASET_PATH/dataset_filtered2.json \
  -uic -v -gtpp \
  -jsf "lambda x: False" \
  -jpsf "lambda x: False" \
&& mv $DATASET_PATH/atom_calibration.json \
  $DATASET_PATH/dataset_patterns_corrected.json \
&& \
rosrun atom_evaluation copy_tfs_from_dataset.py \
  -sd $DATASET_PATH/dataset_patterns_corrected.json \
  -td $DATASET_PATH/dataset_filtered2.json \
  -pll flange forearm_link upper_arm_link \
  -cll charuco_200x200_8x8 charuco_170x100_3x6 charuco_200x120_3x6 \
&& \
mv $DATASET_PATH/dataset_filtered2_tfs_copied.json \
  $DATASET_PATH/dataset_filtered_with_atom_patterns.json \
&& \
rosrun atom_calibration split_atom_dataset \
  -json $DATASET_PATH/dataset_filtered_with_atom_patterns.json \
  -tp 70 \
  -ss 1 \
&& \
rosrun atom_evaluation cv_eye_to_hand.py \
  -json $DATASET_PATH/dataset_filtered_with_atom_patterns_train.json \
  -c rgb_world -p upper_arm_pattern \
  -hl upper_arm_link -bl base_link \
  -uic -mn tsai \
&& \
rosrun atom_evaluation single_rgb_evaluation \
  -train_json $DATASET_PATH/hand_eye_tsai_rgb_world.json \
  -test_json $DATASET_PATH/dataset_filtered_with_atom_patterns_test.json \
  -uic -sfr -sfrn /tmp/single_rgb_evaluation.csv
```

Regardless, I think I'm going to follow your suggestion and use the ATOM-calibrated dataset as input for the OpenCV calibrations, to see how much their solutions deviate from ours. So, this is the command I'm running:

```bash
clear && export DATASET_PATH="$ATOM_DATASETS/riwmpbot_real/diogo_tests" \
&& \
rosrun atom_calibration calibrate \
  -json $DATASET_PATH/dataset_filtered2.json \
  -uic -v -gtpp \
  -jsf "lambda x: False" \
  -jpsf "lambda x: False" \
&& \
rosrun atom_evaluation cv_eye_to_hand.py \
  -json $DATASET_PATH/atom_calibration.json \
  -c rgb_world -p upper_arm_pattern \
  -hl upper_arm_link -bl base_link \
  -ctgt -uic -mn tsai
```
|
I have some results for the real system. Once again, I'm giving an ATOM-calibrated dataset as input for the OpenCV calibrations. Since these are closed-form solutions and are invariant w.r.t. an initial guess, the choice of calibrated dataset vs. uncalibrated dataset as input does not impact the final result. For the Tsai method, here are my results:

**Upper Arm Pattern**

```bash
clear && export DATASET_PATH="$ATOM_DATASETS/riwmpbot_real/diogo_tests" \
&& \
rosrun atom_calibration calibrate \
  -json $DATASET_PATH/dataset_filtered2.json \
  -uic -v -gtpp \
  -jsf "lambda x: False" \
  -jpsf "lambda x: False" \
&& \
rosrun atom_evaluation cv_eye_to_hand.py \
  -json $DATASET_PATH/atom_calibration.json \
  -c rgb_world -p upper_arm_pattern \
  -hl upper_arm_link -bl base_link \
  -ctgt -uic -mn tsai
```
**Forearm Pattern**

**Hand Pattern**
So I'd say this is following the expected behaviour. These methods were intended to be used with a pattern in the manipulator's end-effector. |
Hi @Kazadhum , I think what this proves is that we get a more similar result between Tsai and Atom when we use the hand, in comparison to when we use the upper arm. Do you agree? |
So I would be expecting to see a larger difference in the reprojection errors of ATOM vs. Tsai for the upper arm, in comparison to ATOM vs. Tsai for the hand. Can you confirm this is what occurs? |
Hi @miguelriemoliveira! Yes, I agree with that! When it comes to reprojection errors, I believe we already have this data, but I'll run the calibrations anyway. So, we have the following after evaluation...

**ATOM**

**Tsai (Upper Arm)**

**Tsai (Hand Pattern)**
So it checks out -- the calibration using the hand pattern is much closer in reprojection error to ATOM than the calibration using the upper arm pattern. |
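For context, the RMS reprojection error that `single_rgb_evaluation` reports can be sketched as projecting the 3D pattern corners through a pinhole model and comparing them to the 2D detections. This is a simplified, hypothetical helper (no distortion model; not ATOM's actual code):

```python
import numpy as np

def reproject_rmse(K, R, t, pts_world, pts_detected):
    """RMS reprojection error of 3D pattern corners against pixel detections.

    K: 3x3 intrinsics; R, t: pattern-to-camera pose; pts_world: Nx3 corners
    in the pattern frame; pts_detected: Nx2 detected pixel coordinates.
    """
    cam = (R @ pts_world.T + t.reshape(3, 1)).T     # corners in camera frame
    proj = (K @ cam.T).T
    px = proj[:, :2] / proj[:, 2:3]                 # perspective divide
    return float(np.sqrt(np.mean(np.sum((px - pts_detected) ** 2, axis=1))))
```

With this definition, comparing the error of the ATOM solution against the Tsai solution on the same test collections is exactly the kind of apples-to-apples comparison being made above.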
Repeating this experiment for the simulated system, to see if this behaviour is consistent with the real system...

**ATOM**

```bash
export DATASET_PATH=/home/diogo/atom_datasets/riwmpbot/train/ \
&& \
rosrun atom_calibration split_atom_dataset \
  -json $DATASET_PATH/dataset.json \
  -tp 70 \
  -ss 1 \
&& \
rosrun atom_calibration calibrate \
  -json $DATASET_PATH/dataset_train.json \
  -uic -v -gtpp \
  -jsf "lambda x: False" \
  -jpsf "lambda x: False" \
&& \
rosrun atom_evaluation single_rgb_evaluation \
  -train_json $DATASET_PATH/atom_calibration.json \
  -test_json $DATASET_PATH/dataset_test.json \
  -uic -sfr -sfrn /tmp/single_rgb_evaluation.csv
```
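The `split_atom_dataset` step used throughout (`-tp 70 -ss 1`) just partitions the collections deterministically into a 70% train / 30% test split with a fixed shuffle seed. A rough pure-Python sketch (hypothetical helper name and structure; not the actual ATOM implementation):

```python
import random

def split_collections(keys, train_pct=70, seed=1):
    """Deterministically split collection keys into train/test lists.

    Sorting first makes the shuffle reproducible regardless of the
    input ordering; the same seed always yields the same split.
    """
    keys = sorted(keys)
    random.Random(seed).shuffle(keys)
    n_train = round(len(keys) * train_pct / 100)
    return keys[:n_train], keys[n_train:]
```

Fixing the seed (`-ss 1`) is what makes the ATOM and OpenCV runs below comparable, since both see the same train and test collections.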
**Tsai**

**Upper Arm Pattern**

```bash
clear && export DATASET_PATH="$ATOM_DATASETS/riwmpbot/train" \
&& \
rosrun atom_calibration split_atom_dataset \
  -json $DATASET_PATH/dataset.json \
  -tp 70 \
  -ss 1 \
&& \
rosrun atom_calibration calibrate \
  -json $DATASET_PATH/dataset_train.json \
  -uic -v -gtpp \
  -jsf "lambda x: False" \
  -jpsf "lambda x: False" \
&& \
rosrun atom_evaluation cv_eye_to_hand.py \
  -json $DATASET_PATH/atom_calibration.json \
  -c rgb_world -p upper_arm_pattern \
  -hl upper_arm_link -bl base_link \
  -ctgt -uic -mn tsai \
&& \
rosrun atom_evaluation single_rgb_evaluation \
  -train_json $DATASET_PATH/hand_eye_tsai_rgb_world.json \
  -test_json $DATASET_PATH/dataset_test.json \
  -uic -cpt -sfr -sfrn /tmp/single_rgb_evaluation.csv
```
**Hand Pattern**

```bash
clear && export DATASET_PATH="$ATOM_DATASETS/riwmpbot/train" \
&& \
rosrun atom_calibration split_atom_dataset \
  -json $DATASET_PATH/dataset.json \
  -tp 70 \
  -ss 1 \
&& \
rosrun atom_calibration calibrate \
  -json $DATASET_PATH/dataset_train.json \
  -uic -v -gtpp \
  -jsf "lambda x: False" \
  -jpsf "lambda x: False" \
&& \
rosrun atom_evaluation cv_eye_to_hand.py \
  -json $DATASET_PATH/atom_calibration.json \
  -c rgb_world -p hand_pattern \
  -hl flange -bl base_link \
  -ctgt -uic -mn tsai \
&& \
rosrun atom_evaluation single_rgb_evaluation \
  -train_json $DATASET_PATH/hand_eye_tsai_rgb_world.json \
  -test_json $DATASET_PATH/dataset_test.json \
  -uic -cpt -sfr -sfrn /tmp/single_rgb_evaluation.csv
```
So this is contradictory to the results from the real system. Strange. |
But you are not copying the atom estimated pattern transforms to the dataset before running the evaluation, right? |
You're absolutely right! Here are the new commands I ran. First I calibrate with ATOM, then I copy the pattern TFs to the dataset, split it into train and test datasets, calibrate using Tsai, and then evaluate.

**Upper Arm Pattern**

```bash
clear && export DATASET_PATH="$ATOM_DATASETS/riwmpbot/train" \
&& \
rosrun atom_calibration calibrate \
  -json $DATASET_PATH/dataset.json \
  -uic -v -gtpp \
  -jsf "lambda x: False" \
  -jpsf "lambda x: False" \
&& mv $DATASET_PATH/atom_calibration.json \
  $DATASET_PATH/dataset_patterns_corrected.json \
&& \
rosrun atom_evaluation copy_tfs_from_dataset.py \
  -sd $DATASET_PATH/dataset_patterns_corrected.json \
  -td $DATASET_PATH/dataset.json \
  -pll flange forearm_link upper_arm_link \
  -cll charuco_200x200_8x8 charuco_170x100_3x6 charuco_200x120_3x6 \
&& \
mv $DATASET_PATH/dataset_tfs_copied.json \
  $DATASET_PATH/dataset_with_atom_patterns.json \
&& \
rosrun atom_calibration split_atom_dataset \
  -json $DATASET_PATH/dataset_with_atom_patterns.json \
  -tp 70 \
  -ss 1 \
&& \
rosrun atom_evaluation cv_eye_to_hand.py \
  -json $DATASET_PATH/dataset_with_atom_patterns_train.json \
  -c rgb_world -p upper_arm_pattern \
  -hl upper_arm_link -bl base_link \
  -ctgt -uic -mn tsai \
&& \
rosrun atom_evaluation single_rgb_evaluation \
  -train_json $DATASET_PATH/hand_eye_tsai_rgb_world.json \
  -test_json $DATASET_PATH/dataset_with_atom_patterns_test.json \
  -uic -cpt -sfr -sfrn /tmp/single_rgb_evaluation.csv
```
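The `copy_tfs_from_dataset.py` step above amounts to overwriting selected per-collection transforms in the target dataset with those from the ATOM-calibrated one, so that the OpenCV methods see ATOM's estimated pattern poses. A rough sketch under an assumed ATOM-like JSON layout (`dataset['collections'][key]['transforms'][tf_key]`); the helper name and structure are hypothetical, not the real script:

```python
def copy_pattern_tfs(source, target, tf_keys):
    """Copy selected per-collection transforms from `source` to `target`.

    Collections present only in one dataset, or transforms missing from
    the source collection, are skipped. Mutates and returns `target`.
    """
    for key, coll in target.get('collections', {}).items():
        src_coll = source.get('collections', {}).get(key)
        if src_coll is None:
            continue
        for tf in tf_keys:
            if tf in src_coll['transforms']:
                coll['transforms'][tf] = src_coll['transforms'][tf]
    return target
```

The point of the step is precisely what @miguelriemoliveira flagged: without it, the evaluation would mix ATOM pattern poses with OpenCV extrinsics.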
**Hand Pattern**

```bash
clear && export DATASET_PATH="$ATOM_DATASETS/riwmpbot/train" \
&& \
rosrun atom_calibration calibrate \
  -json $DATASET_PATH/dataset.json \
  -uic -v -gtpp \
  -jsf "lambda x: False" \
  -jpsf "lambda x: False" \
&& mv $DATASET_PATH/atom_calibration.json \
  $DATASET_PATH/dataset_patterns_corrected.json \
&& \
rosrun atom_evaluation copy_tfs_from_dataset.py \
  -sd $DATASET_PATH/dataset_patterns_corrected.json \
  -td $DATASET_PATH/dataset.json \
  -pll flange forearm_link upper_arm_link \
  -cll charuco_200x200_8x8 charuco_170x100_3x6 charuco_200x120_3x6 \
&& \
mv $DATASET_PATH/dataset_tfs_copied.json \
  $DATASET_PATH/dataset_with_atom_patterns.json \
&& \
rosrun atom_calibration split_atom_dataset \
  -json $DATASET_PATH/dataset_with_atom_patterns.json \
  -tp 70 \
  -ss 1 \
&& \
rosrun atom_evaluation cv_eye_to_hand.py \
  -json $DATASET_PATH/dataset_with_atom_patterns_train.json \
  -c rgb_world -p hand_pattern \
  -hl flange -bl base_link \
  -ctgt -uic -mn tsai \
&& \
rosrun atom_evaluation single_rgb_evaluation \
  -train_json $DATASET_PATH/hand_eye_tsai_rgb_world.json \
  -test_json $DATASET_PATH/dataset_with_atom_patterns_test.json \
  -uic -cpt -sfr -sfrn /tmp/single_rgb_evaluation.csv
```
The errors are slightly different, but not significantly so, and the same strange observation remains, it seems... |
Hello @miguelriemoliveira! I have new results for the simulated system using the Shah and Li methods, implemented in #995, but the same observation can still be made... Since these methods estimate the pattern's pose, it isn't necessary to copy these TFs from a previous ATOM calibration. The command I used was the following:

```bash
clear && export DATASET_PATH="$ATOM_DATASETS/riwmpbot/train" \
&& \
rosrun atom_calibration split_atom_dataset \
  -json $DATASET_PATH/dataset.json \
  -tp 70 \
  -ss 1 \
&& \
rosrun atom_evaluation cv_eye_to_hand_robot_world.py \
  -json $DATASET_PATH/dataset_train.json \
  -c rgb_world -p upper_arm_pattern \
  -hl upper_arm_link -bl base_link \
  -ctgt -uic -mn shah \
&& \
rosrun atom_evaluation single_rgb_evaluation \
  -train_json $DATASET_PATH/hand_eye_shah_rgb_world.json \
  -test_json $DATASET_PATH/dataset_test.json \
  -uic -cpt -sfr -sfrn /tmp/single_rgb_evaluation.csv
```

The only thing I changed to get these results was the pattern. So we have the following...

**Upper Arm Pattern**
**Hand Pattern**
Now testing with the real data... |
Now with the real dataset...

**Upper Arm Pattern**

**Hand Pattern**

|
The command I used for testing with the real dataset for the hand pattern was:

```bash
export DATASET_PATH=$ATOM_DATASETS/riwmpbot_real/diogo_tests/ \
&& \
rosrun atom_calibration split_atom_dataset \
  -json $DATASET_PATH/dataset_filtered2.json \
  -tp 70 \
  -ss 1 \
&& \
rosrun atom_evaluation cv_eye_to_hand_robot_world.py \
  -json $DATASET_PATH/dataset_filtered2_train.json \
  -c rgb_world -p hand_pattern \
  -hl flange -bl base_link \
  -ctgt -uic -mn shah \
&& \
rosrun atom_evaluation single_rgb_evaluation \
  -train_json $DATASET_PATH/hand_eye_shah_rgb_world.json \
  -test_json $DATASET_PATH/dataset_filtered2_test.json \
  -uic -cpt -sfr -sfrn /tmp/single_rgb_evaluation.csv
```

and for the Upper Arm Pattern:

```bash
export DATASET_PATH=$ATOM_DATASETS/riwmpbot_real/diogo_tests/ \
&& \
rosrun atom_calibration split_atom_dataset \
  -json $DATASET_PATH/dataset_filtered2.json \
  -tp 70 \
  -ss 1 \
&& \
rosrun atom_evaluation cv_eye_to_hand_robot_world.py \
  -json $DATASET_PATH/dataset_filtered2_train.json \
  -c rgb_world -p upper_arm_pattern \
  -hl upper_arm_link -bl base_link \
  -ctgt -uic -mn shah \
&& \
rosrun atom_evaluation single_rgb_evaluation \
  -train_json $DATASET_PATH/hand_eye_shah_rgb_world.json \
  -test_json $DATASET_PATH/dataset_filtered2_test.json \
  -uic -cpt -sfr -sfrn /tmp/single_rgb_evaluation.csv
```

For the forearm (added by Miguel)

Note that the pattern transformations from the robot-world-hand-eye calibration are used for evaluation, since I'm using the `-cpt` flag. |
Hi @Kazadhum , I am a bit lost. The results seem very good to me. They make sense. This is with the upper arm: only the upper arm has a good reprojection error of 2.1; all the others have large errors, which makes perfect sense (since we are calibrating the upper arm). As for the forearm: again, the forearm has a very low error (6.6), and the upper arm (which is behind it in the kinematic chain) also has a small error. The hand, which comes after it in the kinematic chain, has a very large error (42.6). For the hand results: the hand has a very small error (2.9), and the others also have small errors, because they are behind it in the kinematic chain. These results make perfect sense, and are not like the ones in the results spreadsheet. |
I also revisited the sim results in your comments above and I also think we can explain those well. Let's talk this afternoon. |
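The intuition above, that a calibration error in one link only degrades the links further along the kinematic chain, can be illustrated with a toy one-dimensional chain (link lengths invented for illustration):

```python
# Hypothetical 1-D illustration: an offset error injected at one link
# only shifts the links *after* it in the kinematic chain.
link_lengths = [0.3, 0.25, 0.2]   # base->upper_arm->forearm->hand

def chain_positions(lengths, error_at=None, err=0.01):
    """Cumulative end positions of each link; optionally inject an
    offset error at one link to mimic a bad calibration of that link."""
    positions, x = [], 0.0
    for i, length in enumerate(lengths):
        x += length + (err if i == error_at else 0.0)
        positions.append(x)
    return positions

true_pos = chain_positions(link_lengths)
bad_pos = chain_positions(link_lengths, error_at=1)   # error in the forearm
errs = [abs(a - b) for a, b in zip(true_pos, bad_pos)]
# upper_arm is unaffected; forearm and hand are both shifted
```

This matches the reported pattern: calibrating the forearm leaves the upper arm accurate (it is before the error in the chain) while the hand, downstream of it, accumulates the error.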
Hi @miguelriemoliveira!
Calibrations using either the upper arm or forearm pattern come closer to the GT than calibrations using the hand pattern. This is weird, because these methods were clearly designed to use a pattern at the end of the kinematic chain. No joint noise is used here.
Critically, this does not occur when calibrating using real data.
Calibrating: Simulated, no joint noise added, upper arm pattern, Tsai method
Results:
Calibrating: Simulated, no joint noise added, forearm pattern, Tsai method
Results:
Calibrating: Simulated, no joint noise added, hand pattern, Tsai method
Results: