
ATOM alternative methods comparisons #891

Open
1 of 9 tasks
miguelriemoliveira opened this issue Mar 21, 2024 · 17 comments
Labels: enhancement (New feature or request)

@miguelriemoliveira
Member

miguelriemoliveira commented Mar 21, 2024

The idea is to develop several alternative comparisons using state-of-the-art implementations "converted" to ATOM, i.e., adapted so that they generate an ATOM dataset which can be evaluated in the same way our calibrations are.

Some alternatives:

Please add any others you can think of.
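
For reference, a minimal sketch of what such a "conversion" could look like: the transform estimated by an external method is written back into a copy of the ATOM dataset so the usual evaluation scripts can score it. The 'collections'/'transforms'/'quat'/'trans' layout and the function name are assumptions for illustration, not the actual ATOM tooling.

import copy
import json


def write_estimate_to_dataset(dataset_path, output_path, transform_key, quat, trans):
    # Load the original ATOM dataset.
    with open(dataset_path, 'r') as f:
        dataset = json.load(f)

    corrected = copy.deepcopy(dataset)

    # Overwrite the calibrated transform in every collection with the estimate
    # produced by the external (non-ATOM) method.
    for collection in corrected['collections'].values():
        collection['transforms'][transform_key]['quat'] = list(quat)
        collection['transforms'][transform_key]['trans'] = list(trans)

    # Save under a new name so the ATOM evaluation scripts can consume it.
    with open(output_path, 'w') as f:
        json.dump(corrected, f, indent=2)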

@Kazadhum
Collaborator

Hi @miguelriemoliveira and @manuelgitgomes! Quick question about the first of these alternatives to ATOM: Livox is a targetless calibration method. As such, can we really compare it to ATOM and, perhaps more importantly, does it even make sense to compare these results? I assume targetless calibration is used for different situations than what we usually have with ATOM (at least most targetless calibration papers I've read dealt with situations where no patterns were available, like roadside LiDARs).

@Kazadhum
Collaborator

As a note, I think we might run into a problem later down the line with the OpenCV calibration. The OpenCV method does not "accept" partial detections, and we only have 5 collections in our real riwmpbot dataset with non-partial detections of the hand_pattern.

I don't think we should worry about this for now and should instead focus on implementing these calibration alternatives, but it is worth keeping in mind so we can plan ahead.

Tagging @miguelriemoliveira and @manuelgitgomes for visibility.
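
As a rough illustration of the concern above, one could count the collections whose detections of a given pattern are complete (non-partial) before deciding whether a dataset is usable by the OpenCV method. The field names ('labels', 'detected', 'idxs') and the completeness check are assumptions about the dataset layout, not the actual atom_evaluation code.

import json


def complete_detection_keys(dataset_path, pattern, sensor, expected_corners):
    with open(dataset_path, 'r') as f:
        dataset = json.load(f)

    keys = []
    for key, collection in dataset['collections'].items():
        labels = collection['labels'][pattern][sensor]
        # A detection is "non-partial" if every pattern corner was labelled.
        if labels['detected'] and len(labels['idxs']) == expected_corners:
            keys.append(key)
    return keys


# Hypothetical usage:
# keys = complete_detection_keys('dataset.json', 'hand_pattern', 'rgb_world', 9 * 6)
# print(len(keys), 'collections with complete detections')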

@miguelriemoliveira
Member Author

Hi @Kazadhum ,

Livox is a targetless calibration method. As such, can we really compare it to ATOM and, perhaps more importantly, does it even make sense to compare these results? I assume targetless calibration is used for different situations than what we usually have with ATOM (at least most targetless calibration papers I've read dealt with situations where no patterns were available, like roadside LiDARs).

You are right. It does not make sense to compare with targetless approaches, or at least we can confidently say that, at this stage, we should first spend some time searching for target-based methods.

@miguelriemoliveira
Member Author

As a note, I think we might run into a problem later down the line with the OpenCV calibration. The OpenCV method does not "accept" partial detections, and we only have 5 collections in our real riwmpbot dataset with non-partial detections of the hand_pattern.

I don't think we should worry about this for now and should instead focus on implementing these calibration alternatives, but it is worth keeping in mind so we can plan ahead.

Right. We have run into this limitation with other approaches before, for example when we used the OpenCV stereo camera calibration. Most of these methods use chessboards as patterns, and these do not support partial detections.
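
To make the limitation concrete: OpenCV's chessboard detector is all-or-nothing, so any view in which part of the board is occluded or out of frame yields no detection at all. A small sketch (hypothetical image path and assumed pattern size):

import cv2

image = cv2.imread('collection_000.png', cv2.IMREAD_GRAYSCALE)  # hypothetical image
pattern_size = (9, 6)  # inner chessboard corners (assumed)

found, corners = cv2.findChessboardCorners(image, pattern_size)
if found:
    # Optional sub-pixel refinement of the detected corners.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    corners = cv2.cornerSubPix(image, corners, (11, 11), (-1, -1), criteria)
else:
    # No partial result is available: this collection would simply be dropped.
    print('Chessboard not fully visible; detection discarded.')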

@Kazadhum
Collaborator

Right. We have run into this limitation with other approaches before, for example when we used the OpenCV stereo camera calibration. Most of these methods use chessboards as patterns, and these do not support partial detections.

I see. I think in order to test these alternatives during their implementation we can use datasets from simulated systems. Then, when they are working correctly, we can get new real datasets with more non-partial detections. What do you think?

@miguelriemoliveira
Member Author

I see. I think in order to test these alternatives during their implementation we can use datasets from simulated systems. Then, when they are working correctly, we can get new real datasets with more non-partial detections. What do you think?

That's it. For datasets (real or sim) that are meant to be used by other approaches for comparison we need to make sure we have "enough" non-partial detections.

@miguelriemoliveira
Member Author

Hi @Kazadhum and @manuelgitgomes ,

Please create an issue for each method you start working on, and add the issue number to the checklist above.

@miguelriemoliveira
Member Author

miguelriemoliveira commented Apr 1, 2024

About LiDAR-to-camera calibration, this is useful (#915).

@Kazadhum
Collaborator

Hi @miguelriemoliveira! Since I haven't worked on this for the past few weeks, I'm now checking that these methods work properly before running batch calibrations, starting with the OpenCV method for eye-to-hand calibration.

Running:
rosrun atom_evaluation cv_eye_to_hand.py -json $ATOM_DATASETS/riwmpbot_real/merged/dataset.json -c rgb_world -p hand_pattern -hl flange -bl base_link -ctgt -uic

we get:

Deleted collections: ['001', '036', '037', '038']: at least one detection by a camera should be present.
After filtering, will use 59 collections: ['000', '002', '003', '004', '005', '006', '007', '008', '009', '010', '011', '012', '013', '014', '015', '016', '017', '018', '019', '020', '021', '022', '023', '024', '025', '026', '027', '028', '029', '030', '031', '032', '033', '034', '035', '039', '040', '041', '042', '043', '044', '045', '046', '047', '048', '049', '050', '051', '052', '053', '054', '055', '056', '057', '058', '059', '060', '061', '062']
Selected collection key is 000
Ground Truth b_T_c=
[[ 0.     0.259 -0.966  0.95 ]
 [ 1.     0.     0.     0.35 ]
 [ 0.    -0.966 -0.259  0.8  ]
 [ 0.     0.     0.     1.   ]]
estimated b_T_c=
[[ 0.114  0.186 -0.976  0.912]
 [ 0.993 -0.057  0.105  0.328]
 [-0.036 -0.981 -0.191  0.79 ]
 [ 0.     0.     0.     1.   ]]
Etrans = 5.079 (mm)
Erot = 4.393 (deg)
+----------------------+-------------+---------+----------+-------------+------------+
|      Transform       | Description | Et0 [m] |  Et [m]  | Rrot0 [rad] | Erot [rad] |
+----------------------+-------------+---------+----------+-------------+------------+
| world-rgb_world_link |  rgb_world  |   0.0   | 0.019314 |     0.0     |  0.077225  |
+----------------------+-------------+---------+----------+-------------+------------+
Saved json output file to /home/diogo/atom_datasets/riwmpbot_real/merged/hand_eye_tsai_rgb_world.json.

I wanted your opinion on whether these values are plausible or whether they indicate that something is not working correctly. They seem a bit high to me, but then again, it is expected that ATOM yields better results...

For the record, this method yields good results in the simulated cases, so I'm inclined to believe these values are valid.
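
For context, the kind of comparison printed above can be computed from the two homogeneous matrices roughly as follows; this is a generic sketch of translation/rotation error metrics, not necessarily the exact formulas used by cv_eye_to_hand.py.

import numpy as np


def transform_errors(T_gt, T_est):
    # Translation error: distance between the two origins, reported in mm.
    etrans_mm = np.linalg.norm(T_gt[:3, 3] - T_est[:3, 3]) * 1000.0

    # Rotation error: angle of the relative rotation R_gt^T @ R_est, in degrees.
    R_delta = T_gt[:3, :3].T @ T_est[:3, :3]
    cos_angle = np.clip((np.trace(R_delta) - 1.0) / 2.0, -1.0, 1.0)
    erot_deg = np.degrees(np.arccos(cos_angle))

    return etrans_mm, erot_deg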

@miguelriemoliveira
Member Author

Hi @Kazadhum ,

My validation test for algorithms is using simulated data. If the algorithm works well with simulated data, then it should be OK. If the results on real data are not as good as expected, I would say that is due to the usual problems with real data.

So, bottom line, the results seem fine to me.

Kazadhum added a commit that referenced this issue Jun 13, 2024
@Kazadhum
Collaborator

Hi @miguelriemoliveira! That makes sense, thanks!

In that case, all the OpenCV methods work properly and return OK results in the calibration of the real riwmpbot. I'm now debugging the Li and Shah methods...
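
Presumably the "OpenCV methods" here are the five hand-eye solvers exposed by cv2.calibrateHandEye; a minimal sketch of running all of them on the same pose pairs (the pose lists are placeholders that would be filled from the dataset):

import cv2

# One rotation matrix / translation vector per collection (fill from the dataset).
R_gripper2base, t_gripper2base = [], []
R_target2cam, t_target2cam = [], []

methods = {
    'tsai': cv2.CALIB_HAND_EYE_TSAI,
    'park': cv2.CALIB_HAND_EYE_PARK,
    'horaud': cv2.CALIB_HAND_EYE_HORAUD,
    'andreff': cv2.CALIB_HAND_EYE_ANDREFF,
    'daniilidis': cv2.CALIB_HAND_EYE_DANIILIDIS,
}

for name, method in methods.items():
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base, R_target2cam, t_target2cam, method=method)
    print(name, t_cam2gripper.ravel())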

@Kazadhum
Collaborator

Kazadhum commented Jun 20, 2024

Hello @miguelriemoliveira and @manuelgitgomes!

I was running batch executions of the real riwmpbot calibrations using the 5 OpenCV methods and I noticed something about the process_results script.

This script assumes a certain structure for the CSV results files, which works great for ATOM calibrations but not so well for other calibrations. Namely, it doesn't work for CSV files that don't have a Collection # column or an Averages row.

So I can do two things. The first, and perhaps the most expeditious, would be to change the OpenCV calibration script to output a CSV results file that conforms to the ATOM structure, maybe with a single "Collection" and an "Averages" row. Personally, I don't think it makes sense to do this, but it would be a quick fix for this specific problem.

What I feel is the preferable solution here is to rework the process_results script so that it becomes agnostic to the specific structure of the CSV files.

What do you think, @miguelriemoliveira and @manuelgitgomes?

@Kazadhum
Collaborator

A small correction to my previous comment: it does not, in fact, need a row named "Averages", since we can specify the name of the required row with a flag.

Kazadhum added a commit that referenced this issue Jun 20, 2024
@Kazadhum
Collaborator

Kazadhum commented Jun 20, 2024

I got it to work by adding the flag:

ap.add_argument("-ctc", "--column_to_check", help="Name of the column to check for the average row. Default is 'Collection #', which is the case for ATOM result processing.",
                required=False, default='Collection #', type=str)

And I replaced the instances of 'Collection #' in the code with this argument. This works, but maybe some variable/argument renaming is warranted. I'll describe the batch execution runs and their results in a separate issue.
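
For illustration, this is roughly how such a flag can be used to locate the summary row without hard-coding 'Collection #'; it is a sketch of the idea, not the actual process_results code.

import argparse

import pandas as pd

ap = argparse.ArgumentParser()
ap.add_argument("-rf", "--results_file", required=True, type=str)
ap.add_argument("-rtc", "--row_to_check", required=False, default='Averages', type=str,
                help="Value identifying the summary row.")
ap.add_argument("-ctc", "--column_to_check", required=False, default='Collection #', type=str,
                help="Name of the column in which to look for the summary row.")
args = ap.parse_args()

df = pd.read_csv(args.results_file)
# Select the summary row by matching the configured column instead of a
# hard-coded 'Collection #'.
summary = df[df[args.column_to_check] == args.row_to_check]
print(summary)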

@miguelriemoliveira
Member Author

Great. Congrats!

@Kazadhum
Collaborator

Kazadhum commented Jun 20, 2024

Thank you @miguelriemoliveira! In the meantime, I realized that these results aren't representative of the actual errors, since the reprojection error for the real case of the riwmpbot_real system isn't implemented in the OpenCV calibration script. What I can do is run these experiments on the simulated system, whose results we will also need. I'll work on implementing these comparisons for the real system in the meantime.
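
For when that gets implemented, the reprojection error is typically computed by projecting the known pattern corners through the estimated pose and the camera intrinsics and comparing with the detected corners; a minimal sketch with illustrative variable names (not the calibration script's):

import cv2
import numpy as np


def reprojection_rmse(object_points, image_points, rvec, tvec, K, dist):
    # Project the 3D pattern corners into the image using the estimated pose.
    projected, _ = cv2.projectPoints(object_points, rvec, tvec, K, dist)
    residuals = projected.reshape(-1, 2) - image_points.reshape(-1, 2)
    # Root-mean-square of the per-corner pixel errors.
    return float(np.sqrt(np.mean(np.sum(residuals ** 2, axis=1))))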

@miguelriemoliveira
Copy link
Member Author

OK, sounds good.
