ATOM alternative methods comparisons #891
Hi @miguelriemoliveira and @manuelgitgomes! Quick question about the first of these alternatives to ATOM: Livox is a targetless calibration method. As such, can we really compare it to ATOM and, perhaps more importantly, does it even make sense to compare these results? I assume targetless calibration is used for different situations than what we usually have with ATOM (at least most targetless calibration papers I've read dealt with situations where no patterns were available, like roadside LiDARs).
As a note, I think we might encounter a problem later down the line with the OpenCV calibration. The OpenCV method does not "accept" partial detections, and we only have 5 collections in our real riwmpbot dataset with non-partial detections of the pattern. I don't think we should worry about this for now and should instead focus on implementing these calibration alternatives, but it might be good to keep in mind so we can plan ahead. Tagging @miguelriemoliveira and @manuelgitgomes for visibility.
Hi @Kazadhum,
You are right. It does not make sense to compare with targetless approaches; or, at least, we can confidently say that at this stage we should first spend some time searching for target-based methods.
Right. We have already hit this limitation with other approaches, for example when we used the OpenCV stereo camera calibration. Most of these methods use chessboards as patterns, and chessboard detection does not support partial detections.
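To make this concrete, here is a minimal sketch of the kind of filtering this implies: drop every collection whose detection is partial before handing data to an OpenCV-style calibration. The dataset key names (`collections`, `labels`, `detected`, `idxs`) follow the usual ATOM JSON layout but should be treated as assumptions here.

```python
import json

# Minimal sketch (dataset key names are assumptions): keep only collections in
# which the full pattern was detected, since chessboard-based pipelines such as
# OpenCV's stereo calibration cannot consume partial detections.
def filter_complete_detections(dataset_path, sensor, n_corners):
    with open(dataset_path) as f:
        dataset = json.load(f)
    complete = {}
    for key, collection in dataset['collections'].items():
        label = collection['labels'][sensor]
        # A detection is usable only if every corner of the pattern was labeled.
        if label['detected'] and len(label['idxs']) == n_corners:
            complete[key] = collection
    return complete
```

Counting the surviving collections up front would also tell us early whether a dataset has "enough" non-partial detections to be usable for comparison.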
I see. I think in order to test these alternatives during their implementation we can use datasets from simulated systems. Then, when they are working correctly, we can get new real datasets with more non-partial detections. What do you think?
That's it. For datasets (real or simulated) that are meant to be used by other approaches for comparison, we need to make sure we have "enough" non-partial detections.
Hi @Kazadhum and @manuelgitgomes, please create an issue for each method you start working on, and add the issue number to the checklist above.
About LiDAR-to-camera calibration, this is useful (#915).
Hi @miguelriemoliveira! Since I haven't worked on this for the past few weeks, I'm now checking that these methods work properly before running batch calibrations, starting with the OpenCV method for eye-to-hand calibration.
I wanted your opinion on whether the values we get when running it are plausible, or whether they are indicative of something not working correctly. They seem a bit high to me, but then it is expected that ATOM yields better results... For the record, this method yields good results in the simulated cases, so I'm inclined to believe these values are valid.
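For reference, a hedged sketch of how the eye-to-hand case is typically set up with OpenCV is shown below. Per the OpenCV documentation, `cv2.calibrateHandEye` solves the eye-in-hand problem, and the eye-to-hand configuration is obtained by feeding it the inverted gripper poses; variable names and the choice of method flag are illustrative.

```python
import cv2
import numpy as np

# For eye-TO-hand (camera fixed in the base frame, pattern on the gripper),
# OpenCV's hand-eye routine is fed the *inverted* gripper poses, and the
# result is then interpreted as the camera pose in the robot base frame.
def eye_to_hand(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
    R_base2gripper, t_base2gripper = [], []
    for R, t in zip(R_gripper2base, t_gripper2base):
        R_inv = R.T                       # invert the rotation
        R_base2gripper.append(R_inv)
        t_base2gripper.append(-R_inv @ t) # invert the translation
    R_cam2base, t_cam2base = cv2.calibrateHandEye(
        R_base2gripper, t_base2gripper,
        R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI)
    return R_cam2base, t_cam2base
```

Trying more than one method flag (e.g. `CALIB_HAND_EYE_PARK`, `CALIB_HAND_EYE_DANIILIDIS`) is a cheap sanity check: if the estimates disagree wildly, that usually points at bad input poses rather than at the solver.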
Hi @Kazadhum, my validation test for algorithms is to use simulated data. If the algorithm works well with simulated data, then it should be OK. If the results on real data are not as good as expected, I would say that's due to the usual problems with real data. So, bottom line, the results seem fine to me.
Hi @miguelriemoliveira! That makes sense, thanks! In that case, all the OpenCV methods work properly and return OK results in the calibration of the real riwmpbot dataset.
Hello @miguelriemoliveira and @manuelgitgomes! I was running batch executions on the real riwmpbot dataset and ran into a problem with the process_results script. This script assumes a certain structure for the CSV results files, which works great with the ATOM calibration, but not so much with other calibrations. Namely, it doesn't work for CSV files that don't have an "Averages" row. So I can do two things. The first, and perhaps the most expeditious, would be to change the OpenCV calibration script to output a CSV results file which conforms to the ATOM structure, maybe with a single "Collection" and an "Averages" row. Personally, I don't think it makes sense to do this, but it is a quick fix to this specific problem. What I feel is the preferable solution here is to rework the process_results script. What do you think, @miguelriemoliveira and @manuelgitgomes?
A small correction to my previous comment: it does not, in fact, need to have a row with the name "Averages", since we can specify the name of the needed row with a flag.
I got it to work by adding the flag (see atom/atom_batch_execution/scripts/process_results, lines 58 to 59 at f25c302).
And I changed the corresponding instances in the script.
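For anyone reaching for the same fix, the relevant logic amounts to selecting the summary row by a configurable name instead of hard-coding "Averages". A minimal sketch with pandas, assuming the results CSV is indexed by its first column (the actual process_results implementation may differ):

```python
import pandas as pd

# Sketch: select the summary row by a configurable name, mirroring what a
# row-name flag would do, instead of hard-coding "Averages".
def get_summary_row(csv_path, row_name='Averages'):
    df = pd.read_csv(csv_path, index_col=0)
    if row_name not in df.index:
        raise ValueError(f'{csv_path} has no row named {row_name!r}')
    return df.loc[row_name]
```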
Great. Congrats!
Thank you @miguelriemoliveira! In the meantime I realized that these results aren't representative of the actual errors, since the reprojection error for the real riwmpbot case...
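For context, the reprojection error under discussion is typically computed along these lines; the sketch below is illustrative and not necessarily the exact metric used by ATOM's evaluation scripts.

```python
import cv2
import numpy as np

# Illustrative RMS reprojection error: project the pattern's 3D corners with
# the estimated pose and intrinsics, then compare against the detected 2D
# corners. All variable names are illustrative.
def rms_reprojection_error(obj_pts, img_pts, rvec, tvec, K, dist):
    projected, _ = cv2.projectPoints(obj_pts, rvec, tvec, K, dist)
    residuals = projected.reshape(-1, 2) - img_pts.reshape(-1, 2)
    return float(np.sqrt((residuals ** 2).sum(axis=1).mean()))
```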
OK, sounds good. |
The idea is to develop several alternative comparisons using implementations from the state of the art "converted" to ATOM, i.e., implementations that generate an ATOM dataset which can be evaluated the same way our calibrations are (a rough sketch of this conversion follows the list below).
Some alternatives:
Please add more that you can think of.
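As for the "conversion" itself, the minimum each alternative needs to produce is an ATOM dataset with its estimated transforms written in, so our evaluation scripts can score it unchanged. A rough sketch, assuming an ATOM-style JSON layout with per-collection transforms stored as translation plus quaternion (all key names are illustrative assumptions):

```python
import json

# Sketch under assumptions about the ATOM dataset layout: write an externally
# estimated sensor pose into every collection so the standard evaluation
# scripts can score it like one of our own calibrations.
def inject_transform(dataset_path, out_path, parent, child, trans, quat):
    with open(dataset_path) as f:
        dataset = json.load(f)
    key = parent + '-' + child
    for collection in dataset['collections'].values():
        collection['transforms'][key] = {'trans': list(trans),
                                         'quat': list(quat)}
    with open(out_path, 'w') as f:
        json.dump(dataset, f, indent=2)
```

Whatever the exact layout, keeping this conversion as a thin, separate script per method should make it easy to add new alternatives to the checklist.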