Zau batch preparations #987
Comments
Hi @brunofavs! I see you have the rgb-rgb evaluations working. What results have you gotten for these?
Hi @brunofavs, I would not go with a makefile, but if it works for you, great.
Hey @Kazadhum, I'm first testing which evaluation scripts are working before drawing conclusions. I haven't gotten results to show yet.
The RGB-to-LiDAR evaluation is working. #988 fixed it.
Nice. Thanks.
Hey @miguelriemoliveira @Kazadhum! I have a lot of results to show. We should meet soon. I have a personal matter tomorrow morning and classes on Tuesday afternoon; other than that, I'm free all week. I have results from every evaluation for RNM values between 1 and 0.00005. I have a lot of raw data, so until the meeting I'm working on the best method to visualize it.
Hi @brunofavs! Could you post the results from the evaluations you ran for RNM 0.125?
Thank you! I can meet anytime to discuss these!
Hi @brunofavs, thanks for the results. Can we meet tomorrow morning? 9h is fine by me...
Hey @miguelriemoliveira, tomorrow at 9 AM is great.
Tomorrow at 9 AM sounds good!
Hey @miguelriemoliveira @Kazadhum :) Turns out my plotting code was failing because I aligned the data CSV before our meeting to make it more readable and saved it by accident, which changed the whitespace in the header. Oh well...
Anyway, this is the heatmap I wanted to show you:
[heatmap image]
The color represents the error: a darker square means higher error. It's hard to pick the best RNM from this alone, but we can see the influence RNM has on each evaluation.
And this is the plot I showed:
[plot image]
And this is the pivot table I showed:
[pivot table image]
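For context, a minimal sketch of this kind of pivot-and-heatmap code, not the actual script: it assumes a results CSV with hypothetical columns `evaluation`, `rnm`, and `error`, and it strips whitespace from the header so a re-aligned CSV does not break the column lookups (the failure mode described above):

```python
# Minimal sketch, NOT the actual plotting code: assumes a results CSV
# with hypothetical columns "evaluation", "rnm" and "error".
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("results.csv")
# Strip stray whitespace from the header so a re-aligned/re-saved CSV
# still parses.
df.columns = df.columns.str.strip()

# Pivot: one row per evaluation type, one column per RNM value.
pivot = df.pivot_table(index="evaluation", columns="rnm", values="error")

# Heatmap where a darker cell means higher error.
fig, ax = plt.subplots()
im = ax.imshow(pivot.values, cmap="Greys")
ax.set_xticks(range(len(pivot.columns)), labels=[str(c) for c in pivot.columns])
ax.set_yticks(range(len(pivot.index)), labels=list(pivot.index))
ax.set_xlabel("RNM")
fig.colorbar(im, ax=ax, label="error")
plt.show()
```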
Thanks @brunofavs. Nice plot. I think the conclusions we drew during the meeting remain the same, so keep up the good work.
Closing as we've moved to the next phase already. |
Hey @miguelriemoliveira @Kazadhum,
Since our last meeting I'd been meaning to open an issue summing up the tasks we planned to do, but I forgot. This is a follow-up to #986.
- Write an organized script to manually test dataset splitting, calibration, and evaluation.
I ditched the one-bash-script-for-everything idea we were working on for something more structured and easier to interpret.
So I tinkered and came up with a makefile I'm pretty happy with (sketched below):
With this makefile, one can choose to run the whole pipeline, just all the evaluations, or just a certain type of evaluation. All the environment variables declared at the beginning are available to every target in the makefile (including the scripts called within it). Each variable can also be overridden at runtime when calling the makefile, for example:
`make RNM=0.0001 calibrate`
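Since the snippet itself isn't reproduced above, here is a minimal sketch of a makefile with the structure described; the variable names, default paths, and wrapper-script names are hypothetical stand-ins, not the real ones:

```makefile
# Minimal sketch (hypothetical paths and script names) of the batch
# pipeline makefile described above. Variables declared here are
# exported to every target and to the scripts those targets call;
# any of them can be overridden on the command line, e.g.:
#   make RNM=0.0001 calibrate

RNM        ?= 0.125
TRAIN_JSON ?= $(HOME)/datasets/zau/train.json
TEST_JSON  ?= $(HOME)/datasets/zau/test.json
export RNM TRAIN_JSON TEST_JSON

.PHONY: all split calibrate evaluations rgb_evals lidar_evals

all: split calibrate evaluations    # the whole pipeline

split:                              # split the dataset into train/test
	./split_dataset.sh $(TRAIN_JSON) $(TEST_JSON)

calibrate:                          # run the calibration on the train set
	./run_calibration.sh $(TRAIN_JSON)

evaluations: rgb_evals lidar_evals  # all evaluations

rgb_evals:                          # just the rgb-to-rgb evaluations
	./rgb_to_rgb_eval.sh $(TRAIN_JSON) $(TEST_JSON)

lidar_evals:                        # just the rgb-to-lidar evaluations
	./rgb_to_lidar_eval.sh $(TRAIN_JSON) $(TEST_JSON)
```

Because the variables are exported, a single `make RNM=0.0001 calibrate` changes the value both for the target's recipe and for any script it calls.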
For the individual evaluations, I made two simple scripts that each take two arguments, to keep the makefile less cluttered.
The other script, for rgb_to_lidar_eval, is a sibling of the one shown here (see the sketch below). I didn't make similar scripts for `calibrate` and `split` because the code would not be repeated and it would hurt readability IMO. This method is also easy to extend, so I think it's the best middle ground between doing everything manually and doing full batches.
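For illustration, a minimal sketch of what one of those two-argument wrappers could look like; the argument names are assumptions, and the underlying evaluation command is a placeholder, not the real entry point:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of one of the two-argument evaluation wrappers.
set -euo pipefail

if [ "$#" -ne 2 ]; then
    echo "Usage: $0 <train_json> <test_json>" >&2
    exit 1
fi

train_json=$1
test_json=$2

# RNM is exported by the makefile; default it so the script also
# works standalone.
: "${RNM:=0.125}"

# Placeholder for the actual evaluation entry point.
python3 rgb_to_rgb_evaluation.py \
    --train_json "$train_json" \
    --test_json "$test_json"
```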
- Annotate the filtered dataset.
This is done for `rgb_body_left`, `rgb_body_right`, `rgbd_hand_color` and `rgbd_hand_color`.
- Test each type of evaluation to assess whether it is working as intended.