The error samples are due to issues with the ground truth annotations rather than errors in the model predictions. #24
Comments
Hello @libenchong, Thank you again for your valuable feedback. Yes, I've noticed some errors in that regard but never did a thorough investigation like you did. That would make a great case study, I think. Best regards,
Hello @libenchong, Great observation!

In most benchmarks, the ground truth (GT) typically includes images located within a 25-meter radius of the query, based on GPS coordinates. However, since Mapillary images are primarily crowdsourced from various sources such as phones and dashboard cameras, mapillary_sls often exhibits a considerable amount of noise, as demonstrated in the example you provided. Although the model's prediction appears to be close to the query (judging by the door on the right), the noisy coordinates associated with the image indicate a distance greater than 25 meters, resulting in an inaccurate label.

I would propose manual verification when creating a benchmark, specifically for positive matches within the 25-40 meter range from the query, particularly when we are relying solely on GPS coordinates that are prone to a lot of noise. This way, we can ensure a more precise evaluation of the model's performance.

Additionally, it's worth noting that these label errors apply to all evaluated techniques and are likely to affect their performance in a similar manner, which helps maintain fairness in the evaluation process.

Again, thank you for your valuable observations.
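For anyone who wants to reproduce this kind of check, here is a minimal sketch of the distance logic described above. It assumes query and database GPS coordinates are available as (lat, lon) pairs in degrees; the function names and the NumPy-based haversine implementation are illustrative, not code from this repository. The 25 m threshold matches the usual GT radius, and the 25-40 m band is the range proposed above for manual verification.

```python
import numpy as np

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters; inputs are degrees and broadcast."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2.0) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2.0) ** 2)
    return 2.0 * EARTH_RADIUS_M * np.arcsin(np.sqrt(a))

def split_positives(query_gps, db_gps, hard_m=25.0, soft_m=40.0):
    """Split database images for one query into confident GT positives
    (closer than hard_m) and borderline matches (hard_m..soft_m) that
    would be sent to manual verification instead of auto-labeling."""
    d = haversine_m(query_gps[0], query_gps[1], db_gps[:, 0], db_gps[:, 1])
    confident = np.where(d < hard_m)[0]
    borderline = np.where((d >= hard_m) & (d <= soft_m))[0]
    return confident, borderline

# Hypothetical usage with made-up coordinates:
query = np.array([40.4406, -79.9959])                      # (lat, lon)
db = np.array([[40.4407, -79.9960], [40.4409, -79.9965]])  # (N, 2)
confident, borderline = split_positives(query, db)
```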
Hello @libenchong, I'm retrying this project, but I can't find the GT for pitts30k. Could you give me a link to download the GT for pitts30k? Thank you very much!
Hello @huachaoguang, you can refer to another project by the MixVPR author called gsv-cities, where you can find the GT files you need in the datasets folder. Additionally, several better-performing VPR methods were published at this year's CVPR, including another paper by the MixVPR author titled Bag-of-Queries.
Hello @libenchong, thank you for your reply. The other day I found a way to resolve the error. I also want to try the newer methods like Bag-of-Queries and AnyLoc.
Hello @amaralibey, I have selected the failure samples from the MSLS validation set, pitts30k test set, and pitts250k test set where the recall@1 of the MixVPR model failed. I found that for a large portion of these error samples, the top-1 images retrieved by the model are actually correct, but they are not in the ground truth. In other words, these error samples are caused by problems with the ground truth, not by errors in the model's predictions, and this is not something that comparing image similarity can solve.
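To make the selection procedure concrete, here is a minimal sketch of how such failure samples could be collected. It assumes `predictions` holds the retrieved database indices per query (most similar first) and `ground_truth` lists the GT positive indices per query, as is common in VPR evaluation scripts; the names are hypothetical and not taken from this repository.

```python
import numpy as np

def recall1_failures(predictions, ground_truth):
    """Indices of queries whose top-1 retrieved image is not among the GT
    positives -- exactly the samples worth inspecting by hand, since some
    are visually correct matches that the noisy GT simply omits.
    predictions: (num_queries, k) integer array of database indices.
    ground_truth: sequence of 1-D arrays of GT positive indices per query."""
    return [q for q, retrieved in enumerate(predictions)
            if retrieved[0] not in ground_truth[q]]
```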