Evaluation scripts #78

Open · Zhao-Yian opened this issue Jun 14, 2024 · 7 comments

@Zhao-Yian commented Jun 14, 2024

Could evaluation scripts be provided for the different datasets, to validate the quantitative results reported in the paper?

@Jumpat (Owner) commented Jun 14, 2024

Hi, the evaluation script is incompatible with this GUI version, which went through a code refactoring and dropped many of the interfaces needed for evaluation.

The evaluation mainly involves loading the different datasets, such as SPIn-NeRF and NVOS; the metric (IoU) calculation itself is relatively easy. We may rewrite it in the future.
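For anyone reimplementing it, a minimal sketch of a per-view IoU on binary masks (this assumes masks are NumPy arrays; the function name is illustrative, not this repo's API):

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU between two binary masks of the same shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # If both masks are empty, treat the prediction as a perfect match.
    return float(intersection / union) if union > 0 else 1.0
```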

@Zhao-Yian (Author)

Thanks!

@Zhao-Yian (Author)

Sorry to ask again: which reference views and target views are used for evaluation on the SPIn-NeRF dataset, and what was the basis for the selection?

@Jumpat (Owner) commented Jun 15, 2024

Hi, the reference view is set to the first frame of the sorted views. However, the method is robust to reference view selection since the segmentation target is relatively simple.

@Zhao-Yian (Author)

Thank you very much for your answer. May I ask whether the target views used for evaluation are all views except the first frame, and whether the IoU of each scene is the average IoU over these target views?

@Jumpat (Owner) commented Jun 16, 2024

No, the target views generally include the reference view. Although the reference view has a ground-truth mask to serve as the reference, the segmentation cannot guarantee that the final result aligns with that initial 2D mask, so it is still meaningful to check whether the reference view itself is segmented properly.

The IoU score is calculated across all views jointly, not as (IoU_1 + ... + IoU_N) / N. There is a small difference, since (a/b + c/d) / 2 != (a + c) / (b + d); we use the latter implementation.
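A minimal sketch of that aggregate computation, under the same binary-NumPy-mask assumption as above (names are illustrative):

```python
import numpy as np

def scene_iou(pred_masks: list[np.ndarray], gt_masks: list[np.ndarray]) -> float:
    """Aggregate IoU: total intersection over total union across all views,
    i.e. (a + c) / (b + d), rather than the mean of per-view IoUs."""
    intersection = sum(np.logical_and(p, g).sum()
                       for p, g in zip(pred_masks, gt_masks))
    union = sum(np.logical_or(p, g).sum()
                for p, g in zip(pred_masks, gt_masks))
    return float(intersection / union) if union > 0 else 1.0
```

The two variants differ whenever the per-view union sizes differ: the aggregate score effectively weights each view by the size of its union, instead of weighting every view equally.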

@Zhao-Yian (Author)

Thank you very much for your answer!
