
Bad result on custom dataset #34

Open
hongju91 opened this issue May 10, 2024 · 5 comments

Comments

hongju91 commented May 10, 2024

Hi, I am trying to run the model on my own dataset, which is similar to the NHR dataset: 80 2K cameras surrounding a person. I found that the metrics stop improving after only 3 or 4 epochs, and in the results every point appears very large; each point and the space between them is clearly visible. Could you give some suggestions on what might be causing the problem? A bad calibration or wrong config parameters? Thanks a lot for your help.

(screenshot attached: 微信图片_20240510183532)

dendenxu (Member) commented

Hi, it looks like the number of points is too small. We typically use around 260,000 (26w) points per frame for such datasets.
You can try lowering the voxel size during visual hull reconstruction to get a more dense initial point cloud estimation.
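To illustrate why a smaller voxel size produces a denser initial point cloud, here is a toy numpy sketch of visual-hull-style carving (not the repo's actual extraction script; the masks here are synthetic disks standing in for real silhouette masks, and `visual_hull_points` is a hypothetical name):

```python
import numpy as np

def visual_hull_points(voxel_size, bound=1.0, radius=0.5):
    """Keep voxel centers whose projections land inside every mask."""
    # Build a regular voxel grid over [-bound, bound]^3, sampling centers.
    axis = np.arange(-bound, bound, voxel_size) + voxel_size / 2
    x, y, z = np.meshgrid(axis, axis, axis, indexing='ij')
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)

    # Toy "masks": a sphere seen from three orthographic axis-aligned
    # views projects to a disk in each view; carve away voxels that
    # fall outside any mask, keeping the intersection (the visual hull).
    inside = np.ones(len(pts), dtype=bool)
    for drop_axis in range(3):  # drop one coordinate per view
        proj = np.delete(pts, drop_axis, axis=1)
        inside &= (proj ** 2).sum(-1) <= radius ** 2
    return pts[inside]

coarse = visual_hull_points(voxel_size=0.05)
fine = visual_hull_points(voxel_size=0.025)
# Halving the voxel size yields roughly 8x the points,
# since point count scales with 1 / voxel_size^3.
print(len(coarse), len(fine))
```

The cubic scaling is the key point: a modest reduction in voxel size is usually enough to move from a few thousand points to the ~260k-per-frame range, at the cost of more memory during carving.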

dendenxu (Member) commented May 22, 2024

The process for extracting vhulls is detailed here.

hongju91 (Author) commented

Thanks, I will try that and see if I can get better results. Also, are there any suggestions on how to capture the images, for instance the lighting conditions, the percentage of the image occupied by the human, etc.? Thank you!


dendenxu (Member) commented

Typically, what matters more on the dataset side is the quality of camera synchronization and camera calibration parameters.
The lighting conditions and the ratio of human pixels depend more on the image quality produced by your camera sensors. As long as the lighting is consistent across the multiple views, it shouldn't impact the reconstruction too much. As for the ratio of human pixels, it should be as large as possible if you're only interested in rendering the foreground humans and objects.
