
Can we test this model on real-world data made by ourselves? #59

Open
wzq20030207 opened this issue Feb 22, 2024 · 3 comments

Comments

@wzq20030207

Hi, this is amazing work. I want to use this technique to segment real-world data that I captured myself. How can I achieve this? Is there any tutorial for preprocessing our own data?

@Jumpat
Owner

Jumpat commented Feb 23, 2024

Hello, it is possible to segment your own data with SA3D.

You can try processing your data with COLMAP to estimate the camera parameters and store the result following the data structure introduced in our README. Then you can follow the provided instructions to train and segment the NeRF.
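
For reference, a rough sketch of the COLMAP step (assuming COLMAP is installed and on your PATH, and your raw images sit in a folder such as `data/my_scene/images`; the paths and scene name are placeholders) could look like this:

```python
# Minimal sketch of pose estimation with COLMAP. Paths are placeholders;
# adapt them to your own layout.
import subprocess
from pathlib import Path

scene = Path("data/my_scene")          # hypothetical scene directory
db = scene / "database.db"
images = scene / "images"
sparse = scene / "sparse"
sparse.mkdir(parents=True, exist_ok=True)

# 1. Detect SIFT features in every image.
subprocess.run(["colmap", "feature_extractor",
                "--database_path", str(db),
                "--image_path", str(images)], check=True)

# 2. Match features across all image pairs.
subprocess.run(["colmap", "exhaustive_matcher",
                "--database_path", str(db)], check=True)

# 3. Sparse reconstruction: estimates camera intrinsics and per-image poses.
subprocess.run(["colmap", "mapper",
                "--database_path", str(db),
                "--image_path", str(images),
                "--output_path", str(sparse)], check=True)
```

If your scene follows the llff layout, the COLMAP output is typically converted to `poses_bounds.npy` (for example with LLFF's `imgs2poses.py` script); the README describes the exact files SA3D expects.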

@wzq20030207
Author

Oh, thank you. Does this mean that if I use COLMAP to get transform.json, that is enough to train on my own data? Do I need to downscale the images to get images_x? If so, how do I generate them, and what rules should I follow? Also, should I write a specific config file like fern.py so that training runs correctly?

@Jumpat
Owner

Jumpat commented Feb 24, 2024

Yes. The images_x folders are not necessary unless your images are too large.
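
If you do need them, one common approach (assuming the usual LLFF naming convention, where `images_4` holds the same images downscaled by a factor of 4) is a small resize script like the sketch below; the paths and factor are placeholders:

```python
# Sketch for producing a downscaled images_4 folder, assuming the LLFF-style
# convention where images_<f> contains the originals downscaled by f.
from pathlib import Path
from PIL import Image

factor = 4                                    # downscale factor (assumption)
src = Path("data/my_scene/images")            # hypothetical source folder
dst = Path(f"data/my_scene/images_{factor}")
dst.mkdir(exist_ok=True)

for p in sorted(src.iterdir()):
    if p.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
        continue
    img = Image.open(p)
    w, h = img.size
    # High-quality downsampling; keeps the original filename.
    img.resize((w // factor, h // factor), Image.LANCZOS).save(dst / p.name)
```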

You do need a config file to train on your own data. If your images are forward-facing, check the configs for the llff dataset; if they are 360-degree, check the configs in nerf_unbounded.
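
As a hypothetical starting point, a config for a forward-facing scene could simply inherit the llff defaults and point at your own data directory. The field names below are patterned after the existing llff configs; please verify them against the actual fern.py in the repo before training:

```python
# configs/llff/my_scene.py -- hypothetical config patterned after fern.py.
# Field names follow the existing llff configs; double-check them against
# fern.py in this repo, since the exact keys depend on the config format.
_base_ = '../default.py'

expname = 'my_scene'
basedir = './logs/llff'

data = dict(
    datadir='./data/my_scene',   # folder holding images/ and poses_bounds.npy
    dataset_type='llff',
    ndc=True,                    # forward-facing scenes use the NDC parameterization
    factor=4,                    # use images_4 if the originals are large
)
```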
