Hi, this is amazing work. I want to use this technique to segment real-world data that I captured myself. How can I achieve that? Is there a tutorial for preprocessing our own data?
Hello, it is possible to segment your own data with SA3D.
You can process your data with COLMAP to estimate the camera parameters, then store the processed data in the data structure introduced in our README. After that, you can follow our provided instructions to train and segment the NeRF.
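For reference, here is a minimal sketch of the standard COLMAP sparse-reconstruction pipeline, driven from Python. The `data/my_scene` path is a placeholder, and it assumes the `colmap` binary is on your PATH with your captured photos in an `images/` subfolder; adapt the layout to whatever the README expects.

```python
# Minimal sketch of COLMAP's sparse pipeline (feature extraction,
# matching, mapping). Paths are placeholders; colmap must be on PATH.
import subprocess
from pathlib import Path

scene = Path("data/my_scene")  # hypothetical scene directory
(scene / "sparse").mkdir(parents=True, exist_ok=True)

# 1. Extract SIFT features from every image into the database.
subprocess.run(["colmap", "feature_extractor",
                "--database_path", str(scene / "database.db"),
                "--image_path", str(scene / "images")], check=True)

# 2. Match features across all image pairs.
subprocess.run(["colmap", "exhaustive_matcher",
                "--database_path", str(scene / "database.db")], check=True)

# 3. Run incremental SfM to recover camera poses and a sparse point cloud.
subprocess.run(["colmap", "mapper",
                "--database_path", str(scene / "database.db"),
                "--image_path", str(scene / "images"),
                "--output_path", str(scene / "sparse")], check=True)
```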
Oh, thank you. Does this mean that if I use COLMAP to get transform.json, that is enough to train on my data? Do I need to downscale the images to get images_x? If so, how do I generate them, and what rules should I follow? Also, should I write a specific config file like fern.py so that training runs correctly?
Yes. The images_x folders are not necessary unless your images are too large.
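If your images are large, one simple way to produce a downscaled folder is the sketch below. The `images_4` name follows the `images_x` convention mentioned above (x = downscale factor); the paths are placeholders, and it requires Pillow.

```python
# Sketch: create images_4 with copies at 1/4 resolution.
# Paths and the factor are assumptions; adjust to your scene.
from pathlib import Path
from PIL import Image

factor = 4
src = Path("data/my_scene/images")           # placeholder path
dst = Path(f"data/my_scene/images_{factor}")
dst.mkdir(exist_ok=True)

for p in sorted(src.iterdir()):
    if p.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
        continue
    img = Image.open(p)
    # Integer division mirrors the usual 1/x-resolution convention.
    img = img.resize((img.width // factor, img.height // factor),
                     Image.Resampling.LANCZOS)
    img.save(dst / p.name)
```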
You do need a config file to train on your own data. If your images are forward-facing, check the configs for the llff dataset; if they are 360-degree, check the configs in nerf_unbounded.
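To illustrate, a config for a forward-facing scene might look roughly like the sketch below, modeled on the LLFF examples such as fern.py. The field names and values here are illustrative guesses, not the repo's exact schema; the safest route is to copy an existing config and change only the paths.

```python
# Hypothetical my_scene.py config, loosely modeled on fern.py.
# Field names are assumptions; start from a real config in the repo.
_base_ = '../default.py'

expname = 'my_scene'            # experiment name (placeholder)
basedir = './logs/llff'

data = dict(
    datadir='./data/my_scene',  # your COLMAP-processed scene
    dataset_type='llff',        # forward-facing; use the nerf_unbounded
                                # configs instead for 360-degree captures
    factor=4,                   # load images_4 instead of full resolution
)
```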