Hi! Thanks for your great work!
I found that during training, the mesh vertex tensor sometimes receives a very large gradient, and the gradient value usually takes the form 2^n (maybe 4^n, I'm not sure) for some integer n.
To reproduce, just add a breakpoint that triggers on large gradients and run the example command. The program will then stop at the breakpoint when a large gradient occurs.
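Something like the following gradient hook works as such a breakpoint. This is a minimal sketch, not the exact snippet from this report; the `verts` tensor and the 1e3 threshold are illustrative placeholders.

```python
import torch

# Break into the debugger whenever the vertex gradient blows up.
def break_on_large_grad(grad):
    if grad.abs().max() > 1e3:  # threshold is an arbitrary placeholder
        breakpoint()  # inspect `grad` here; the outliers look like exact powers of 2
    return grad

verts = torch.randn(100, 3, requires_grad=True)  # stands in for the mesh vertex tensor
verts.register_hook(break_on_large_grad)
# The hook fires on every backward pass that reaches `verts`.
```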
Receiving such a large gradient is harmful when parametrizing the SDF with an MLP, since the MLP collapses after the optimizer step.
I've tested on Windows 10 with MSVC 14.35.32215 and torch 2.0+cu11.8 / torch 1.13.0+cu11.6. I didn't test on CUDA 11.3, since I haven't found a way to install the corresponding version of tinycudann on Windows.
Any advice? Thanks!
That's a very interesting observation! I also very often run into the issue that no mesh can be extracted. You can try adding clip_grad_norm, which clips the gradient to a maximum value:
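For example (a minimal sketch of the suggestion; the model, optimizer, and the max_norm value of 1.0 are placeholders for your own training setup):

```python
import torch

model = torch.nn.Linear(3, 1)  # stands in for the SDF MLP
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

loss = model(torch.randn(8, 3)).sum()
loss.backward()
# Clip the total gradient norm before stepping, so a single huge
# gradient cannot collapse the network weights.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
```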