Out of memory when testing the pruned model #19
What GPU are you using?
Have you solved this problem? I found that in finetune.py, the two backward operations at lines 172 and 174 double memory usage twice, increasing my usage from 3200 MB to 7000 MB and then to 11000 MB. The first increase happens while computing the pruning plan, so the gradients calculated there are useless during finetuning, but I haven't found any way to clear that gradient cache.
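One way to release that gradient cache after the pruning-plan backward pass (a sketch, not code from this repo; the model here is a hypothetical stand-in) is to drop each parameter's `.grad` tensor and then ask the caching allocator to return the freed blocks:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the network built in finetune.py.
model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))

# First backward pass: used only to rank filters for the pruning plan.
loss = model(torch.randn(3, 8)).sum()
loss.backward()

# Drop the gradients accumulated for the pruning plan so they do not
# linger in GPU memory during finetuning.
for p in model.parameters():
    p.grad = None  # on newer PyTorch, model.zero_grad(set_to_none=True) does the same

if torch.cuda.is_available():
    # Return cached, now-unused blocks to the driver (only matters on GPU).
    torch.cuda.empty_cache()
```

Note that `empty_cache()` alone does nothing while the `.grad` tensors are still referenced; the gradients have to be dropped first.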
Hi @Tianxiaomo, can you tell me the command to test the model? I can't find it. Thanks.
@CodePlay2016 I am facing an almost identical out-of-memory problem. Could you comment on this? Do you have any actual, working countermeasure so far?
I use 4× Tesla K80 (12 GB each). While pruning, the training and its test ran normally, but testing the pruned model runs out of GPU memory:
```
THCudaCheck FAIL file=/pytorch/aten/src/THC/generic/THCStorage.cu line=58 error=2 : out of memory
Traceback (most recent call last):
  File "/home/b418-xiwei/.pycharm_helpers/pydev/pydevd.py", line 1664, in <module>
    main()
  File "/home/b418-xiwei/.pycharm_helpers/pydev/pydevd.py", line 1658, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "/home/b418-xiwei/.pycharm_helpers/pydev/pydevd.py", line 1068, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/home/b418-xiwei/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/home/b418-xiwei/hgh/prune/finetune.py", line 343, in <module>
    fine_tuner.prune()
  File "/home/b418-xiwei/hgh/prune/finetune.py", line 267, in prune
    self.test()
  File "/home/b418-xiwei/hgh/prune/finetune.py", line 187, in test
    output = model(Variable(batch))
  File "/home/b418-xiwei/anaconda3/envs/distiller/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/b418-xiwei/hgh/prune/finetune.py", line 68, in forward
    x = self.features(x)
  File "/home/b418-xiwei/anaconda3/envs/distiller/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/b418-xiwei/anaconda3/envs/distiller/lib/python3.6/site-packages/torch/nn/modules/container.py", line 91, in forward
    input = module(input)
  File "/home/b418-xiwei/anaconda3/envs/distiller/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/b418-xiwei/anaconda3/envs/distiller/lib/python3.6/site-packages/torch/nn/modules/pooling.py", line 142, in forward
    self.return_indices)
  File "/home/b418-xiwei/anaconda3/envs/distiller/lib/python3.6/site-packages/torch/nn/functional.py", line 360, in max_pool2d
    ret = torch._C._nn.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
RuntimeError: cuda runtime error (2) : out of memory at /pytorch/aten/src/THC/generic/THCStorage.cu:58
```
I use batch_size=16, so the batch itself is not large.
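A common cause of running out of memory only at test time is that the forward pass in `test()` runs with autograd enabled, so every intermediate activation is kept for a backward pass that never happens. A minimal sketch of an evaluation loop that avoids this (the model, shapes, and loop here are placeholders, not the actual finetune.py code):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the pruned network.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 32 * 32, 10),
)
model.eval()  # inference-mode behavior for dropout/batchnorm layers

correct = 0
total = 0
with torch.no_grad():  # no autograd graph: activations are freed immediately
    for _ in range(2):  # stand-in for iterating over the real test loader
        batch = torch.randn(16, 3, 32, 32)       # batch_size=16, as in the report
        labels = torch.randint(0, 10, (16,))
        output = model(batch)
        correct += (output.argmax(dim=1) == labels).sum().item()
        total += labels.numel()

accuracy = correct / total
```

On the very old PyTorch that still uses `Variable` (as in the traceback above), the equivalent was `Variable(batch, volatile=True)`; on 0.4 and later, `torch.no_grad()` is the idiomatic form.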