System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow.js): a modified version of the example from the @tensorflow/tfjs npm page (https://www.npmjs.com/package/@tensorflow/tfjs)
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS 16.6.1
Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
TensorFlow.js installed from (npm or script link): script link
Describe the current behavior
After loading and using a LayersModel, I call model.dispose() and tf.disposeVariables() to release the TensorFlow.js memory. However, if I then reload the model to use it again, memory is leaked: at least 16K of Array data each time around the loop.
Describe the expected behavior
I would not expect a memory leak; I would expect it to behave the same as if the model were simply reused.
Running Test1 (dispose and reload on each iteration): comparing sample 2 with sample 1 shows that the number of "Array" objects has increased by about 16K.
Steps to demonstrate model reuse (Test2) with minimal memory growth:
Load the page in Chrome and open DevTools (or do a page reload).
Click the "Run Test2" button.
Garbage collect and take a memory sample.
Click the "Run Test2" button again.
Garbage collect and take another memory sample.
Comparing sample 2 with sample 1 shows that the number of "Array" objects has increased by only about 96.
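For reference, the two procedures presumably look something like the sketch below. This is a minimal sketch, not the actual test page: MODEL_URL and the input shape are placeholders, and `tf` is assumed to be the global provided by the script-tag build of @tensorflow/tfjs (matching the install method above).

```javascript
// Hypothetical placeholders -- the real test page uses its own model and input shape.
const MODEL_URL = 'https://example.com/model.json';

// Test1: dispose and reload the model on every iteration (leaks ~16K Arrays per loop).
async function runTest1(iterations) {
  for (let i = 0; i < iterations; i++) {
    const model = await tf.loadLayersModel(MODEL_URL);
    const out = tf.tidy(() => model.predict(tf.zeros([1, 224, 224, 3])));
    out.dispose();
    model.dispose();       // frees the model's tensors
    tf.disposeVariables(); // frees any registered variables
  }
}

// Test2: load once and reuse (only ~96 extra Arrays across iterations).
async function runTest2(iterations) {
  const model = await tf.loadLayersModel(MODEL_URL);
  for (let i = 0; i < iterations; i++) {
    const out = tf.tidy(() => model.predict(tf.zeros([1, 224, 224, 3])));
    out.dispose();
  }
  model.dispose();
  tf.disposeVariables();
}

// Only invoke in a page where the TensorFlow.js script tag has been loaded.
if (typeof tf !== 'undefined') {
  runTest1(5).then(() => console.log(tf.memory()));
}
```

tf.memory() reports the tensor counts tracked by the backend, which is why the heap snapshots (rather than tf.memory()) are needed to see the plain-JavaScript Array growth.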
Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
Hi,
It's not the number of tensors left on the GPU that's the issue; it's the number of JavaScript Array objects that are not garbage collected.
This image shows the procedure for Test1 holding onto 16K Array objects.
This image shows the procedure for Test2 holding onto only 96 Array objects; the only difference is that Test2 does not dispose of and reload the model on each iteration.
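One general way plain JavaScript arrays can outlive model.dispose() is if some module-level cache still references them, since dispose only releases backend tensors. The sketch below is purely illustrative: it is not TensorFlow.js source code and does not claim to be the actual cause here; it only shows the retention pattern that produces this kind of heap-snapshot signature.

```javascript
// Illustrative retention pattern (hypothetical, not tfjs internals):
// a module-level registry keeps weight arrays reachable after dispose().
const registry = new Map(); // stands in for any global cache keyed by model id
let nextId = 0;

function loadFakeModel() {
  const id = nextId++;
  registry.set(id, new Array(16 * 1024).fill(0)); // ~16K array entries per load
  return {
    id,
    dispose() { /* frees backend resources, but never clears the registry */ },
  };
}

function disposeAndClear(model) {
  model.dispose();
  registry.delete(model.id); // the extra step that lets GC reclaim the arrays
}

// Reload loop as in Test1: without clearing, the registry grows each iteration.
for (let i = 0; i < 3; i++) loadFakeModel().dispose();
console.log(registry.size); // 3 -- three weight arrays still reachable

// With an explicit clear per iteration, nothing accumulates.
registry.clear();
for (let i = 0; i < 3; i++) disposeAndClear(loadFakeModel());
console.log(registry.size); // 0 -- nothing retained
```

In a heap snapshot, the retained arrays in the first loop would show up exactly as growing "Array" counts that survive garbage collection.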
Standalone code to reproduce the issue
The page at https://vykingsneakerkitnative.s3.eu-central-1.amazonaws.com/SteveTest/tmp/tf-leak-test.html demonstrates the problem.