Can we prune pre-trained model like VGG16 etc... using this optimization library #40
By pruning the whole model, you don't get to specify which layers you want to prune; it is only necessary to prune layers that have a high number of trainable parameters. In my code, I only prune the pointwise convolutional layers, as they contain 76% of the model's parameters.
Lastly, I forgot to mention: you will want to use keras from tensorflow.python.
I also forgot to mention: you want to initialize the sequential model as tf.keras.Sequential(), not keras.Sequential().
I'm using the tf 2.0.0 library and get the same error.
My code looks like this:
In general, yes you can. There are some caveats, e.g. the lack of subclassed-model support and of nesting models within models, as in both examples (tejalal@ and Cospel@). Created #155 in light of this, to make subclassed support better.
Thank you @alanchiao. Most models nowadays are subclassed or nested; it would be very useful if we could prune them.
@nutsiepully, @raziel for visibility
We understand the need. The caveat is that going the subclass route basically diminishes the usability of the Keras abstractions we are using. Our suggestion, for now, would be to abstract some of the subclass logic into Keras layers and then apply pruning in the same manner as we currently do for the built-in layers. @nutsiepully wdyt? Do we have an example to point folks to?
Sorry, I seem to have missed this issue. For now, as @raziel suggested, the best approach is to apply pruning on a per-layer basis. You can choose the layers most important to you and prune just those. For parts of your model that are purely custom, you can use the
Thank you @s36srini for sharing the code. It works well for MobileNet, but it fails for MobileNetV2, because we cannot
Hello, so what's the correct way of getting past the above error?
I get the same error when I try to define the layers I need for pruning EfficientNet-B6. What I do is the following:
It gives me an error. I peeked at the code of EfficientNet, and it looks like it's using the Functional API. Could mixing the Sequential API with the Functional API result in errors like these?
Hi, I have a trained frozen model. Is it possible to prune it? Any references would be a great help. Thanks.
Hi everyone :) Is this expected behaviour at all for nested models? I would think that if any layer in a model has that wrapper, then it will be pruned when the pruning callback is called in the training phase. Unfortunately, this does not happen. Instead, everything not nested (that has a pruning wrapper) does prune, and anything inside a nested model does not. I can also confirm that if I create a model with no nested models at all, then everything I set to prune does in fact prune the way it should. Side note: if anyone has a solution or workaround for this, that would seriously be very helpful. Thank you.
I tried to create a model like:
and when I call
it generates an error:
```
Please initialize `Prune` with a supported layer. Layers should either be a
`PrunableLayer` instance, or should be supported by the PruneRegistry.
You passed: <class 'tensorflow.python.keras.engine.sequential.Sequential'>
```