import numpy as np
import tensorflow as tf
import tensorflow_model_optimization as tfmot


def create_model():
    model = tf.keras.models.Sequential()
    # For the model to later get converted, batch_size and sequence_length should be fixed.
    # E.g., batch_input_shape=[None, 1] will throw an error.
    # This is just a limitation when using RNNs. E.g., for FC or CNN we can have batch_size=None.
    model.add(tf.keras.layers.Embedding(
        input_dim=5,
        output_dim=1,
        batch_input_shape=[1, 1]
    ))
    model.add(tf.keras.layers.LSTM(
        units=1,
        return_sequences=False,
        stateful=False
    ))
    model.add(tf.keras.layers.Dense(5))
    return model


model = create_model()
model.summary()
model.save("/content/model/")

representative_data = np.random.randint(0, 5, (200, 1)).astype(np.float32)


def representative_dataset():
    for sample in representative_data:
        sample = np.expand_dims(sample, axis=0)  # batch_size = 1
        yield [sample]  # set sample as first (and only) input of the model


# float16 quantization
converter = tf.lite.TFLiteConverter.from_saved_model("/content/model/")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]

# kernel runs out of memory and crashes in the following line
tflite_quant_model = converter.convert()
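Two notes on the snippet above. First, representative_dataset is defined but never attached to the converter; float16 weight quantization does not require a calibration dataset, so the crash is independent of it. Second, one avenue worth trying (a sketch only, not a confirmed fix for this crash) is letting the converter fall back to TensorFlow ops instead of forcing the LSTM through builtin-op fusion:

# Unverified workaround sketch: allow SELECT_TF_OPS fallback so the LSTM
# does not have to be fused into TFLite builtin ops during conversion.
converter = tf.lite.TFLiteConverter.from_saved_model("/content/model/")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # default TFLite ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TF ops where needed
]
tflite_quant_model = converter.convert()

Note that a model converted with SELECT_TF_OPS requires the Flex delegate at runtime.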
I have also encountered this problem, using TensorFlow 2.12.1 on my system. Non-optimized conversion works fine with the LSTM, but float16 optimization causes my kernel to crash repeatedly.
No matter the size of the LSTM model, converting it with float16 optimization runs out of memory.
Code to reproduce the issue: the snippet above, runnable on Google Colab.
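For reference, the baseline path that works, per the comments above, is converting the same SavedModel with no optimizations. A minimal sketch, assuming the model was saved to /content/model/ as in the snippet (the output file name is arbitrary):

import tensorflow as tf

# Plain float32 conversion with no optimizations; per the reports above,
# this path completes without crashing.
converter = tf.lite.TFLiteConverter.from_saved_model("/content/model/")
tflite_model = converter.convert()

with open("model_fp32.tflite", "wb") as f:
    f.write(tflite_model)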