Commit

update review comments
divyashreepathihalli committed Nov 9, 2023
1 parent d7cd46a commit a1a11c1
Showing 1 changed file with 63 additions and 59 deletions.
122 changes: 63 additions & 59 deletions guides/keras_core/migration_guide_tf_keras_3_0.py
"""
Title: Migration guide: TensorFlow-only Keras 2 code to multi-backend Keras 3
Author: [Divyashree Sreepathihalli](https://github.com/divyashreepathihalli)
Date created: 2023/10/23
Last modified: 2023/10/30
Description: Instructions on migrating your Keras 2 code to multi-backend Keras 3.
Accelerator: None
"""

"""
This guide will help you migrate TensorFlow-only Keras 2 code to multi-backend Keras
3 code. The overhead for the migration is minimal. Once you have migrated,
you can run Keras workflows on top of JAX, TensorFlow, or PyTorch.
Keras 3 is intended to be a drop-in replacement for Keras 2, with near-full backwards
compatibility with `tf.keras` code when using the TensorFlow backend. In most cases
you can just start importing it via `import keras` in place of
`from tensorflow import keras`, and your existing code will run with no issue, and
generally with slightly improved performance, thanks to XLA compilation.
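As a minimal sketch of what this drop-in swap looks like in practice (the layer sizes here are arbitrary, for illustration only):

```python
# Keras 2 style (TensorFlow-only):
# from tensorflow import keras

# Keras 3 style; the rest of the code stays the same:
import keras

# Build a tiny model exactly as you would have with tf.keras.
model = keras.Sequential([keras.layers.Dense(4, activation="relu")])
print(keras.__version__)
```

Only the import line changes; the model-building code itself is untouched.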
Commonly encountered issues and frequently asked questions can be found at
the following links.
First, let's install keras-nightly.
This example uses the TensorFlow backend (`os.environ["KERAS_BACKEND"] = "tensorflow"`).
After you've migrated your code, you can change the `"tensorflow"` string to `"torch"`
and click "Restart runtime", and your code will run on the PyTorch backend!
"""

"""shell
pip install keras-nightly
"""

import os
import numpy as np

"""
## Switching TensorFlow code to Keras 3.0 - TensorFlow backend
Follow these instructions to migrate your existing TensorFlow code to Keras 3.0 and run
it with the TensorFlow backend:
"""

"""
## Migration incompatibilities: Keras 2 to Keras 3
Keras 3 is a significant milestone in the evolution of the Keras API. It features a
number of cleanups and modernizations that have resulted in a few breaking changes
compared to Keras 2. All APIs that were removed were dropped due to low usage.
The following list provides a comprehensive overview of the breaking changes in Keras 3.
While the majority of these changes are unlikely to affect most users, a small number of
JIT compilation can improve the performance of some models. However, it may not work with
all TensorFlow operations. If you are using a custom model or layer and you see an
XLA-related error, you may need to set the `jit_compile` argument to `False`. Here is a
list of [known issues](https://www.tensorflow.org/xla/known_issues) encountered when
using XLA with the TensorFlow backend. In addition to these issues, there are some
ops that are not supported by XLA.
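If you hit such an XLA-related error, a minimal sketch of the workaround is to disable JIT compilation when compiling the model (the toy model below is illustrative; its layer sizes are arbitrary):

```python
import keras

# A toy model standing in for one whose ops are not XLA-compatible.
model = keras.Sequential([keras.Input(shape=(4,)), keras.layers.Dense(1)])

# jit_compile=False tells Keras not to wrap the train/eval step in XLA.
model.compile(optimizer="adam", loss="mse", jit_compile=False)
```
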
The error message you could encounter would be as follows:
```python
Detected unsupported operations when trying to compile graph
__inference_one_step_on_data_125[] on XLA_CPU_JIT
```
The following snippet of code will reproduce the above error:
```
```python
class MyModel(keras.Model):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
The error message you could encounter would be as follows:
```python
ValueError: Invalid filepath extension for saving. Please add either a `.keras` extension
for the native Keras format (recommended) or a `.h5` extension. Use
`tf.saved_model.save()` if you want to export a SavedModel for use with
The following snippet of code will reproduce the above error:
```python
sequential_model = keras.Sequential([
keras.layers.Dense(2)
])
Loading a TF SavedModel file via `keras.models.load_model()` is no longer supported.
If you try to use `keras.models.load_model()` with a TF SavedModel, you will get the
following error:
```python
ValueError: File format not supported: filepath=saved_model. Keras 3 only supports V3
`.keras` files and legacy H5 format files (`.h5` extension). Note that the legacy
SavedModel format is not supported by `load_model()` in Keras 3. In order to reload a
The following snippet of code will reproduce the above error:
```python
keras.models.load_model("saved_model")
```
"""
"""
### Nested inputs to Model()
Model() can no longer be passed deeply nested inputs/outputs (nested more than 1 level
deep, e.g. lists of lists of tensors).
You would encounter errors as follows:
```python
ValueError: When providing `inputs` as a dict, all values in the dict must be
KerasTensors. Received: inputs={'foo': <KerasTensor shape=(None, 1), dtype=float32,
sparse=None, name=foo>, 'bar': {'baz': <KerasTensor shape=(None, 1), dtype=float32,
The following snippet of code will reproduce the above error:
```python
inputs = {
"foo": keras.Input(shape=(1,), name="foo"),
"bar": {
layers. In Keras 3, it is not. This means you may have to use cond ops if you're using
control flow, or alternatively you can decorate your `call()` method with `@tf.function`.
You would encounter an error as follows:
```python
OperatorNotAllowedInGraphError: Exception encountered when calling MyCustomLayer.call().
Using a symbolic `tf.Tensor` as a Python `bool` is not allowed. You can attempt the
```
The following snippet of code will reproduce the above error:
```python
class MyCustomLayer(keras.layers.Layer):
def call(self, inputs):
The error you would encounter would be as follows:
```python
ValueError: A KerasTensor cannot be used as input to a TensorFlow function. A KerasTensor
is a symbolic placeholder for a shape and dtype, used when constructing Keras Functional
models or Keras Functions. You can only use it as input to a Keras layer or a Keras
The following snippet of code will reproduce the error:
```python
input = keras.layers.Input([2, 2, 1])
tf.squeeze(input)
```
keras.ops.squeeze(input)

"""
### Multi-output model

The `evaluate()` method of a multi-output model no longer returns individual output
losses separately. Instead, you should use the `metrics` argument in the `compile()`
method to keep track of these losses.

When dealing with multiple named outputs, such as `output_a` and `output_b`, the old
`tf.keras` would include `<output_a>_loss`, `<output_b>_loss`, and similar entries in
the metrics. However, in Keras 3.0, these entries are not automatically added to the
metrics. They must be explicitly provided in the `metrics` list for each individual
output.
The following snippet of code will reproduce the above behavior:
```python
from keras.layers import Input, Dense, Flatten, Softmax
# A functional model with multiple outputs
inputs = Input(shape=(10,))
The following snippet of code will show that the `tf.Variable`s are not being tracked.
```python
class MyCustomLayer(keras.layers.Layer):
def __init__(self, units):
super().__init__()
The following snippet of code will reproduce the error.
```python
class CustomLayer(keras.layers.Layer):
def __init__(self):
super().__init__()
1. Symbolic `add_loss()`: Symbolic `add_loss()` is removed (you can still use
`add_loss()` inside the `call()` method of a layer/model).
2. Locally connected layers: Locally connected layers are removed due to low usage. To
use locally connected layers, copy the layer implementation into your own codebase.
3. Kernelized layers: Kernelized layers are removed due to low usage. To use kernelized
layers, copy the layer implementation into your own codebase.
4. Removed layer attributes: Layer attributes `metrics`, `dynamic` are removed.
5. RNN layer args: The `constants` and `time_major` arguments in RNN layers are removed.
The `constants` argument was a remnant of Theano and had low usage. The `time_major`
argument was also infrequently used.
6. `reset_metrics` argument: The `reset_metrics` argument is removed from `model.*_on_batch`
methods. This argument had low usage.
7. RadialConstraint: The RadialConstraint constraint object is removed. This object had
low usage.
"""

"""
## Switching TensorFlow code to backend-agnostic Keras 3.0
Keras 3.0 code with the TensorFlow backend will work with native TensorFlow APIs.
However, if you want your code to be backend-agnostic, you will need to replace all of
Follow these instructions to migrate your existing TensorFlow code to Keras 3.0 and run
it with any backend of your choice:
1. Update imports: replace `from tensorflow import keras` with `import keras`.
2. Update code: replace `tf.keras.*` with `keras.*`. 99% of the `tf.keras.*` API is
consistent with Keras 3.0. Any differences have been called out in this guide. If an API
is not specifically called out in this guide, the API call is consistent with `tf.keras`.
If the same API name results in an error and it has not been called out in this document,
the implementation in Keras 3.0 was likely dropped due to low usage.
3. Replace any `tf.*`, `tf.math.*`, `tf.linalg.*`, etc. with `keras.ops.*`. Most of the ops
should be consistent with Keras 3.0. If the names are slightly different, they are
highlighted in this guide. If the same name results in an error and you do not find a
Keras 3.0 equivalent op in this guide, the implementation of the op in Keras 3.0 was
likely dropped due to low usage.
4. If you are able to replace all the tf ops with Keras ops, you can remove the `tensorflow`
import and run your code with a backend of your choice.
"""

should be consistent with Keras 3.0. If the names are slightly different, they are
highlighted in this guide. If the same name results in an error and you do not find a
Keras 3.0 equivalent op in this guide, the implementation of the op in Keras 3.0 was
likely dropped due to low usage.
### Numpy ops
