diff --git a/docs/markdown/eddl_progress.md b/docs/markdown/eddl_progress.md
index b26b3310e..a5decb412 100644
--- a/docs/markdown/eddl_progress.md
+++ b/docs/markdown/eddl_progress.md
@@ -19,7 +19,7 @@
 | Input | ✔️ | ✔️ | ✔️ | Used to instantiate a EDDL tensor. |
 | Reshape | ✔️ | ✔️ | ✔️ | Reshapes an output to a certain shape. |
 | Permute | ✔️ | ✔️ | ✔️ | Permutes the dimensions of the input according to a given pattern. |
-| Embedding | ❌️ | ❌️ | ❌️ | Turns positive integers (indexes) into dense vectors of fixed size; (also known as mapping). e.g. `[[4], [20]] -> [[0.25, 0.1], [0.6, -0.2]]` |
+| Embedding | ✔️ | ✔️ | ❌️ | Turns positive integers (indexes) into dense vectors of fixed size; (also known as mapping). e.g. `[[4], [20]] -> [[0.25, 0.1], [0.6, -0.2]]` |
 | Transpose | ✔️ | ✔️ | ❌️ | Permute the last two dimensions |
@@ -174,8 +174,8 @@ Apply data transformations with random parametrization.
 | Functionality | CPU | GPU | ONNX | Comments |
 | ------------- |------| -----| ------|---------|
 | GRU | ❌️ | ❌️ | ❌️ | Gated Recurrent Unit - Cho et al. 2014. |
-| LSTM | ❌️ | ❌️ | ❌️ | Long Short-Term Memory layer - Hochreiter 1997. |
-| RNN | ❌️ | ❌️ | ❌️ | Fully-connected RNN where the output is to be fed back to input. |
+| LSTM | ✔️ | ✔️ | ❌️ | Long Short-Term Memory layer - Hochreiter 1997. |
+| RNN | ✔️ | ✔️ | ❌️ | Fully-connected RNN where the output is to be fed back to input. |

 ## Regularizer layers
@@ -227,6 +227,7 @@ Apply data transformations with random parametrization.
 | MSLE | ❌️ | ❌️ | Mean Squared Logarithmic Error |
 | Min | ✔️ | ✔️ | Minimum Error |
 | Hinge | ❌️ | ❌️ | Hinge Error |
+| Dice | ✔️ | ✔️ | Dice loss |
 | SoftCrossEntropy | ✔️ | ✔️ | Soft-Categorical Cross-Entropy Error |
@@ -241,6 +242,7 @@ Apply data transformations with random parametrization.
 | MAE | ✔️ | ✔️ | Mean Absolute Error |
 | MRE | ✔️ | ✔️ | Mean Relative Error |
 | Sum | ✔️ | ✔️ | Sum Error |
+| Dice | ✔️ | ✔️ | Dice error |

 # Optimizers
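The patch above marks a Dice loss/metric as implemented on CPU and GPU. As a point of reference for reviewers, the soft Dice loss is commonly defined as `1 - 2*|X ∩ Y| / (|X| + |Y|)`, i.e. one minus twice the overlap between prediction and target, normalized by their total mass. The following is a minimal pure-Python sketch of that standard formulation, not EDDL's actual C++ implementation; the function name, `eps` smoothing term, and flat-list inputs are illustrative assumptions.

```python
def dice_loss(y_true, y_pred, eps=1e-7):
    """Soft Dice loss over flat lists of per-element scores in [0, 1].

    Returns 1 - (2 * intersection + eps) / (sum(y_true) + sum(y_pred) + eps).
    The small `eps` (an assumption, not necessarily EDDL's choice) avoids
    division by zero when both inputs are all zeros.
    """
    intersection = sum(t * p for t, p in zip(y_true, y_pred))
    return 1.0 - (2.0 * intersection + eps) / (sum(y_true) + sum(y_pred) + eps)
```

A perfect prediction yields a loss near 0, and fully disjoint prediction and target yield a loss near 1, which is why Dice is popular for imbalanced segmentation targets.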