Merge pull request #530 from ModECI/development
Updated to v0.4.10; regenerated docs and example files
pgleeson authored Apr 29, 2024
2 parents 5fb8305 + abda25b commit 38ccbc9
Showing 90 changed files with 227,754 additions and 227,660 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -310,3 +310,4 @@ examples/TensorFlow/Keras/keras_to_MDF
/checkout_pngs.sh
/examples/TensorFlow/Keras/IRIS/keras_model_on_iris.keras
/examples/TensorFlow/Keras/MNIST/kr_N_model.keras
/checkoutpngs.sh
28 changes: 26 additions & 2 deletions docs/MDF_function_specifications.json
@@ -14,6 +14,30 @@
],
"expression_string": "A * (A > 0)"
},
"arccos": {
"description": "Inverse cosine function",
"arguments": [
"variable0",
"scale"
],
"expression_string": "scale * arccos(variable0)"
},
"arcsin": {
"description": "Inverse sine function",
"arguments": [
"variable0",
"scale"
],
"expression_string": "scale * arcsin(variable0)"
},
"arctan": {
"description": "Inverse tangent function",
"arguments": [
"variable0",
"scale"
],
"expression_string": "scale * arctan(variable0)"
},
"change_goal": {
"description": "Modifies the current goal buffer using the given pattern.",
"arguments": [
@@ -687,7 +711,7 @@
"expression_string": "onnx_ops.max(data_0)"
},
"onnx::MaxPool": {
"description": "\n MaxPool consumes an input tensor X and applies max pooling across\n the tensor according to kernel sizes, stride sizes, and pad lengths.\n max pooling consisting of computing the max on all values of a\n subset of the input tensor according to the kernel size and downsampling the\n data into the output tensor Y for further processing. The output spatial shape is calculated differently\n depending on whether explicit padding is used, where pads is employed, or auto padding is used, where auto_pad is utilized.\n With explicit padding (https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html?highlight=maxpool#torch.nn.MaxPool2d):\n ```\n output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - dilation[i] * (kernel_shape[i] - 1) - 1) / strides_spatial_shape[i] + 1)\n ```\n or\n ```\n output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - dilation[i] * (kernel_shape[i] - 1) - 1) / strides_spatial_shape[i] + 1)\n ```\n if ceil_mode is enabled. `pad_shape[i]` is the sum of pads along axis `i`.\n\n `auto_pad` is a DEPRECATED attribute. If you are using them currently, the output spatial shape will be following when ceil_mode is enabled:\n ```\n VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) + 1) / strides_spatial_shape[i])\n SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])\n ```\n or when ceil_mode is disabled (https://www.tensorflow.org/api_docs/python/tf/keras/layers/AveragePooling2D):\n ```\n VALID: output_spatial_shape[i] = floor((input_spatial_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1)) / strides_spatial_shape[i]) + 1\n SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = floor((input_spatial_shape[i] - 1) / strides_spatial_shape[i]) + 1\n ```\n And pad shape will be following if `SAME_UPPER` or `SAME_LOWER`:\n ```\n pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) - input_spatial_shape[i]\n ```\n The output of each pooling window is maximum number of elements exclude pad. \n ",
"description": "\n MaxPool consumes an input tensor X and applies max pooling across\n the tensor according to kernel sizes, stride sizes, and pad lengths.\n max pooling consisting of computing the max on all values of a\n subset of the input tensor according to the kernel size and downsampling the\n data into the output tensor Y for further processing. The output spatial shape is calculated differently\n depending on whether explicit padding is used, where pads is employed, or auto padding is used, where auto_pad is utilized.\n With explicit padding (https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html?highlight=maxpool#torch.nn.MaxPool2d):\n ```\n output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - dilation[i] * (kernel_shape[i] - 1) - 1) / strides_spatial_shape[i] + 1)\n ```\n or\n ```\n output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - dilation[i] * (kernel_shape[i] - 1) - 1) / strides_spatial_shape[i] + 1)\n ```\n if ceil_mode is enabled. `pad_shape[i]` is the sum of pads along axis `i`. Sliding windows that would start in the right padded region are ignored.\n\n `auto_pad` is a DEPRECATED attribute. If you are using them currently, the output spatial shape will be following when ceil_mode is enabled:\n ```\n VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) + 1) / strides_spatial_shape[i])\n SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])\n ```\n or when ceil_mode is disabled (https://www.tensorflow.org/api_docs/python/tf/keras/layers/AveragePooling2D):\n ```\n VALID: output_spatial_shape[i] = floor((input_spatial_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1)) / strides_spatial_shape[i]) + 1\n SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = floor((input_spatial_shape[i] - 1) / strides_spatial_shape[i]) + 1\n ```\n And pad shape will be following if `SAME_UPPER` or `SAME_LOWER`:\n ```\n pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) - input_spatial_shape[i]\n ```\n The output of each pooling window is maximum number of elements exclude pad. \n ",
"arguments": [
"X"
],
@@ -1335,7 +1359,7 @@
"expression_string": "onnx_ops.tile(input, repeats)"
},
"onnx::TopK": {
"description": "\nRetrieve the top-K largest or smallest elements along a specified axis. Given an input tensor of\nshape [a_1, a_2, ..., a_n, r] and integer argument k, return two outputs:\n\n* Value tensor of shape [a_1, a_2, ..., a_{axis-1}, k, a_{axis+1}, ... a_n]\n which contains the values of the top k elements along the specified axis\n* Index tensor of shape [a_1, a_2, ..., a_{axis-1}, k, a_{axis+1}, ... a_n] which\n contains the indices of the top k elements (original indices from the input\n tensor).\n\n* If \"largest\" is 1 (the default value) then the k largest elements are returned.\n* If \"sorted\" is 1 (the default value) then the resulting k elements will be sorted.\n* If \"sorted\" is 0, order of returned 'Values' and 'Indices' are undefined.\n\nGiven two equivalent values, this operator uses the indices along the axis as\na tiebreaker. That is, the element with the lower index will appear first.\n",
"description": "\nRetrieve the top-K largest or smallest elements along a specified axis. Given an input tensor of\nshape [a_0, a_1, ..., a_{n-1}] and integer argument k, return two outputs:\n\n* Value tensor of shape [a_0, a_1, ..., a_{axis-1}, k, a_{axis+1}, ... a_{n-1}]\n which contains the values of the top k elements along the specified axis\n* Index tensor of shape [a_0, a_1, ..., a_{axis-1}, k, a_{axis+1}, ... a_{n-1}] which\n contains the indices of the top k elements (original indices from the input\n tensor).\n\n* If \"largest\" is 1 (the default value) then the k largest elements are returned.\n* If \"sorted\" is 1 (the default value) then the resulting k elements will be sorted.\n* If \"sorted\" is 0, order of returned 'Values' and 'Indices' are undefined.\n\nGiven two equivalent values, this operator uses the indices along the axis as\na tiebreaker. That is, the element with the lower index will appear first.\n",
"arguments": [
"X",
"K"
49 changes: 38 additions & 11 deletions docs/MDF_function_specifications.md
@@ -5,6 +5,9 @@ These functions are defined in Python API module <a href="https://github.com/Mod

- <a href="#matmul">MatMul</a>
- <a href="#relu">Relu</a>
- <a href="#arccos">arccos</a>
- <a href="#arcsin">arcsin</a>
- <a href="#arctan">arctan</a>
- <a href="#changegoal">change_goal</a>
- <a href="#checktermination">check_termination</a>
- <a href="#chunktostring">chunk_to_string</a>
@@ -211,6 +214,30 @@ Python version: `A @ B`

Python version: `A * (A > 0)`

<a name="arccos"></a>

## arccos
<p><i>Inverse cosine function</i></p>
<p><b>arccos(variable0, scale)</b> = scale * arccos(variable0)</p>

Python version: `scale * numpy.arccos(variable0)`

<a name="arcsin"></a>

## arcsin
<p><i>Inverse sine function</i></p>
<p><b>arcsin(variable0, scale)</b> = scale * arcsin(variable0)</p>

Python version: `scale * numpy.arcsin(variable0)`

<a name="arctan"></a>

## arctan
<p><i>Inverse tangent function</i></p>
<p><b>arctan(variable0, scale)</b> = scale * arctan(variable0)</p>

Python version: `scale * numpy.arctan(variable0)`
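The three new inverse-trig specs share one shape: an input `variable0` scaled by a `scale` factor. As a sanity check, here is a minimal sketch of how their `expression_string`s evaluate with NumPy — the dispatch table and helper name are illustrative, not part of MDF's actual execution engine:

```python
import numpy as np

# Illustrative mini-evaluator for the new arccos/arcsin/arctan specs.
# Each expression_string has the form "scale * <fn>(variable0)".
ARC_FUNCS = {
    "arccos": np.arccos,
    "arcsin": np.arcsin,
    "arctan": np.arctan,
}

def evaluate_arc(name, variable0, scale=1.0):
    """Evaluate 'scale * fn(variable0)' element-wise."""
    return scale * ARC_FUNCS[name](np.asarray(variable0))
```

Inputs to `arccos`/`arcsin` must lie in [-1, 1], exactly as with the underlying NumPy functions.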

<a name="changegoal"></a>

## change_goal
@@ -320,7 +347,7 @@ Python version: `onnx_ops.abs(X)`
Calculates the arccosine (inverse of cosine) of the given input tensor, element-wise.
</i></p>

Python version: `onnx_ops.anumpy.cos(input)`
Python version: `onnx_ops.acos(input)`

<a href=https://onnx.ai/onnx/operators/onnx__Acos.html><i>ONNX Documentation</i></a>
<a name="acosh"></a>
@@ -330,7 +357,7 @@ Python version: `onnx_ops.anumpy.cos(input)`
Calculates the hyperbolic arccosine of the given input tensor element-wise.
</i></p>

Python version: `onnx_ops.anumpy.cosh(input)`
Python version: `onnx_ops.acosh(input)`

<a href=https://onnx.ai/onnx/operators/onnx__Acosh.html><i>ONNX Documentation</i></a>
<a name="add"></a>
@@ -397,7 +424,7 @@ Python version: `onnx_ops.argmin(data, axis, keepdims, select_last_index)`
Calculates the arcsine (inverse of sine) of the given input tensor, element-wise.
</i></p>

Python version: `onnx_ops.anumpy.sin(input)`
Python version: `onnx_ops.asin(input)`

<a href=https://onnx.ai/onnx/operators/onnx__Asin.html><i>ONNX Documentation</i></a>
<a name="asinh"></a>
@@ -407,7 +434,7 @@ Python version: `onnx_ops.anumpy.sin(input)`
Calculates the hyperbolic arcsine of the given input tensor element-wise.
</i></p>

Python version: `onnx_ops.anumpy.sinh(input)`
Python version: `onnx_ops.asinh(input)`

<a href=https://onnx.ai/onnx/operators/onnx__Asinh.html><i>ONNX Documentation</i></a>
<a name="atan"></a>
@@ -417,7 +444,7 @@ Python version: `onnx_ops.anumpy.sinh(input)`
Calculates the arctangent (inverse of tangent) of the given input tensor, element-wise.
</i></p>

Python version: `onnx_ops.anumpy.tan(input)`
Python version: `onnx_ops.atan(input)`

<a href=https://onnx.ai/onnx/operators/onnx__Atan.html><i>ONNX Documentation</i></a>
<a name="atanh"></a>
Expand All @@ -427,7 +454,7 @@ Python version: `onnx_ops.anumpy.tan(input)`
Calculates the hyperbolic arctangent of the given input tensor element-wise.
</i></p>

Python version: `onnx_ops.anumpy.tanh(input)`
Python version: `onnx_ops.atanh(input)`

<a href=https://onnx.ai/onnx/operators/onnx__Atanh.html><i>ONNX Documentation</i></a>
<a name="averagepool"></a>
@@ -1705,7 +1732,7 @@ Python version: `onnx_ops.max(data_0)`
```
output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - dilation[i] * (kernel_shape[i] - 1) - 1) / strides_spatial_shape[i] + 1)
```
if ceil_mode is enabled. `pad_shape[i]` is the sum of pads along axis `i`.
if ceil_mode is enabled. `pad_shape[i]` is the sum of pads along axis `i`. Sliding windows that would start in the right padded region are ignored.
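The explicit-padding formulas above can be sanity-checked with a small helper. This is a sketch of the arithmetic only; the function name and the flat begin/end `pads` layout follow ONNX convention and are assumptions here, not part of this commit:

```python
import math

def maxpool_output_shape(input_shape, kernel_shape, strides, pads,
                         dilations, ceil_mode=False):
    """Output spatial shape for explicit padding, per the formulas above."""
    # pads is flat: [begin_0, ..., begin_{n-1}, end_0, ..., end_{n-1}],
    # so pad_shape[i] (the sum of pads along axis i) is pads[i] + pads[i + n].
    n = len(input_shape)
    rnd = math.ceil if ceil_mode else math.floor
    out = []
    for i in range(n):
        pad_shape_i = pads[i] + pads[i + n]
        # dilation[i] * (kernel_shape[i] - 1) + 1 is the effective kernel extent
        effective_kernel = dilations[i] * (kernel_shape[i] - 1) + 1
        out.append(rnd((input_shape[i] + pad_shape_i - effective_kernel)
                       / strides[i]) + 1)
    return out
```

For a 4x4 input with a 2x2 kernel and stride 2, both rounding modes give a 2x2 output; for an odd input extent like 5, `ceil_mode` adds one extra window.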

`auto_pad` is a DEPRECATED attribute. If you are using them currently, the output spatial shape will be following when ceil_mode is enabled:
```
@@ -2464,7 +2491,7 @@ valid. Reduction over an empty set of values yields minus infinity (if supported
The above behavior is similar to numpy, with the exception that numpy defaults `keepdims`
to `False` instead of `True`.</i></p>

Python version: `onnx_ops.reducelogsumnumpy.exp(data, axes, keepdims)`
Python version: `onnx_ops.reducelogsumexp(data, axes, keepdims)`

<a href=https://onnx.ai/onnx/operators/onnx__ReduceLogSumExp.html><i>ONNX Documentation</i></a>
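The `keepdims` default of `True` (unlike NumPy's `False`) is the behavior worth illustrating. A NumPy sketch of the operator's semantics follows; the function name and the max-subtraction stability trick are illustrative assumptions, not the `onnx_ops` implementation:

```python
import numpy as np

def reduce_log_sum_exp(data, axes=None, keepdims=True):
    """log(sum(exp(data))) over the given axes; keepdims defaults to True."""
    # Subtract the per-slice max before exponentiating for numerical stability.
    m = np.max(data, axis=axes, keepdims=True)
    out = np.log(np.sum(np.exp(data - m), axis=axes, keepdims=True)) + m
    if not keepdims:
        out = np.squeeze(out, axis=axes)
    return out
```

With `keepdims=True` the reduced axes survive as size-1 dimensions, so the result still broadcasts against the input.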
<a name="reducemax"></a>
@@ -3431,11 +3458,11 @@ Python version: `onnx_ops.tile(input, repeats)`
## TopK
<p><i>
Retrieve the top-K largest or smallest elements along a specified axis. Given an input tensor of
shape [a_1, a_2, ..., a_n, r] and integer argument k, return two outputs:
shape [a_0, a_1, ..., a_{n-1}] and integer argument k, return two outputs:

* Value tensor of shape [a_1, a_2, ..., a_{axis-1}, k, a_{axis+1}, ... a_n]
* Value tensor of shape [a_0, a_1, ..., a_{axis-1}, k, a_{axis+1}, ... a_{n-1}]
which contains the values of the top k elements along the specified axis
* Index tensor of shape [a_1, a_2, ..., a_{axis-1}, k, a_{axis+1}, ... a_n] which
* Index tensor of shape [a_0, a_1, ..., a_{axis-1}, k, a_{axis+1}, ... a_{n-1}] which
contains the indices of the top k elements (original indices from the input
tensor).
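The lower-index-wins tiebreak described here falls out of a stable sort. A NumPy sketch of the semantics for `sorted == 1` (this helper is illustrative, not the `onnx_ops` implementation):

```python
import numpy as np

def topk(x, k, axis=-1, largest=True):
    """Top-k values and their original indices along an axis.

    A stable argsort guarantees equal values keep ascending original
    indices, i.e. the element with the lower index appears first.
    """
    key = -x if largest else x
    order = np.argsort(key, axis=axis, kind="stable")
    idx = np.take(order, np.arange(k), axis=axis)   # first k sorted positions
    values = np.take_along_axis(x, idx, axis=axis)  # gather original values
    return values, idx
```

For `[1, 3, 3, 2]` with `k=2`, the tied 3s are returned with indices 1 then 2, matching the tiebreak rule.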

45 changes: 32 additions & 13 deletions docs/MDF_function_specifications.yaml
@@ -9,6 +9,24 @@ Relu:
arguments:
- A
expression_string: A * (A > 0)
arccos:
description: Inverse cosine function
arguments:
- variable0
- scale
expression_string: scale * arccos(variable0)
arcsin:
description: Inverse sine function
arguments:
- variable0
- scale
expression_string: scale * arcsin(variable0)
arctan:
description: Inverse tangent function
arguments:
- variable0
- scale
expression_string: scale * arctan(variable0)
change_goal:
description: Modifies the current goal buffer using the given pattern.
arguments:
@@ -1650,7 +1668,8 @@ onnx::MaxPool:
\ 1)\n ```\n or\n ```\n output_spatial_shape[i] = ceil((input_spatial_shape[i]\
\ + pad_shape[i] - dilation[i] * (kernel_shape[i] - 1) - 1) / strides_spatial_shape[i]\
\ + 1)\n ```\n if ceil_mode is enabled. `pad_shape[i]` is the sum of pads\
\ along axis `i`.\n\n `auto_pad` is a DEPRECATED attribute. If you are using\
\ along axis `i`. Sliding windows that would start in the right padded region\
\ are ignored.\n\n `auto_pad` is a DEPRECATED attribute. If you are using\
\ them currently, the output spatial shape will be following when ceil_mode\
\ is enabled:\n ```\n VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i]\
\ - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) + 1) / strides_spatial_shape[i])\n\
@@ -3461,18 +3480,18 @@ onnx::Tile:
expression_string: onnx_ops.tile(input, repeats)
onnx::TopK:
description: "\nRetrieve the top-K largest or smallest elements along a specified\
\ axis. Given an input tensor of\nshape [a_1, a_2, ..., a_n, r] and integer\
\ argument k, return two outputs:\n\n* Value tensor of shape [a_1, a_2, ...,\
\ a_{axis-1}, k, a_{axis+1}, ... a_n]\n which contains the values of the\
\ top k elements along the specified axis\n* Index tensor of shape [a_1, a_2,\
\ ..., a_{axis-1}, k, a_{axis+1}, ... a_n] which\n contains the indices of\
\ the top k elements (original indices from the input\n tensor).\n\n* If\
\ \"largest\" is 1 (the default value) then the k largest elements are returned.\n\
* If \"sorted\" is 1 (the default value) then the resulting k elements will\
\ be sorted.\n* If \"sorted\" is 0, order of returned 'Values' and 'Indices'\
\ are undefined.\n\nGiven two equivalent values, this operator uses the indices\
\ along the axis as\na tiebreaker. That is, the element with the lower index\
\ will appear first.\n"
\ axis. Given an input tensor of\nshape [a_0, a_1, ..., a_{n-1}] and integer\
\ argument k, return two outputs:\n\n* Value tensor of shape [a_0, a_1, ...,\
\ a_{axis-1}, k, a_{axis+1}, ... a_{n-1}]\n which contains the values of\
\ the top k elements along the specified axis\n* Index tensor of shape [a_0,\
\ a_1, ..., a_{axis-1}, k, a_{axis+1}, ... a_{n-1}] which\n contains the\
\ indices of the top k elements (original indices from the input\n tensor).\n\
\n* If \"largest\" is 1 (the default value) then the k largest elements are\
\ returned.\n* If \"sorted\" is 1 (the default value) then the resulting k\
\ elements will be sorted.\n* If \"sorted\" is 0, order of returned 'Values'\
\ and 'Indices' are undefined.\n\nGiven two equivalent values, this operator\
\ uses the indices along the axis as\na tiebreaker. That is, the element with\
\ the lower index will appear first.\n"
arguments:
- X
- K
2 changes: 1 addition & 1 deletion docs/MDF_specification.json
@@ -166,7 +166,7 @@
},
"function": {
"type": "Union[str, NoneType]",
"description": "Which of the in-build MDF functions (linear etc.) this uses, See"
"description": "Which of the in-build MDF functions (linear etc.) this uses, See\nhttps://mdf.readthedocs.io/en/latest/api/MDF_function_specifications.html"
},
"args": {
"type": "Union[Any, NoneType]",
4 changes: 3 additions & 1 deletion docs/MDF_specification.yaml
@@ -168,8 +168,10 @@ specification:
of time are seconds.
function:
type: Union[str, NoneType]
description: Which of the in-build MDF functions (linear etc.) this
description: 'Which of the in-build MDF functions (linear etc.) this
uses, See
https://mdf.readthedocs.io/en/latest/api/MDF_function_specifications.html'
args:
type: Union[Any, NoneType]
description: 'Dictionary of values for each of the arguments for the
3 changes: 2 additions & 1 deletion docs/README.md
@@ -286,7 +286,8 @@ inbuilt <b>function</b> with <b>args</b>; 4) or change from a <b>default_initial
<tr>
<td><b>function</b></td>
<td>Union[str, NoneType]</td>
<td><i>Which of the in-build MDF functions (linear etc.) this uses, See</i></td>
<td><i>Which of the in-build MDF functions (linear etc.) this uses, See
https://mdf.readthedocs.io/en/latest/api/MDF<a href="#function">function</a>specifications.html</i></td>
</tr>


