Auto-generating sphinx docs
pytorchbot committed Sep 7, 2024
1 parent 4c58433 commit 45f3a1a
Showing 7 changed files with 35 additions and 35 deletions.
26 changes: 13 additions & 13 deletions main/_sources/tutorials/export-to-executorch-tutorial.rst.txt
@@ -585,27 +585,27 @@ there is only one program, it will by default be saved to the name "forward".
 
 .. code-block:: none
 
-    Edge Dialect graph of encode
+    Edge Dialect graph of decode
     ExportedProgram:
         class GraphModule(torch.nn.Module):
-            def forward(self, x: "f32[1, 10]"):
-                # File: /pytorch/executorch/docs/source/tutorials_source/export-to-executorch-tutorial.py:261 in forward, code: return torch.nn.functional.linear(x, torch.randn(5, 10))
-                aten_randn_default: "f32[5, 10]" = executorch_exir_dialects_edge__ops_aten_randn_default([5, 10], device = device(type='cpu'), pin_memory = False)
-                aten_permute_copy_default: "f32[10, 5]" = executorch_exir_dialects_edge__ops_aten_permute_copy_default(aten_randn_default, [1, 0]); aten_randn_default = None
-                aten_mm_default: "f32[1, 5]" = executorch_exir_dialects_edge__ops_aten_mm_default(x, aten_permute_copy_default); x = aten_permute_copy_default = None
+            def forward(self, x: "f32[1, 5]"):
+                # File: /pytorch/executorch/docs/source/tutorials_source/export-to-executorch-tutorial.py:266 in forward, code: return torch.nn.functional.linear(x, torch.randn(10, 5))
+                aten_randn_default: "f32[10, 5]" = executorch_exir_dialects_edge__ops_aten_randn_default([10, 5], device = device(type='cpu'), pin_memory = False)
+                aten_permute_copy_default: "f32[5, 10]" = executorch_exir_dialects_edge__ops_aten_permute_copy_default(aten_randn_default, [1, 0]); aten_randn_default = None
+                aten_mm_default: "f32[1, 10]" = executorch_exir_dialects_edge__ops_aten_mm_default(x, aten_permute_copy_default); x = aten_permute_copy_default = None
                 return (aten_mm_default,)
     Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='x'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='aten_mm_default'), target=None)])
     Range constraints: {}
-    Edge Dialect graph of decode
+    Edge Dialect graph of encode
     ExportedProgram:
         class GraphModule(torch.nn.Module):
-            def forward(self, x: "f32[1, 5]"):
-                # File: /pytorch/executorch/docs/source/tutorials_source/export-to-executorch-tutorial.py:266 in forward, code: return torch.nn.functional.linear(x, torch.randn(10, 5))
-                aten_randn_default: "f32[10, 5]" = executorch_exir_dialects_edge__ops_aten_randn_default([10, 5], device = device(type='cpu'), pin_memory = False)
-                aten_permute_copy_default: "f32[5, 10]" = executorch_exir_dialects_edge__ops_aten_permute_copy_default(aten_randn_default, [1, 0]); aten_randn_default = None
-                aten_mm_default: "f32[1, 10]" = executorch_exir_dialects_edge__ops_aten_mm_default(x, aten_permute_copy_default); x = aten_permute_copy_default = None
+            def forward(self, x: "f32[1, 10]"):
+                # File: /pytorch/executorch/docs/source/tutorials_source/export-to-executorch-tutorial.py:261 in forward, code: return torch.nn.functional.linear(x, torch.randn(5, 10))
+                aten_randn_default: "f32[5, 10]" = executorch_exir_dialects_edge__ops_aten_randn_default([5, 10], device = device(type='cpu'), pin_memory = False)
+                aten_permute_copy_default: "f32[10, 5]" = executorch_exir_dialects_edge__ops_aten_permute_copy_default(aten_randn_default, [1, 0]); aten_randn_default = None
+                aten_mm_default: "f32[1, 5]" = executorch_exir_dialects_edge__ops_aten_mm_default(x, aten_permute_copy_default); x = aten_permute_copy_default = None
                 return (aten_mm_default,)
     Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='x'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='aten_mm_default'), target=None)])
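
For context, the two graphs above are the tutorial's multi-method export output: with more than one program, each is saved under its dictionary name instead of the default "forward". Below is a minimal sketch of how output like this can be produced; the ``Encode``/``Decode`` module names and the exact call sequence are assumptions reconstructed from the ``# File:`` comments in the graphs, not code copied from this diff.

.. code-block:: python

    import torch
    from torch.export import export
    from executorch.exir import to_edge

    class Encode(torch.nn.Module):  # assumed name, inferred from the graph titles
        def forward(self, x):
            # In the Edge dialect, linear() decomposes into randn, permute_copy,
            # and mm -- matching the aten_* nodes shown in the graphs above.
            return torch.nn.functional.linear(x, torch.randn(5, 10))

    class Decode(torch.nn.Module):  # assumed name
        def forward(self, x):
            return torch.nn.functional.linear(x, torch.randn(10, 5))

    # Export each method separately, then lower both into a single
    # EdgeProgramManager keyed by method name.
    edge = to_edge({
        "encode": export(Encode(), (torch.randn(1, 10),)),
        "decode": export(Decode(), (torch.randn(1, 5),)),
    })
    for name in ("encode", "decode"):
        print(f"Edge Dialect graph of {name}")
        print(edge.exported_program(name))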
@@ -1278,7 +1278,7 @@ Links Mentioned

.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 1.534 seconds)
+**Total running time of the script:** (0 minutes 1.464 seconds)


.. _sphx_glr_download_tutorials_export-to-executorch-tutorial.py:
2 changes: 1 addition & 1 deletion main/_sources/tutorials/sdk-integration-tutorial.rst.txt
@@ -405,7 +405,7 @@ Links Mentioned

.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 1.217 seconds)
+**Total running time of the script:** (0 minutes 1.419 seconds)


.. _sphx_glr_download_tutorials_sdk-integration-tutorial.py:
6 changes: 3 additions & 3 deletions main/_sources/tutorials/sg_execution_times.rst.txt
@@ -6,10 +6,10 @@

Computation times
=================
-**00:02.751** total execution time for **tutorials** files:
+**00:02.883** total execution time for **tutorials** files:

+---------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorials_export-to-executorch-tutorial.py` (``export-to-executorch-tutorial.py``) | 00:01.534 | 0.0 MB |
+| :ref:`sphx_glr_tutorials_export-to-executorch-tutorial.py` (``export-to-executorch-tutorial.py``) | 00:01.464 | 0.0 MB |
+---------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_tutorials_sdk-integration-tutorial.py` (``sdk-integration-tutorial.py``)           | 00:01.217 | 0.0 MB |
+| :ref:`sphx_glr_tutorials_sdk-integration-tutorial.py` (``sdk-integration-tutorial.py``)           | 00:01.419 | 0.0 MB |
+---------------------------------------------------------------------------------------------------+-----------+--------+
2 changes: 1 addition & 1 deletion main/searchindex.js

Large diffs are not rendered by default.

