Adding ReadMe for Layer by Layer Debug Targets #2305

Merged Nov 28, 2023 (9 commits)

77 additions & 0 deletions: debugging_output.md
The TFLM debugging output tools let TFLM users debug their models by comparing
the intermediate values (the output of each OP/kernel) produced after invoke
between TFLM and TfLite. They also provide a way to compare intermediate values
between the TFLM x86 implementations and optimized implementations.


The workflow can be divided into two parts:

The first is a C++ binary that takes a TfLite model and writes a file
containing random inputs and the corresponding output values for each layer of
the model it was given.
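Conceptually, each per-layer record pairs an op's inputs with its outputs. The sketch below is purely illustrative (the real tool serializes these records as a flatbuffer; `LayerRecord` and `make_random_record` are hypothetical names, not part of the TFLM codebase):

```python
from dataclasses import dataclass, field

import numpy as np


@dataclass
class LayerRecord:
    """Hypothetical per-layer record: one op's inputs and outputs."""
    op_name: str
    inputs: list = field(default_factory=list)   # list of np.ndarray
    outputs: list = field(default_factory=list)  # list of np.ndarray


def make_random_record(op_name, in_shape, out_shape, seed=0):
    # The real C++ tool feeds random inputs through the model and captures
    # each op's output; here we only fabricate shapes to show the structure.
    rng = np.random.default_rng(seed)
    return LayerRecord(
        op_name=op_name,
        inputs=[rng.standard_normal(in_shape).astype(np.float32)],
        outputs=[rng.standard_normal(out_shape).astype(np.float32)],
    )


record = make_random_record("CONV_2D", (1, 8, 8, 3), (1, 8, 8, 16))
print(record.op_name, record.inputs[0].shape, record.outputs[0].shape)
```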

The second is a Python script that takes a TfLite model and can optionally take
the file produced by the C++ binary mentioned above. When only the TfLite
model is provided as input, the script generates random input and compares
TFLM vs TfLite inference outputs for each layer of the model. When the file
from the C++ binary is provided alongside the TfLite model, the script compares
TFLM x86 inference against the expected outputs in that file.
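The core of either comparison is a per-layer tolerance check. A minimal sketch of that logic with NumPy (the actual script's function names and tolerances may differ; `compare_layer_outputs` is an illustrative name):

```python
import numpy as np


def compare_layer_outputs(tflm_outputs, reference_outputs, atol=1e-5):
    """Return the indices of layers whose outputs disagree beyond tolerance.

    tflm_outputs / reference_outputs: lists of np.ndarray, one per layer.
    """
    mismatched = []
    for i, (got, want) in enumerate(zip(tflm_outputs, reference_outputs)):
        if not np.allclose(got, want, atol=atol):
            mismatched.append(i)
    return mismatched


# Toy example: layer 1 disagrees beyond the tolerance.
ref = [np.zeros(4), np.ones(4), np.full(4, 2.0)]
got = [np.zeros(4), np.ones(4) + 1e-3, np.full(4, 2.0)]
print(compare_layer_outputs(got, ref))
```

Reporting mismatches by layer index is what lets the tool localize a bug to a single kernel rather than to the whole model.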


# C++ Expected Layer by Layer Output Tool on TFLite Micro

This C++ binary takes a TfLite model and produces a flatbuffer file with the
inputs and corresponding output values for each layer appended into it. That
file can then be passed to the Python debugging tool, which compares those
golden values against the x86 TFLM reference kernel implementation.

The C++ tool/binary writes a debugging file to the path provided in the
2nd argument, using the TfLite model provided in the 1st argument.

##### Bazel/Blaze command:

```
bazel run tensorflow/lite/micro/tools:layer_cc -- \
</path/to/input_model.tflite>
</path/to/output.file_name>
```

##### How to build using Makefile:

```
make -f tensorflow/lite/micro/tools/make/Makefile layer_by_layer_output_tool -j24
```

# Python Layer by Layer Debugging Tool

The Python tool/script can first be used to compare TFLM vs TfLite outputs for
random inputs by providing only a TfLite file.

#### TfLite vs TFLM command:
```
bazel run tensorflow/lite/micro/tools:layer_by_layer_debugger -- \
--input_tflite_file=</path/to/my_model.tflite>
```

The Python tool/script can also be used to compare TFLM's Python x86 output
against the expected output provided by the C++ tool/binary.

#### TFLM vs Expected Command:
```
bazel run tensorflow/lite/micro/tools:layer_by_layer_debugger -- \
--input_tflite_file=</path/to/my_model.tflite> \
--layer_by_layer_data_file=</path/to/my_debug_flatbuffer_file>
```

#### Optional Flags:

`--print_dump`
When this flag is set, the script prints the TFLM output for each layer that is
compared.

`--rng`
Integer seed for the random number generator used to produce input data for
comparisons against TFLite. (Default: 42)
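Seeding the generator makes failures reproducible: the same seed yields the same "random" inputs on every run. A sketch of what seeded input generation looks like, assuming a NumPy-style RNG (`generate_input` is an illustrative name; the real script's generator and value range may differ):

```python
import numpy as np


def generate_input(shape, dtype=np.float32, seed=42):
    # A fixed seed means repeated runs exercise the model with
    # identical input tensors, so a per-layer mismatch can be replayed.
    rng = np.random.default_rng(seed)
    return rng.uniform(-1.0, 1.0, size=shape).astype(dtype)


a = generate_input((1, 4), seed=42)
b = generate_input((1, 4), seed=42)
print(np.array_equal(a, b))  # same seed -> identical inputs
```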