From 3d765e626277b899531dc164fba7f93c24e22f4f Mon Sep 17 00:00:00 2001 From: ddavis-2015 Date: Fri, 18 Oct 2024 18:01:21 -0700 Subject: [PATCH] Squashed commit of the following: commit eaee851be6bce4deece95f0e3722a80fdc46ec9f Author: ddavis-2015 Date: Fri Oct 18 17:48:48 2024 -0700 Squashed commit of the following: commit 48942656642b9ecd79a809babbd6bf6f00e62c94 Author: ddavis-2015 Date: Fri Oct 18 17:48:05 2024 -0700 pre-merge empty commit commit a110e412971aecaac50a635c94e4fcaf7d7f0397 Author: ddavis-2015 Date: Fri Oct 18 16:17:13 2024 -0700 fix C++ bitwidth 6 & 7 decompression commit efedcc2d509631f1a745941788c78deb81b70001 Author: ddavis-2015 Date: Fri Oct 18 10:18:50 2024 -0700 working decompression unit test commit 81ecf2e878d07a165f16b9dff53d6fcaac1c35e8 Author: ddavis-2015 Date: Thu Oct 17 18:17:06 2024 -0700 decompression unit test improvements commit b318421f63f954c2e983b223e90f1c86ea2d9022 Author: ddavis-2015 Date: Wed Oct 16 17:34:09 2024 -0700 add decompression unit test commit 9bb2b6312c84f1ce7855f48abb23ec4f2daa50d0 Author: ddavis-2015 Date: Sun Oct 13 18:34:01 2024 -0700 cleanup commit 77bb05deca05c0c7e7002c3983f61767ad16691f Author: ddavis-2015 Date: Sun Oct 13 18:29:33 2024 -0700 align compressed tensor data as per schema commit ad2b1c3b29cdd1498c64adb952930127c6d70309 Author: ddavis-2015 Date: Sat Oct 12 22:35:54 2024 -0700 reduce HIFI5 decompression code size commit 99c6e3558271d991b194ada48d2f1ce200c7d573 Author: ddavis-2015 Date: Fri Oct 11 14:02:58 2024 -0700 revert to original Cadence bit width 4 code commit 2388549c478a0eed4c93604de73d1743618d03ed Author: ddavis-2015 Date: Thu Oct 10 17:50:29 2024 -0700 refactor decompression code into reference and platform specific Apply some Xtensa acceleration code changes commit b84853cb9830efb89aaacb8b6b5fc0a1c608a7e0 Author: ddavis-2015 Date: Tue Oct 8 16:08:55 2024 -0700 testing commit c107f4288e098ed658b945686f7f139c0addaa4e Author: Ryan Kuester Date: Thu Oct 17 14:31:03 2024 -0500 refactor: move misplaced TF_LITE_REMOVE_VIRTUAL_DELETEs to private: Move several TF_LITE_REMOVE_VIRTUAL_DELETE declarations that are wrongly in a public section of their classes. To have the intended effect, as documented in t/l/m/compatibility.h, these must be in a private section. commit 7b3a2bd1d250ece64d33fd0e407d34a405d9a9f1 Author: Ryan Kuester Date: Thu Oct 17 12:36:46 2024 -0500 build(bazel): always build with TF_LITE_STATIC_MEMORY Add TF_LITE_STATIC_MEMORY to the defines set globally for TFLM builds in Bazel. TFLM always builds with this set in Make, and it appears to have been an oversight that it wasn't set during Bazel builds. Not having it set in Bazel caused some unit tests to pass under Bazel that failed under Make. At the same time, add -fno-exceptions. This flag is also always set in Make builds. Without it, setting TF_LITE_STATIC_MEMORY breaks the build. TF_LITE_STATIC_MEMORY triggers TF_LITE_REMOVE_VIRTUAL_DELETE in t/l/m/compatibility.h, which makes operator delete private in certain classes. When exceptions are enabled, a placement new with those classes is allowed to throw an exception, and operator delete is implicitly called during the unwind. The build breaks because operator delete can't be called if it's private. Disabling exceptions eliminates the unwind code that calls operator delete implicitly, and thus the build succeeds. In any case, -fno-exceptions should have been used in Bazel builds, matching the flags used in Make and the no-exceptions design requirement of the TFLM project. 
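As a rough illustration of the pattern the two commits above depend on, here is a minimal, self-contained C++ sketch. The macro body and class names below are simplified stand-ins (the real TF_LITE_REMOVE_VIRTUAL_DELETE lives in tensorflow/lite/micro/compatibility.h and is only non-empty when TF_LITE_STATIC_MEMORY is defined): the macro must sit in a private section so that `delete` on such an object is a compile-time error rather than a silent no-op, and, as the -fno-exceptions commit above explains, exceptions must be disabled so the compiler never has to reference the now-private operator delete while unwinding.

```
// Simplified stand-in for the macro in tensorflow/lite/micro/compatibility.h;
// the real macro is only non-empty when TF_LITE_STATIC_MEMORY is defined.
#define TF_LITE_REMOVE_VIRTUAL_DELETE \
  void operator delete(void*) {}

#include <cstddef>
#include <new>

// Hypothetical allocator interface, standing in for classes such as
// PersistentArenaBufferAllocator.
class ExampleBufferAllocator {
 public:
  virtual ~ExampleBufferAllocator() = default;
  virtual void* Allocate(std::size_t bytes) = 0;

 private:
  // In a private section the macro makes `delete obj;` a compile-time error
  // for callers; in a public section it would compile and silently free nothing.
  TF_LITE_REMOVE_VIRTUAL_DELETE
};

class ExampleArenaAllocator : public ExampleBufferAllocator {
 public:
  void* Allocate(std::size_t) override { return nullptr; }
};

int main() {
  // TFLM-style usage: construct the object in a caller-provided arena with
  // placement new; never heap-allocate or delete it.
  alignas(ExampleArenaAllocator) unsigned char arena[sizeof(ExampleArenaAllocator)];
  ExampleArenaAllocator* alloc = new (arena) ExampleArenaAllocator;

  // delete alloc;  // does not compile: operator delete is private.

  alloc->~ExampleArenaAllocator();  // explicit destruction only
  return 0;
}
```

With -fno-exceptions (now set in Bazel as well as Make builds), this pattern compiles cleanly; with exceptions enabled, the commit above reports that the implicit unwind path of a new-expression references the private operator delete and breaks the build.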
commit 1eb4e0db0813aa0ff1aa37e63830d4018bdeaced Author: Ryan Kuester Date: Thu Oct 17 11:05:45 2024 -0500 feat(python): don't check .sparsity in interpreter Remove the check for sparse tensors in the Python interpreter wrapper. This fixes a broken build when TF_LITE_STATIC_MEMORY is set, which should always be the case in TFLM. TfLiteTensor objects don't have a .sparsity member when TF_LITE_STATIC_MEMORY is set. This prepares for an upcoming commit setting TF_LITE_STATIC_MEMORY during Bazel builds. This hasn't caused build failures in Make builds, which have always set TF_LITE_STATIC_MEMORY, because Make builds don't build the Python interpreter wrapper. commit 7217095888c1d32a132fd0b4d34379b2a5791d7f Author: Ryan Kuester Date: Wed Oct 16 14:03:25 2024 -0500 fix(memory_arena_threshold): with TF_LITE_STATIC_MEMORY Fix the broken build due to redefinition of the threshold when TF_LITE_STATIC_MEMORY is set. Apparently this case isn't triggered in any Bazel test, only in Make. Simplify the threshold specification by only depending on whether compression is enabled and not also on whether TF_LITE_STATIC_MEMORY is in use. commit 8e4e55e1773beef7dbc0562e2d40aef8a0f0968a Author: Ryan Kuester Date: Thu Oct 10 12:38:03 2024 -0500 build(bazel): disable codegen when building --//:with_compression The codegen prototype code is not compatible with the changes which implement model compression made to the core TFLM components. For now, disable codegen targets when building with compression enabled. commit 884a234ba6cb21f11cc08b170792a9676d7a712e Author: Ryan Kuester Date: Tue Oct 15 18:31:01 2024 -0500 build(bazel): compile in compression when --//:with_compression Conditionally compile in support for compressed tensors when the option --//:with_compression is given. commit a1d459bb4b56c1d931a8b6bb10336001867a4a6d Author: Ryan Kuester Date: Thu Oct 10 12:28:39 2024 -0500 build(bazel): add --//with_compression build setting Add a --//with_compression user-defined build setting and a corresponding configuration setting. commit 4edc56419485660a673cd045dc521b847260283f Author: Ryan Kuester Date: Thu Oct 10 12:24:53 2024 -0500 build(bazel): fix compression-related dependencies of micro_allocator commit a52f97f86f32ba5c323c2f67f7f075ea50631ba2 Author: Ryan Kuester Date: Tue Oct 15 17:28:09 2024 -0500 build(bazel): replace cc_* with tflm_cc_* in remaining TFLM code Replace cc_* targets remaining in TFLM code with tflm_cc_* targets. These are targets which did not formerly use the common copts. Avoid changing imported TFLite code, if for no other reason than to avoid merge conflicts during the automatic sync with upstream TFLite. commit a6368f439e05d8c6a2a6a1457cdb6f47113b79ad Author: Ryan Kuester Date: Fri Oct 11 16:08:34 2024 -0500 build(bazel): introduce tflm_cc_* macros, refactoring away micro_copts Remove micro_copts() by replacing every cc_* target that used them with a tflm_cc_* equivalent, and setting those common copts in one place, inside the tflm_cc_* macro. This is the first of several commits introducing tflm_cc_* macros in place of cc_binary, cc_library, and cc_test. Motivated by the upcoming need to support conditional compilation, the objective is to centralize build configuration rather than requiring (and remembering that) each cc_* target in the project add the same common attributes such as compiler options and select()ed #defines. 
Alternatives such as setting global options on the command line or in .bazelrc, even if simplified with a --config option, fail to preserve flags and hooks for configuration in the case TFLM is used as an external repository by an application project. Nor is it easy in that case for individual targets to override an otherwise global setting. commit 1518422f151371e285fcc6a5b62a46f7659b3624 Author: Ryan Kuester Date: Thu Oct 10 23:56:49 2024 -0500 chore: remove obsolete ci/temp_patches Remove ci/temp_patches, which was obsoleted in 23f608fd once it was no longer used by the sync script. It should have been deleted then. Remove it not only to clean up dead code, but because it contains a reference to `micro_copts`, which is about to be refactored away, and we don't want to leave stray references to it in the tree. commit 18ef08063e27cf64c52e8e47069b24f0dd1b3405 Author: Ryan Kuester Date: Tue Oct 8 17:58:12 2024 -0500 refactor: use metadata_saved.h instead of metadata_generated.h Use the generated file metadata_saved.h instead of metadata_generated.h for the reasons explained in t/l/m/compression/BUILD:metadata_saved. Delete metadata_generated.h from the source tree as it is not maintained. commit 5a02e302dcc44988a9a5bd78f364329ec93488e9 Author: Ryan Kuester Date: Thu Oct 10 13:46:46 2024 -0500 test(memory_arena_threshold): adjust expected value with compression Fix a test failure by setting a different expected value for the persistent buffer allocation when compression is configured in. The allocation was allowed to vary by 3%; however, compression adds ~10%. Set the expected value to the measured value when compression is configured in. commit 01bc582ba46a384a5c645ab3851128a0232d3d6b Author: Ryan Kuester Date: Thu Oct 10 13:35:10 2024 -0500 test(memory_arena_threshold): don't expect exact allocation values Remove the check for allocation sizes to exactly match expected values. This check immediately followed--and thus rendered pointless---a check that sizes are within a certain percentage, which seems to be the true intent of the test. commit e0aae773e300d27ac8b163832a8e4a58bfd8e082 Merge: e3280294657 e86d97b6237 Author: Ryan Kuester Date: Wed Oct 16 13:39:56 2024 -0500 Merge branch 'main' into compress-testing commit e3280294657932e83475e6b30f77d0f7ad390061 Author: Ryan Kuester Date: Mon Oct 7 12:52:23 2024 -0500 build(bazel): fix dependencies in work-in-progress compression code In the Bazel build, add dependencies needed by the code added to t/l/m:micro_context for decompression. The Bazel build with or without compression was broken without this. commit e86d97b6237f88ab5925c0b41e3e3589a1560d86 Author: RJ Ascani Date: Mon Oct 7 10:36:26 2024 -0700 Replace rascani with suleshahid on OWNERS (#2715) BUG=none commit b77342889e665d505a7e42f913fa05c2c92c848f Author: Ryan Kuester Date: Fri Oct 4 09:59:10 2024 -0500 feat(compression): add work-in-progress compression and viewer tools commit f6bd4861da99afacc51559e1cb4e49995fa50bf3 Merge: 487c17ab6de e3f6dc1ff1e Author: Ryan Kuester Date: Fri Oct 4 09:36:24 2024 -0500 Merge branch 'main' into compress-prerelease commit e3f6dc1ff1ea59d8f1a98c97456c349b1ad83124 Author: David Davis Date: Thu Oct 3 10:45:00 2024 -0700 Compression documentation (#2711) @tensorflow/micro Add documentation describing some compression/decompression internals and makefile build procedures. 
bug=#2710 commit b3967a901eedcf3128af48b4bb1421c274707f7e Author: Ryan Kuester Date: Wed Oct 2 13:36:01 2024 -0500 style: add .style.yapf to control yapf styling of Python code (#2709) Add a .style.yapf file so yapf can be used to style Python code without passing the project's style via command line option. Remove the corresponding patch to pigweed's call to yapf, used by CI, and instead let it too rely on .style.yapf. Remove the developer documentation's instruction to use the command line option. BUG=description commit d24957732bbc757e444a710191ad605628201fff Author: Ryan Kuester Date: Tue Oct 1 16:16:45 2024 -0500 build(codegen): suppress noise in console output (#2708) Add a --quiet option to the code_generator binary so that when it's used within the build system, it doesn't print unexpected, distracting noise to the console. Generally, compiler or generator commands don't print output unless there's an error. BUG=description --- .github/workflows/sync.yml | 2 +- .style.yapf | 3 + BUILD | 14 ++ CODEOWNERS | 4 +- CONTRIBUTING.md | 2 +- ci/temp_patches/tf_update_visibility.patch | 34 --- codegen/build_def.bzl | 11 +- codegen/code_generator.py | 22 +- codegen/inference_generator.py | 1 - codegen/runtime/BUILD | 5 +- python/tflite_micro/BUILD | 5 +- python/tflite_micro/interpreter_wrapper.cc | 5 - signal/micro/kernels/BUILD | 5 +- tensorflow/compiler/mlir/lite/core/api/BUILD | 9 +- tensorflow/lite/core/api/BUILD | 13 +- .../lite/experimental/microfrontend/lib/BUILD | 12 +- tensorflow/lite/kernels/BUILD | 9 +- tensorflow/lite/kernels/internal/BUILD | 9 +- tensorflow/lite/micro/BUILD | 148 ++++++------ tensorflow/lite/micro/arena_allocator/BUILD | 29 +-- .../non_persistent_arena_buffer_allocator.h | 4 +- .../persistent_arena_buffer_allocator.h | 3 +- .../single_arena_buffer_allocator.h | 4 +- tensorflow/lite/micro/benchmarks/BUILD | 18 +- tensorflow/lite/micro/build_def.bzl | 40 ++- tensorflow/lite/micro/compression/BUILD | 31 +-- tensorflow/lite/micro/compression/compress.py | 8 + .../micro/compression/metadata_generated.h | 228 ------------------ .../lite/micro/compression/test_models.py | 5 +- tensorflow/lite/micro/compression/view.py | 87 ++++--- .../lite/micro/compression/view_test.py | 88 +++++++ tensorflow/lite/micro/docs/compression.md | 2 +- .../lite/micro/examples/hello_world/BUILD | 8 +- .../micro/examples/memory_footprint/BUILD | 5 +- .../lite/micro/examples/micro_speech/BUILD | 18 +- .../micro/examples/person_detection/BUILD | 25 +- tensorflow/lite/micro/hexdump.cc | 103 ++++++++ tensorflow/lite/micro/hexdump.h | 35 +++ tensorflow/lite/micro/hexdump_test.cc | 58 +++++ .../micro/integration_tests/seanet/add/BUILD | 9 +- .../micro/integration_tests/seanet/conv/BUILD | 9 +- .../integration_tests/seanet/leaky_relu/BUILD | 9 +- .../micro/integration_tests/seanet/pad/BUILD | 9 +- .../integration_tests/seanet/quantize/BUILD | 9 +- .../seanet/strided_slice/BUILD | 9 +- .../micro/integration_tests/seanet/sub/BUILD | 9 +- .../seanet/transpose_conv/BUILD | 9 +- .../integration_tests/templates/BUILD.mako | 9 +- tensorflow/lite/micro/kernels/BUILD | 190 +++++++-------- tensorflow/lite/micro/kernels/testdata/BUILD | 7 +- .../lite/micro/memory_arena_threshold_test.cc | 13 +- tensorflow/lite/micro/memory_planner/BUILD | 17 +- tensorflow/lite/micro/micro_allocator.cc | 6 +- .../lite/micro/micro_mutable_op_resolver.h | 4 +- .../lite/micro/python/interpreter/src/BUILD | 5 +- tensorflow/lite/micro/test_helpers.cc | 2 +- tensorflow/lite/micro/testing/BUILD | 10 +- 
tensorflow/lite/micro/tflite_bridge/BUILD | 8 +- .../lite/micro/tools/benchmarking/BUILD | 16 +- .../micro/tools/ci_build/test_code_style.sh | 4 +- .../templates/BUILD.mako | 11 +- .../lite/micro/tools/make/pigweed.patch | 9 - .../tools/project_generation/BUILD.testing | 9 +- third_party/python_requirements.in | 1 + third_party/python_requirements.txt | 124 ++++++++++ 65 files changed, 952 insertions(+), 677 deletions(-) create mode 100644 .style.yapf delete mode 100644 ci/temp_patches/tf_update_visibility.patch delete mode 100644 tensorflow/lite/micro/compression/metadata_generated.h create mode 100644 tensorflow/lite/micro/compression/view_test.py create mode 100644 tensorflow/lite/micro/hexdump.cc create mode 100644 tensorflow/lite/micro/hexdump.h create mode 100644 tensorflow/lite/micro/hexdump_test.cc diff --git a/.github/workflows/sync.yml b/.github/workflows/sync.yml index 0c5434e4599..7e488c9bb32 100644 --- a/.github/workflows/sync.yml +++ b/.github/workflows/sync.yml @@ -62,5 +62,5 @@ jobs: author: TFLM-bot body: "BUG=automated sync from upstream\nNO_CHECK_TFLITE_FILES=automated sync from upstream" labels: bot:sync-tf, ci:run - reviewers: rascani + reviewers: suleshahid diff --git a/.style.yapf b/.style.yapf new file mode 100644 index 00000000000..de72b98c2c8 --- /dev/null +++ b/.style.yapf @@ -0,0 +1,3 @@ +[style] +based_on_style = pep8 +indent_width = 2 diff --git a/BUILD b/BUILD index 2b539afee8b..57eadaa224d 100644 --- a/BUILD +++ b/BUILD @@ -7,3 +7,17 @@ refresh_compile_commands( name = "refresh_compile_commands", targets = ["//..."], ) + +load("@bazel_skylib//rules:common_settings.bzl", "bool_flag") + +bool_flag( + name = "with_compression", + build_setting_default = False, +) + +config_setting( + name = "with_compression_enabled", + flag_values = { + ":with_compression": "True", + }, +) diff --git a/CODEOWNERS b/CODEOWNERS index 6c8b497338d..089515c2dc1 100644 --- a/CODEOWNERS +++ b/CODEOWNERS @@ -1,4 +1,4 @@ * @tensorflow/micro -/.github/ @advaitjain @rockyrhodes @rascani -/ci/ @advaitjain @rockyrhodes @rascani +/.github/ @advaitjain @rockyrhodes @suleshahid +/ci/ @advaitjain @rockyrhodes @suleshahid diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index e1410af4f91..102614d4369 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -190,7 +190,7 @@ Below are some tips that might be useful and improve the development experience. ``` pip install yapf - yapf log_parser.py -i --style='{based_on_style: pep8, indent_width: 2}' + yapf log_parser.py -i ``` * Add a git hook to check for code style etc.
prior to creating a pull request: diff --git a/ci/temp_patches/tf_update_visibility.patch b/ci/temp_patches/tf_update_visibility.patch deleted file mode 100644 index a98061eac53..00000000000 --- a/ci/temp_patches/tf_update_visibility.patch +++ /dev/null @@ -1,34 +0,0 @@ -diff --git a/tensorflow/lite/micro/kernels/BUILD b/tensorflow/lite/micro/kernels/BUILD -index 22bcf2b1efd..6f27814f6a2 100644 ---- a/tensorflow/lite/micro/kernels/BUILD -+++ b/tensorflow/lite/micro/kernels/BUILD -@@ -1,7 +1,5 @@ --load( -- "//tensorflow/lite/micro:build_def.bzl", -- "micro_copts", --) -+load("//tensorflow/lite/micro:build_def.bzl", "micro_copts") -+load("//tensorflow:extra_rules.bzl", "tflm_kernel_friends") - - package( - features = ["-layering_check"], -@@ -23,6 +21,11 @@ package_group( - packages = ["//tensorflow/lite/micro"], - ) - -+package_group( -+ name = "kernel_friends", -+ packages = tflm_kernel_friends(), -+) -+ - #################################### - # C++ libraries - #################################### -@@ -245,6 +248,7 @@ cc_library( - ], - hdrs = ["kernel_util.h"], - visibility = [ -+ ":kernel_friends", - ":micro", - ], - deps = [ diff --git a/codegen/build_def.bzl b/codegen/build_def.bzl index 28b6232b339..1b56300513e 100644 --- a/codegen/build_def.bzl +++ b/codegen/build_def.bzl @@ -1,6 +1,6 @@ """ Build rule for generating ML inference code from TFLite model. """ -load("//tensorflow/lite/micro:build_def.bzl", "micro_copts") +load("//tensorflow/lite/micro:build_def.bzl", "tflm_cc_library") def tflm_inference_library( name, @@ -20,12 +20,12 @@ def tflm_inference_library( srcs = [tflite_model], outs = [name + ".h", name + ".cc"], tools = ["//codegen:code_generator"], - cmd = "$(location //codegen:code_generator) " + + cmd = "$(location //codegen:code_generator) --quiet " + "--model=$< --output_dir=$(RULEDIR) --output_name=%s" % name, visibility = ["//visibility:private"], ) - native.cc_library( + tflm_cc_library( name = name, hdrs = [name + ".h"], srcs = [name + ".cc"], @@ -39,6 +39,9 @@ def tflm_inference_library( "//tensorflow/lite/micro:micro_common", "//tensorflow/lite/micro:micro_context", ], - copts = micro_copts(), + target_compatible_with = select({ + "//conditions:default": [], + "//:with_compression_enabled": ["@platforms//:incompatible"], + }), visibility = visibility, ) diff --git a/codegen/code_generator.py b/codegen/code_generator.py index 91cab732f71..8a12632f173 100644 --- a/codegen/code_generator.py +++ b/codegen/code_generator.py @@ -15,6 +15,7 @@ """ Generates C/C++ source code capable of performing inference for a model. 
""" import os +import pathlib from absl import app from absl import flags @@ -22,7 +23,6 @@ from tflite_micro.codegen import inference_generator from tflite_micro.codegen import graph -from tflite_micro.tensorflow.lite.tools import flatbuffer_utils # Usage information: # Default: @@ -48,15 +48,33 @@ "'model' basename."), required=False) +_QUIET = flags.DEFINE_bool( + name="quiet", + default=False, + help="Suppress informational output (e.g., for use in for build system)", + required=False) + def main(argv: Sequence[str]) -> None: + if _QUIET.value: + restore = os.environ.get("TF_CPP_MIN_LOG_LEVEL", "0") + os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3" + from tflite_micro.tensorflow.lite.tools import flatbuffer_utils + os.environ["TF_CPP_MIN_LOG_LEVEL"] = restore + else: + from tflite_micro.tensorflow.lite.tools import flatbuffer_utils + output_dir = _OUTPUT_DIR.value or os.path.dirname(_MODEL_PATH.value) output_name = _OUTPUT_NAME.value or os.path.splitext( os.path.basename(_MODEL_PATH.value))[0] model = flatbuffer_utils.read_model(_MODEL_PATH.value) - print("Generating inference code for model: {}".format(_MODEL_PATH.value)) + if not _QUIET.value: + print("Generating inference code for model: {}".format(_MODEL_PATH.value)) + output_path = pathlib.Path(output_dir) / output_name + print(f"Generating {output_path}.h") + print(f"Generating {output_path}.cc") inference_generator.generate(output_dir, output_name, graph.OpCodeTable([model]), graph.Graph(model)) diff --git a/codegen/inference_generator.py b/codegen/inference_generator.py index fe351f36550..44cec7d8816 100644 --- a/codegen/inference_generator.py +++ b/codegen/inference_generator.py @@ -35,7 +35,6 @@ class ModelData(TypedDict): def _render(output_file: pathlib.Path, template_file: pathlib.Path, model_data: ModelData) -> None: - print("Generating {}".format(output_file)) t = template.Template(filename=str(template_file)) with output_file.open('w+') as file: file.write(t.render(**model_data)) diff --git a/codegen/runtime/BUILD b/codegen/runtime/BUILD index d23cb7078eb..f50652a677b 100644 --- a/codegen/runtime/BUILD +++ b/codegen/runtime/BUILD @@ -1,12 +1,11 @@ -load("//tensorflow/lite/micro:build_def.bzl", "micro_copts") +load("//tensorflow/lite/micro:build_def.bzl", "tflm_cc_library") package(default_visibility = ["//visibility:public"]) -cc_library( +tflm_cc_library( name = "micro_codegen_context", srcs = ["micro_codegen_context.cc"], hdrs = ["micro_codegen_context.h"], - copts = micro_copts(), deps = [ "//tensorflow/lite/c:common", "//tensorflow/lite/kernels:op_macros", diff --git a/python/tflite_micro/BUILD b/python/tflite_micro/BUILD index 4780b9a27ce..b557085b5bc 100644 --- a/python/tflite_micro/BUILD +++ b/python/tflite_micro/BUILD @@ -7,7 +7,7 @@ load("@rules_python//python:packaging.bzl", "py_package", "py_wheel") load("@tflm_pip_deps//:requirements.bzl", "requirement") load( "//tensorflow/lite/micro:build_def.bzl", - "micro_copts", + "tflm_cc_library", ) load( "//tensorflow:extra_rules.bzl", @@ -24,7 +24,7 @@ package_group( packages = tflm_python_op_resolver_friends(), ) -cc_library( +tflm_cc_library( name = "python_ops_resolver", srcs = [ "python_ops_resolver.cc", @@ -32,7 +32,6 @@ cc_library( hdrs = [ "python_ops_resolver.h", ], - copts = micro_copts(), visibility = [ ":op_resolver_friends", "//tensorflow/lite/micro/integration_tests:__subpackages__", diff --git a/python/tflite_micro/interpreter_wrapper.cc b/python/tflite_micro/interpreter_wrapper.cc index 53efe8e311b..b1be37388fd 100644 --- 
a/python/tflite_micro/interpreter_wrapper.cc +++ b/python/tflite_micro/interpreter_wrapper.cc @@ -104,11 +104,6 @@ bool CheckTensor(const TfLiteTensor* tensor) { return false; } - if (tensor->sparsity != nullptr) { - PyErr_SetString(PyExc_ValueError, "TFLM doesn't support sparse tensors"); - return false; - } - int py_type_num = TfLiteTypeToPyArrayType(tensor->type); if (py_type_num == NPY_NOTYPE) { PyErr_SetString(PyExc_ValueError, "Unknown tensor type."); diff --git a/signal/micro/kernels/BUILD b/signal/micro/kernels/BUILD index b7ac658246e..8ec3fb49cd2 100644 --- a/signal/micro/kernels/BUILD +++ b/signal/micro/kernels/BUILD @@ -1,11 +1,11 @@ load( "//tensorflow/lite/micro:build_def.bzl", - "micro_copts", + "tflm_cc_library", ) package(licenses = ["notice"]) -cc_library( +tflm_cc_library( name = "register_signal_ops", srcs = [ "delay.cc", @@ -31,7 +31,6 @@ cc_library( "irfft.h", "rfft.h", ], - copts = micro_copts(), visibility = [ "//tensorflow/lite/micro", ], diff --git a/tensorflow/compiler/mlir/lite/core/api/BUILD b/tensorflow/compiler/mlir/lite/core/api/BUILD index 7e5f6e21b2c..e25708623b9 100644 --- a/tensorflow/compiler/mlir/lite/core/api/BUILD +++ b/tensorflow/compiler/mlir/lite/core/api/BUILD @@ -1,15 +1,18 @@ load("//tensorflow/lite:build_def.bzl", "tflite_copts") -load("//tensorflow/lite/micro:build_def.bzl", "micro_copts") +load("//tensorflow/lite/micro:build_def.bzl", + "tflm_cc_library", + "tflm_copts", +) package( default_visibility = ["//visibility:public"], licenses = ["notice"], ) -cc_library( +tflm_cc_library( name = "error_reporter", srcs = ["error_reporter.cc"], hdrs = ["error_reporter.h"], - copts = tflite_copts() + micro_copts(), + copts = tflm_copts() + tflite_copts(), deps = [], ) diff --git a/tensorflow/lite/core/api/BUILD b/tensorflow/lite/core/api/BUILD index 6b5937593ec..cfe75c46093 100644 --- a/tensorflow/lite/core/api/BUILD +++ b/tensorflow/lite/core/api/BUILD @@ -1,12 +1,15 @@ load("//tensorflow/lite:build_def.bzl", "tflite_copts") -load("//tensorflow/lite/micro:build_def.bzl", "micro_copts") +load("//tensorflow/lite/micro:build_def.bzl", + "tflm_cc_library", + "tflm_copts", +) package( default_visibility = ["//visibility:private"], licenses = ["notice"], ) -cc_library( +tflm_cc_library( name = "api", srcs = [ "flatbuffer_conversions.cc", @@ -17,7 +20,7 @@ cc_library( "flatbuffer_conversions.h", "tensor_utils.h", ], - copts = tflite_copts() + micro_copts(), + copts = tflm_copts() + tflite_copts(), visibility = ["//visibility:public"], deps = [ ":error_reporter", @@ -33,13 +36,13 @@ cc_library( # also exported by the "api" target, so that targets which only want to depend # on these small abstract base class modules can express more fine-grained # dependencies without pulling in tensor_utils and flatbuffer_conversions. 
-cc_library( +tflm_cc_library( name = "error_reporter", hdrs = [ "error_reporter.h", "//tensorflow/compiler/mlir/lite/core/api:error_reporter.h", ], - copts = tflite_copts() + micro_copts(), + copts = tflm_copts() + tflite_copts(), visibility = [ "//visibility:public", ], diff --git a/tensorflow/lite/experimental/microfrontend/lib/BUILD b/tensorflow/lite/experimental/microfrontend/lib/BUILD index d42b16d926a..0d91e0e95d0 100644 --- a/tensorflow/lite/experimental/microfrontend/lib/BUILD +++ b/tensorflow/lite/experimental/microfrontend/lib/BUILD @@ -144,7 +144,7 @@ cc_test( name = "filterbank_test", srcs = ["filterbank_test.cc"], # Setting copts for experimental code to [], but this code should be fixed - # to build with the default copts (micro_copts()) + # to build with the default copts copts = [], deps = [ ":filterbank", @@ -156,7 +156,7 @@ cc_test( name = "frontend_test", srcs = ["frontend_test.cc"], # Setting copts for experimental code to [], but this code should be fixed - # to build with the default copts (micro_copts()) + # to build with the default copts copts = [], deps = [ ":frontend", @@ -168,7 +168,7 @@ cc_test( name = "log_scale_test", srcs = ["log_scale_test.cc"], # Setting copts for experimental code to [], but this code should be fixed - # to build with the default copts (micro_copts()) + # to build with the default copts copts = [], deps = [ ":log_scale", @@ -180,7 +180,7 @@ cc_test( name = "noise_reduction_test", srcs = ["noise_reduction_test.cc"], # Setting copts for experimental code to [], but this code should be fixed - # to build with the default copts (micro_copts()) + # to build with the default copts copts = [], deps = [ ":noise_reduction", @@ -192,7 +192,7 @@ cc_test( name = "pcan_gain_control_test", srcs = ["pcan_gain_control_test.cc"], # Setting copts for experimental code to [], but this code should be fixed - # to build with the default copts (micro_copts()) + # to build with the default copts copts = [], deps = [ ":pcan_gain_control", @@ -204,7 +204,7 @@ cc_test( name = "window_test", srcs = ["window_test.cc"], # Setting copts for experimental code to [], but this code should be fixed - # to build with the default copts (micro_copts()) + # to build with the default copts copts = [], deps = [ ":window", diff --git a/tensorflow/lite/kernels/BUILD b/tensorflow/lite/kernels/BUILD index 4b48db1ee83..1d34fbdc58f 100644 --- a/tensorflow/lite/kernels/BUILD +++ b/tensorflow/lite/kernels/BUILD @@ -1,5 +1,8 @@ load("//tensorflow/lite:build_def.bzl", "tflite_copts") -load("//tensorflow/lite/micro:build_def.bzl", "micro_copts") +load("//tensorflow/lite/micro:build_def.bzl", + "tflm_cc_library", + "tflm_copts", +) package( default_visibility = [ @@ -17,7 +20,7 @@ cc_library( deps = ["//tensorflow/lite/micro:micro_log"], ) -cc_library( +tflm_cc_library( name = "kernel_util", srcs = [ "kernel_util.cc", @@ -25,7 +28,7 @@ cc_library( hdrs = [ "kernel_util.h", ], - copts = tflite_copts() + micro_copts(), + copts = tflm_copts() + tflite_copts(), deps = [ "//tensorflow/lite:array", "//tensorflow/lite:kernel_api", diff --git a/tensorflow/lite/kernels/internal/BUILD b/tensorflow/lite/kernels/internal/BUILD index 17b868aa70c..cbe5d329a2e 100644 --- a/tensorflow/lite/kernels/internal/BUILD +++ b/tensorflow/lite/kernels/internal/BUILD @@ -1,5 +1,8 @@ load("//tensorflow/lite:build_def.bzl", "tflite_copts") -load("//tensorflow/lite/micro:build_def.bzl", "micro_copts") +load("//tensorflow/lite/micro:build_def.bzl", + "tflm_cc_library", + "tflm_copts", +) package( default_visibility = [ 
@@ -44,11 +47,11 @@ cc_library( copts = tflite_copts(), ) -cc_library( +tflm_cc_library( name = "quantization_util", srcs = ["quantization_util.cc"], hdrs = ["quantization_util.h"], - copts = tflite_copts() + micro_copts(), + copts = tflm_copts() + tflite_copts(), deps = [ ":compatibility", ":cppmath", diff --git a/tensorflow/lite/micro/BUILD b/tensorflow/lite/micro/BUILD index 1753465425d..8cc9a19d304 100644 --- a/tensorflow/lite/micro/BUILD +++ b/tensorflow/lite/micro/BUILD @@ -1,7 +1,10 @@ load("@bazel_skylib//:bzl_library.bzl", "bzl_library") load( "//tensorflow/lite/micro:build_def.bzl", - "micro_copts", + "tflm_cc_binary", + "tflm_cc_library", + "tflm_cc_test", + "tflm_copts", ) package( @@ -18,15 +21,14 @@ package_group( packages = ["//tensorflow/lite/micro/..."], ) -cc_library( +tflm_cc_library( name = "micro_compatibility", hdrs = [ "compatibility.h", ], - copts = micro_copts(), ) -cc_library( +tflm_cc_library( # TODO(b/187093492): Rename to micro_interpreter. name = "micro_framework", srcs = [ @@ -35,7 +37,6 @@ cc_library( hdrs = [ "micro_interpreter.h", ], - copts = micro_copts(), deps = [ ":memory_helpers", ":micro_allocator", @@ -53,7 +54,7 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "micro_context", srcs = [ "micro_context.cc", @@ -61,16 +62,17 @@ cc_library( hdrs = [ "micro_context.h", ], - copts = micro_copts(), deps = [ ":micro_common", ":micro_graph", ":micro_log", + ":micro_profiler", "//tensorflow/lite/c:common", + "//tensorflow/lite:type_to_tflitetype", ], ) -cc_library( +tflm_cc_library( name = "micro_interpreter_context", srcs = [ "micro_interpreter_context.cc", @@ -78,7 +80,6 @@ cc_library( hdrs = [ "micro_interpreter_context.h", ], - copts = micro_copts(), deps = [ ":memory_helpers", ":micro_allocator", @@ -90,18 +91,17 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "micro_common", hdrs = [ "micro_common.h", ], - copts = micro_copts(), deps = [ "//tensorflow/lite/c:common", ], ) -cc_library( +tflm_cc_library( name = "fake_micro_context", srcs = [ "fake_micro_context.cc", @@ -109,7 +109,6 @@ cc_library( hdrs = [ "fake_micro_context.h", ], - copts = micro_copts(), deps = [ ":memory_helpers", ":micro_arena_constants", @@ -121,10 +120,9 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "micro_graph", hdrs = ["micro_graph.h"], - copts = micro_copts(), deps = [ ":micro_common", ":micro_resource_variable", @@ -132,11 +130,10 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "micro_interpreter_graph", srcs = ["micro_interpreter_graph.cc"], hdrs = ["micro_interpreter_graph.h"], - copts = micro_copts(), deps = [ ":memory_helpers", ":micro_allocator", @@ -152,11 +149,10 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "mock_micro_graph", srcs = ["mock_micro_graph.cc"], hdrs = ["mock_micro_graph.h"], - copts = micro_copts(), deps = [ ":micro_allocator", ":micro_graph", @@ -166,9 +162,10 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "micro_allocator", srcs = [ + "compression.h", "micro_allocation_info.cc", "micro_allocator.cc", ], @@ -176,7 +173,6 @@ cc_library( "micro_allocation_info.h", "micro_allocator.h", ], - copts = micro_copts(), deps = [ ":flatbuffer_utils", ":memory_helpers", @@ -190,6 +186,7 @@ cc_library( "//tensorflow/lite/micro/arena_allocator:non_persistent_arena_buffer_allocator", "//tensorflow/lite/micro/arena_allocator:persistent_arena_buffer_allocator", "//tensorflow/lite/micro/arena_allocator:simple_memory_allocator", + "//tensorflow/lite/micro/compression:metadata_saved", 
"//tensorflow/lite/micro/memory_planner:greedy_memory_planner", "//tensorflow/lite/micro/memory_planner:linear_memory_planner", "//tensorflow/lite/micro/memory_planner:micro_memory_planner", @@ -200,20 +197,18 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "micro_arena_constants", hdrs = [ "micro_arena_constants.h", ], - copts = micro_copts(), deps = [], ) -cc_library( +tflm_cc_library( name = "flatbuffer_utils", srcs = ["flatbuffer_utils.cc"], hdrs = ["flatbuffer_utils.h"], - copts = micro_copts(), deps = [ "//tensorflow/lite/c:common", "//tensorflow/lite/schema:schema_fbs", @@ -221,11 +216,10 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "memory_helpers", srcs = ["memory_helpers.cc"], hdrs = ["memory_helpers.h"], - copts = micro_copts(), deps = [ "//tensorflow/lite/c:common", "//tensorflow/lite/kernels/internal:reference", @@ -235,7 +229,7 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "test_helpers", srcs = [ "test_helper_custom_ops.cc", @@ -245,7 +239,6 @@ cc_library( "test_helper_custom_ops.h", "test_helpers.h", ], - copts = micro_copts(), deps = [ ":memory_helpers", ":micro_utils", @@ -260,7 +253,7 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "op_resolvers", srcs = [ "micro_op_resolver.cc", @@ -269,7 +262,6 @@ cc_library( "micro_mutable_op_resolver.h", "micro_op_resolver.h", ], - copts = micro_copts(), deps = [ ":micro_compatibility", ":micro_log", @@ -283,7 +275,7 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "debug_log", srcs = [ "debug_log.cc", @@ -291,10 +283,9 @@ cc_library( hdrs = [ "debug_log.h", ], - copts = micro_copts(), ) -cc_library( +tflm_cc_library( name = "micro_log", srcs = [ "micro_log.cc", @@ -302,13 +293,12 @@ cc_library( hdrs = [ "micro_log.h", ], - copts = micro_copts(), deps = [ ":debug_log", ], ) -cc_library( +tflm_cc_library( name = "micro_resource_variable", srcs = [ "micro_resource_variable.cc", @@ -316,7 +306,6 @@ cc_library( hdrs = [ "micro_resource_variable.h", ], - copts = micro_copts(), deps = [ ":micro_allocator", ":micro_log", @@ -326,7 +315,7 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "micro_time", srcs = [ "micro_time.cc", @@ -334,19 +323,18 @@ cc_library( hdrs = [ "micro_time.h", ], - copts = micro_copts() + ["-DTF_LITE_USE_CTIME"], + copts = tflm_copts() + ["-DTF_LITE_USE_CTIME"], deps = ["//tensorflow/lite/c:common"], ) -cc_library( +tflm_cc_library( name = "micro_profiler_interface", hdrs = [ "micro_profiler_interface.h", ], - copts = micro_copts(), ) -cc_library( +tflm_cc_library( name = "micro_profiler", srcs = [ "micro_profiler.cc", @@ -354,7 +342,6 @@ cc_library( hdrs = [ "micro_profiler.h", ], - copts = micro_copts(), deps = [ ":micro_compatibility", ":micro_log", @@ -364,7 +351,7 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "micro_utils", srcs = [ "micro_utils.cc", @@ -372,7 +359,6 @@ cc_library( hdrs = [ "micro_utils.h", ], - copts = micro_copts(), deps = [ ":memory_helpers", ":micro_log", @@ -381,7 +367,21 @@ cc_library( ], ) -cc_library( +tflm_cc_library( + name = "hexdump", + srcs = [ + "hexdump.cc", + ], + hdrs = [ + "hexdump.h", + ], + deps = [ + ":span", + ":static_vector", + ], +) + +tflm_cc_library( name = "recording_allocators", srcs = [ "recording_micro_allocator.cc", @@ -390,7 +390,6 @@ cc_library( "recording_micro_allocator.h", "recording_micro_interpreter.h", ], - copts = micro_copts(), deps = [ ":micro_allocator", ":micro_compatibility", @@ -401,22 +400,20 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "span", hdrs = ["span.h"], 
- copts = micro_copts(), ) -cc_library( +tflm_cc_library( name = "static_vector", hdrs = ["static_vector.h"], - copts = micro_copts(), deps = [ "//tensorflow/lite/kernels:op_macros", ], ) -cc_library( +tflm_cc_library( name = "system_setup", srcs = [ "system_setup.cc", @@ -424,10 +421,9 @@ cc_library( hdrs = [ "system_setup.h", ], - copts = micro_copts(), ) -cc_test( +tflm_cc_test( name = "micro_log_test", srcs = [ "micro_log_test.cc", @@ -439,7 +435,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "micro_mutable_op_resolver_test", srcs = [ "micro_mutable_op_resolver_test.cc", @@ -451,7 +447,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "micro_interpreter_context_test", srcs = [ "micro_interpreter_context_test.cc", @@ -465,7 +461,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "fake_micro_context_test", srcs = [ "fake_micro_context_test.cc", @@ -479,7 +475,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "micro_interpreter_test", srcs = [ "micro_interpreter_test.cc", @@ -496,7 +492,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "micro_allocator_test", srcs = [ "micro_allocator_test.cc", @@ -514,7 +510,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "micro_allocation_info_test", srcs = [ "micro_allocation_info_test.cc", @@ -526,7 +522,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "recording_micro_allocator_test", srcs = [ "recording_micro_allocator_test.cc", @@ -541,7 +537,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "flatbuffer_utils_test", srcs = [ "flatbuffer_utils_test.cc", @@ -556,7 +552,19 @@ cc_test( ], ) -cc_test( +tflm_cc_test( + name = "hexdump_test", + size = "small", + srcs = [ + "hexdump_test.cc", + ], + deps = [ + ":hexdump", + "//tensorflow/lite/micro/testing:micro_test", + ], +) + +tflm_cc_test( name = "memory_helpers_test", srcs = [ "memory_helpers_test.cc", @@ -568,7 +576,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "span_test", size = "small", srcs = [ @@ -580,7 +588,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "testing_helpers_test", srcs = [ "testing_helpers_test.cc", @@ -592,7 +600,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "micro_utils_test", srcs = [ "micro_utils_test.cc", @@ -603,7 +611,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "micro_time_test", srcs = [ "micro_time_test.cc", @@ -614,7 +622,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "micro_resource_variable_test", srcs = ["micro_resource_variable_test.cc"], deps = [ @@ -623,7 +631,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "memory_arena_threshold_test", srcs = [ "memory_arena_threshold_test.cc", @@ -637,7 +645,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "static_vector_test", size = "small", srcs = [ diff --git a/tensorflow/lite/micro/arena_allocator/BUILD b/tensorflow/lite/micro/arena_allocator/BUILD index bd5ea7024ed..1f1b2951093 100644 --- a/tensorflow/lite/micro/arena_allocator/BUILD +++ b/tensorflow/lite/micro/arena_allocator/BUILD @@ -1,6 +1,6 @@ -load( - "//tensorflow/lite/micro:build_def.bzl", - "micro_copts", +load("//tensorflow/lite/micro:build_def.bzl", + "tflm_cc_library", + "tflm_cc_test", ) package( @@ -10,22 +10,20 @@ package( licenses = ["notice"], ) -cc_library( +tflm_cc_library( name = "ibuffer_allocator", hdrs = [ "ibuffer_allocator.h", ], - copts = micro_copts(), deps = [ "//tensorflow/lite/c:common", ], ) -cc_library( +tflm_cc_library( name = "non_persistent_arena_buffer_allocator", srcs = ["non_persistent_arena_buffer_allocator.cc"], hdrs = ["non_persistent_arena_buffer_allocator.h"], - copts = micro_copts(), deps 
= [ ":ibuffer_allocator", "//tensorflow/lite/c:common", @@ -36,7 +34,7 @@ cc_library( ], ) -cc_test( +tflm_cc_test( name = "non_persistent_arena_buffer_allocator_test", srcs = ["non_persistent_arena_buffer_allocator_test.cc"], deps = [ @@ -46,11 +44,10 @@ cc_test( ], ) -cc_library( +tflm_cc_library( name = "persistent_arena_buffer_allocator", srcs = ["persistent_arena_buffer_allocator.cc"], hdrs = ["persistent_arena_buffer_allocator.h"], - copts = micro_copts(), deps = [ ":ibuffer_allocator", "//tensorflow/lite/c:common", @@ -61,7 +58,7 @@ cc_library( ], ) -cc_test( +tflm_cc_test( name = "persistent_arena_buffer_allocator_test", srcs = ["persistent_arena_buffer_allocator_test.cc"], deps = [ @@ -71,7 +68,7 @@ cc_test( ], ) -cc_library( +tflm_cc_library( name = "simple_memory_allocator", srcs = [ "single_arena_buffer_allocator.cc", @@ -79,7 +76,6 @@ cc_library( hdrs = [ "single_arena_buffer_allocator.h", ], - copts = micro_copts(), deps = [ ":ibuffer_allocator", "//tensorflow/lite/c:common", @@ -90,7 +86,7 @@ cc_library( ], ) -cc_test( +tflm_cc_test( name = "simple_memory_allocator_test", srcs = [ "single_arena_buffer_allocator_test.cc", @@ -102,7 +98,7 @@ cc_test( ], ) -cc_library( +tflm_cc_library( name = "recording_simple_memory_allocator", srcs = [ "recording_single_arena_buffer_allocator.cc", @@ -110,7 +106,6 @@ cc_library( hdrs = [ "recording_single_arena_buffer_allocator.h", ], - copts = micro_copts(), deps = [ ":simple_memory_allocator", "//tensorflow/lite/kernels/internal:compatibility", @@ -119,7 +114,7 @@ cc_library( ], ) -cc_test( +tflm_cc_test( name = "recording_simple_memory_allocator_test", srcs = [ "recording_single_arena_buffer_allocator_test.cc", diff --git a/tensorflow/lite/micro/arena_allocator/non_persistent_arena_buffer_allocator.h b/tensorflow/lite/micro/arena_allocator/non_persistent_arena_buffer_allocator.h index ebd376466b6..69b049404c0 100644 --- a/tensorflow/lite/micro/arena_allocator/non_persistent_arena_buffer_allocator.h +++ b/tensorflow/lite/micro/arena_allocator/non_persistent_arena_buffer_allocator.h @@ -74,8 +74,6 @@ class NonPersistentArenaBufferAllocator : public INonPersistentBufferAllocator { // takes in account any temporary allocations. size_t GetAvailableMemory(size_t alignment) const override; - TF_LITE_REMOVE_VIRTUAL_DELETE - private: // The memory arena that this allocator manages. uint8_t* const buffer_head_; @@ -97,6 +95,8 @@ class NonPersistentArenaBufferAllocator : public INonPersistentBufferAllocator { // Count of outstanding temp buffers. int temp_buffer_count_ = 0; bool resizable_buffer_allocated_ = false; + + TF_LITE_REMOVE_VIRTUAL_DELETE }; } // namespace tflite diff --git a/tensorflow/lite/micro/arena_allocator/persistent_arena_buffer_allocator.h b/tensorflow/lite/micro/arena_allocator/persistent_arena_buffer_allocator.h index 2c8e3dca53b..a86d425d7c6 100644 --- a/tensorflow/lite/micro/arena_allocator/persistent_arena_buffer_allocator.h +++ b/tensorflow/lite/micro/arena_allocator/persistent_arena_buffer_allocator.h @@ -39,7 +39,6 @@ class PersistentArenaBufferAllocator : public IPersistentBufferAllocator { // Returns the size of all persistent allocations in bytes. size_t GetPersistentUsedBytes() const override; - TF_LITE_REMOVE_VIRTUAL_DELETE private: // The memory arena that this allocator manages. 
uint8_t* const buffer_head_; @@ -51,6 +50,8 @@ class PersistentArenaBufferAllocator : public IPersistentBufferAllocator { // So in essence, the allocated region grows from the bottom and emulates // SingleArenaBufferAllocator's persistent part. uint8_t* tail_temp_; + + TF_LITE_REMOVE_VIRTUAL_DELETE }; } // namespace tflite diff --git a/tensorflow/lite/micro/arena_allocator/single_arena_buffer_allocator.h b/tensorflow/lite/micro/arena_allocator/single_arena_buffer_allocator.h index a2e39588963..771c2deb436 100644 --- a/tensorflow/lite/micro/arena_allocator/single_arena_buffer_allocator.h +++ b/tensorflow/lite/micro/arena_allocator/single_arena_buffer_allocator.h @@ -110,8 +110,6 @@ class SingleArenaBufferAllocator : public INonPersistentBufferAllocator, // account any temporary allocations. size_t GetUsedBytes() const; - TF_LITE_REMOVE_VIRTUAL_DELETE - protected: // Returns a pointer to the current end of the head buffer. uint8_t* head() const; @@ -137,6 +135,8 @@ class SingleArenaBufferAllocator : public INonPersistentBufferAllocator, intptr_t temp_buffer_ptr_check_sum_ = 0; // Count of outstanding temp buffers. int temp_buffer_count_ = 0; + + TF_LITE_REMOVE_VIRTUAL_DELETE }; } // namespace tflite diff --git a/tensorflow/lite/micro/benchmarks/BUILD b/tensorflow/lite/micro/benchmarks/BUILD index d0f67b6d60e..bc7e539dd0f 100644 --- a/tensorflow/lite/micro/benchmarks/BUILD +++ b/tensorflow/lite/micro/benchmarks/BUILD @@ -1,5 +1,11 @@ # Description: # TensorFlow Lite microcontroller benchmarks. + +load("//tensorflow/lite/micro:build_def.bzl", + "tflm_cc_binary", + "tflm_cc_library", +) + package( # Disabling layering_check because of http://b/177257332 features = ["-layering_check"], @@ -11,7 +17,7 @@ package_group( packages = ["//tensorflow/lite/micro"], ) -cc_library( +tflm_cc_library( name = "micro_benchmark", hdrs = [ "micro_benchmark.h", @@ -29,7 +35,7 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "keyword_scrambled_model_data", srcs = [ "//tensorflow/lite/micro/models:generated_keyword_scrambled_model_cc", @@ -42,7 +48,7 @@ cc_library( ], ) -cc_binary( +tflm_cc_binary( name = "keyword_benchmark", srcs = ["keyword_benchmark.cc"], deps = [ @@ -57,7 +63,7 @@ cc_binary( ], ) -cc_library( +tflm_cc_library( name = "keyword_scrambled_8bit_model_data", srcs = [ "//tensorflow/lite/micro/models:generated_keyword_scrambled_8bit_model_cc", @@ -68,7 +74,7 @@ cc_library( visibility = ["//visibility:private"], ) -cc_binary( +tflm_cc_binary( name = "keyword_benchmark_8bit", srcs = ["keyword_benchmark_8bit.cc"], deps = [ @@ -82,7 +88,7 @@ cc_binary( ], ) -cc_binary( +tflm_cc_binary( name = "person_detection_benchmark", srcs = ["person_detection_benchmark.cc"], deps = [ diff --git a/tensorflow/lite/micro/build_def.bzl b/tensorflow/lite/micro/build_def.bzl index 82768f14e52..37be5cb36bf 100644 --- a/tensorflow/lite/micro/build_def.bzl +++ b/tensorflow/lite/micro/build_def.bzl @@ -1,11 +1,47 @@ -def micro_copts(): +def tflm_copts(): + """Returns the default copts for targets in TFLM. + + This function returns the default copts used by tflm_cc_* targets in TFLM. + It is typically unnecessary to use this function directly; however, it may + be useful when additively overriding the defaults for a particular target. 
+ """ return [ + "-fno-exceptions", "-Wall", "-Wno-unused-parameter", "-Wnon-virtual-dtor", "-DFLATBUFFERS_LOCALE_INDEPENDENT=0", ] +def tflm_defines(): + return [ + "TF_LITE_STATIC_MEMORY=1", + ] + select({ + "//:with_compression_enabled": ["USE_TFLM_COMPRESSION=1"], + "//conditions:default": [], + }) + +def tflm_cc_binary(copts = tflm_copts(), defines = tflm_defines(), **kwargs): + native.cc_binary( + copts = copts, + defines = defines, + **kwargs + ) + +def tflm_cc_library(copts = tflm_copts(), defines = tflm_defines(), **kwargs): + native.cc_library( + copts = copts, + defines = defines, + **kwargs + ) + +def tflm_cc_test(copts = tflm_copts(), defines = tflm_defines(), **kwargs): + native.cc_test( + copts = copts, + defines = defines, + **kwargs + ) + def generate_cc_arrays(name, src, out, visibility = None): native.genrule( name = name, @@ -70,7 +106,7 @@ def tflm_kernel_cc_library( all_srcs[target] = all_target_srcs - native.cc_library( + tflm_cc_library( name = name, srcs = select(all_srcs), hdrs = hdrs, diff --git a/tensorflow/lite/micro/compression/BUILD b/tensorflow/lite/micro/compression/BUILD index c5bf221686b..6278310e215 100644 --- a/tensorflow/lite/micro/compression/BUILD +++ b/tensorflow/lite/micro/compression/BUILD @@ -1,3 +1,7 @@ +load("//tensorflow/lite/micro:build_def.bzl", + "tflm_cc_library", + "tflm_cc_test", +) load( "@flatbuffers//:build_defs.bzl", "flatbuffer_cc_library", @@ -19,7 +23,7 @@ flatbuffer_cc_library( srcs = ["metadata.fbs"], ) -cc_library( +tflm_cc_library( # The header-only library generated by flatc in ":metadata_cc" is saved to # the source tree and comitted to git as "metadata_saved.h", which is used # by code which builds via the Make build system, which has no means of @@ -50,7 +54,7 @@ flatbuffer_py_library( srcs = ["metadata.fbs"], ) -cc_test( +tflm_cc_test( name = "metadata_test_cc", size = "small", srcs = ["metadata_test.cc"], @@ -75,6 +79,7 @@ py_binary( "@absl_py//absl/flags", "@absl_py//absl/logging", "@flatbuffers//:runtime_py", + requirement("bitarray"), requirement("numpy"), ], ) @@ -92,30 +97,28 @@ py_test( ) py_binary( - name = "discretize", + name = "view", srcs = [ - "discretize.py", + "view.py", ], deps = [ ":metadata_py", "//tensorflow/lite/python:schema_py", "@absl_py//absl:app", - "@absl_py//absl/flags", - "@absl_py//absl/logging", - "@flatbuffers//:runtime_py", - requirement("numpy"), + requirement("bitarray"), ], ) -py_binary( - name = "view", +py_test( + name = "view_test", + size = "small", srcs = [ - "view.py", + "view_test.py", ], deps = [ - ":metadata_py", - "//tensorflow/lite/python:schema_py", - "@absl_py//absl:app", + ":test_models", + ":view", + "@absl_py//absl/testing:absltest", ], ) diff --git a/tensorflow/lite/micro/compression/compress.py b/tensorflow/lite/micro/compression/compress.py index 70e7593147c..01c3415bef7 100644 --- a/tensorflow/lite/micro/compression/compress.py +++ b/tensorflow/lite/micro/compression/compress.py @@ -31,6 +31,9 @@ from functools import reduce from typing import Sequence import math +import os +import sys +import textwrap from tflite_micro.tensorflow.lite.micro.compression import ( lib, @@ -246,4 +249,9 @@ def main(argv): if __name__ == "__main__": + name = os.path.basename(sys.argv[0]) + usage = textwrap.dedent(f"""\ + Usage: {name} [--tensors=] [--alt_axis_tensors=] + Compress a .tflite model.""") + sys.modules['__main__'].__doc__ = usage absl.app.run(main) diff --git a/tensorflow/lite/micro/compression/metadata_generated.h b/tensorflow/lite/micro/compression/metadata_generated.h 
deleted file mode 100644 index 6b3af3b3e2e..00000000000 --- a/tensorflow/lite/micro/compression/metadata_generated.h +++ /dev/null @@ -1,228 +0,0 @@ -// automatically generated by the FlatBuffers compiler, do not modify - - -#ifndef FLATBUFFERS_GENERATED_METADATA_TFLITE_MICRO_COMPRESSION_H_ -#define FLATBUFFERS_GENERATED_METADATA_TFLITE_MICRO_COMPRESSION_H_ - -#include "flatbuffers/flatbuffers.h" - -// Ensure the included flatbuffers.h is the same version as when this file was -// generated, otherwise it may not be compatible. -static_assert(FLATBUFFERS_VERSION_MAJOR == 23 && - FLATBUFFERS_VERSION_MINOR == 5 && - FLATBUFFERS_VERSION_REVISION == 26, - "Non-compatible flatbuffers version included"); - -namespace tflite { -namespace micro { -namespace compression { - -struct Metadata; -struct MetadataBuilder; - -struct Subgraph; -struct SubgraphBuilder; - -struct LutTensor; -struct LutTensorBuilder; - -struct Metadata FLATBUFFERS_FINAL_CLASS : private ::flatbuffers::Table { - typedef MetadataBuilder Builder; - enum FlatBuffersVTableOffset FLATBUFFERS_VTABLE_UNDERLYING_TYPE { - VT_SUBGRAPHS = 4 - }; - const ::flatbuffers::Vector<::flatbuffers::Offset> *subgraphs() const { - return GetPointer> *>(VT_SUBGRAPHS); - } - bool Verify(::flatbuffers::Verifier &verifier) const { - return VerifyTableStart(verifier) && - VerifyOffset(verifier, VT_SUBGRAPHS) && - verifier.VerifyVector(subgraphs()) && - verifier.VerifyVectorOfTables(subgraphs()) && - verifier.EndTable(); - } -}; - -struct MetadataBuilder { - typedef Metadata Table; - ::flatbuffers::FlatBufferBuilder &fbb_; - ::flatbuffers::uoffset_t start_; - void add_subgraphs(::flatbuffers::Offset<::flatbuffers::Vector<::flatbuffers::Offset>> subgraphs) { - fbb_.AddOffset(Metadata::VT_SUBGRAPHS, subgraphs); - } - explicit MetadataBuilder(::flatbuffers::FlatBufferBuilder &_fbb) - : fbb_(_fbb) { - start_ = fbb_.StartTable(); - } - ::flatbuffers::Offset Finish() { - const auto end = fbb_.EndTable(start_); - auto o = ::flatbuffers::Offset(end); - return o; - } -}; - -inline ::flatbuffers::Offset CreateMetadata( - ::flatbuffers::FlatBufferBuilder &_fbb, - ::flatbuffers::Offset<::flatbuffers::Vector<::flatbuffers::Offset>> subgraphs = 0) { - MetadataBuilder builder_(_fbb); - builder_.add_subgraphs(subgraphs); - return builder_.Finish(); -} - -inline ::flatbuffers::Offset CreateMetadataDirect( - ::flatbuffers::FlatBufferBuilder &_fbb, - const std::vector<::flatbuffers::Offset> *subgraphs = nullptr) { - auto subgraphs__ = subgraphs ? 
_fbb.CreateVector<::flatbuffers::Offset>(*subgraphs) : 0; - return tflite::micro::compression::CreateMetadata( - _fbb, - subgraphs__); -} - -struct Subgraph FLATBUFFERS_FINAL_CLASS : private ::flatbuffers::Table { - typedef SubgraphBuilder Builder; - enum FlatBuffersVTableOffset FLATBUFFERS_VTABLE_UNDERLYING_TYPE { - VT_LUT_TENSORS = 4 - }; - const ::flatbuffers::Vector<::flatbuffers::Offset> *lut_tensors() const { - return GetPointer> *>(VT_LUT_TENSORS); - } - bool Verify(::flatbuffers::Verifier &verifier) const { - return VerifyTableStart(verifier) && - VerifyOffset(verifier, VT_LUT_TENSORS) && - verifier.VerifyVector(lut_tensors()) && - verifier.VerifyVectorOfTables(lut_tensors()) && - verifier.EndTable(); - } -}; - -struct SubgraphBuilder { - typedef Subgraph Table; - ::flatbuffers::FlatBufferBuilder &fbb_; - ::flatbuffers::uoffset_t start_; - void add_lut_tensors(::flatbuffers::Offset<::flatbuffers::Vector<::flatbuffers::Offset>> lut_tensors) { - fbb_.AddOffset(Subgraph::VT_LUT_TENSORS, lut_tensors); - } - explicit SubgraphBuilder(::flatbuffers::FlatBufferBuilder &_fbb) - : fbb_(_fbb) { - start_ = fbb_.StartTable(); - } - ::flatbuffers::Offset Finish() { - const auto end = fbb_.EndTable(start_); - auto o = ::flatbuffers::Offset(end); - return o; - } -}; - -inline ::flatbuffers::Offset CreateSubgraph( - ::flatbuffers::FlatBufferBuilder &_fbb, - ::flatbuffers::Offset<::flatbuffers::Vector<::flatbuffers::Offset>> lut_tensors = 0) { - SubgraphBuilder builder_(_fbb); - builder_.add_lut_tensors(lut_tensors); - return builder_.Finish(); -} - -inline ::flatbuffers::Offset CreateSubgraphDirect( - ::flatbuffers::FlatBufferBuilder &_fbb, - const std::vector<::flatbuffers::Offset> *lut_tensors = nullptr) { - auto lut_tensors__ = lut_tensors ? _fbb.CreateVector<::flatbuffers::Offset>(*lut_tensors) : 0; - return tflite::micro::compression::CreateSubgraph( - _fbb, - lut_tensors__); -} - -struct LutTensor FLATBUFFERS_FINAL_CLASS : private ::flatbuffers::Table { - typedef LutTensorBuilder Builder; - enum FlatBuffersVTableOffset FLATBUFFERS_VTABLE_UNDERLYING_TYPE { - VT_TENSOR = 4, - VT_VALUE_BUFFER = 6, - VT_INDEX_BITWIDTH = 8 - }; - int32_t tensor() const { - return GetField(VT_TENSOR, 0); - } - uint32_t value_buffer() const { - return GetField(VT_VALUE_BUFFER, 0); - } - uint8_t index_bitwidth() const { - return GetField(VT_INDEX_BITWIDTH, 0); - } - bool Verify(::flatbuffers::Verifier &verifier) const { - return VerifyTableStart(verifier) && - VerifyField(verifier, VT_TENSOR, 4) && - VerifyField(verifier, VT_VALUE_BUFFER, 4) && - VerifyField(verifier, VT_INDEX_BITWIDTH, 1) && - verifier.EndTable(); - } -}; - -struct LutTensorBuilder { - typedef LutTensor Table; - ::flatbuffers::FlatBufferBuilder &fbb_; - ::flatbuffers::uoffset_t start_; - void add_tensor(int32_t tensor) { - fbb_.AddElement(LutTensor::VT_TENSOR, tensor, 0); - } - void add_value_buffer(uint32_t value_buffer) { - fbb_.AddElement(LutTensor::VT_VALUE_BUFFER, value_buffer, 0); - } - void add_index_bitwidth(uint8_t index_bitwidth) { - fbb_.AddElement(LutTensor::VT_INDEX_BITWIDTH, index_bitwidth, 0); - } - explicit LutTensorBuilder(::flatbuffers::FlatBufferBuilder &_fbb) - : fbb_(_fbb) { - start_ = fbb_.StartTable(); - } - ::flatbuffers::Offset Finish() { - const auto end = fbb_.EndTable(start_); - auto o = ::flatbuffers::Offset(end); - return o; - } -}; - -inline ::flatbuffers::Offset CreateLutTensor( - ::flatbuffers::FlatBufferBuilder &_fbb, - int32_t tensor = 0, - uint32_t value_buffer = 0, - uint8_t index_bitwidth = 0) { - 
LutTensorBuilder builder_(_fbb); - builder_.add_value_buffer(value_buffer); - builder_.add_tensor(tensor); - builder_.add_index_bitwidth(index_bitwidth); - return builder_.Finish(); -} - -inline const tflite::micro::compression::Metadata *GetMetadata(const void *buf) { - return ::flatbuffers::GetRoot(buf); -} - -inline const tflite::micro::compression::Metadata *GetSizePrefixedMetadata(const void *buf) { - return ::flatbuffers::GetSizePrefixedRoot(buf); -} - -inline bool VerifyMetadataBuffer( - ::flatbuffers::Verifier &verifier) { - return verifier.VerifyBuffer(nullptr); -} - -inline bool VerifySizePrefixedMetadataBuffer( - ::flatbuffers::Verifier &verifier) { - return verifier.VerifySizePrefixedBuffer(nullptr); -} - -inline void FinishMetadataBuffer( - ::flatbuffers::FlatBufferBuilder &fbb, - ::flatbuffers::Offset root) { - fbb.Finish(root); -} - -inline void FinishSizePrefixedMetadataBuffer( - ::flatbuffers::FlatBufferBuilder &fbb, - ::flatbuffers::Offset root) { - fbb.FinishSizePrefixed(root); -} - -} // namespace compression -} // namespace micro -} // namespace tflite - -#endif // FLATBUFFERS_GENERATED_METADATA_TFLITE_MICRO_COMPRESSION_H_ diff --git a/tensorflow/lite/micro/compression/test_models.py b/tensorflow/lite/micro/compression/test_models.py index e3dc951a369..b25782cfbc6 100644 --- a/tensorflow/lite/micro/compression/test_models.py +++ b/tensorflow/lite/micro/compression/test_models.py @@ -30,6 +30,9 @@ def build(spec: dict) -> bytearray: A tflite flatbuffer. """ root = tflite.ModelT() + description = spec.get("description") + if description is not None: + root.description = description root.operatorCodes = [] for id, operator_code in spec["operator_codes"].items(): @@ -57,7 +60,7 @@ def build(spec: dict) -> bytearray: for id, tensor in subgraph["tensors"].items(): assert id == len(subgraph_t.tensors) tensor_t = tflite.TensorT() - tensor_t.name = tensor.get("name", f"tensor{id}") + tensor_t.name = tensor.get("name", None) tensor_t.shape = tensor["shape"] tensor_t.type = tensor["type"] tensor_t.buffer = tensor["buffer"] diff --git a/tensorflow/lite/micro/compression/view.py b/tensorflow/lite/micro/compression/view.py index 5719b0841af..9de3acbc445 100644 --- a/tensorflow/lite/micro/compression/view.py +++ b/tensorflow/lite/micro/compression/view.py @@ -12,9 +12,12 @@ # See the License for the specific language governing permissions and # limitations under the License. -import pprint import bitarray import bitarray.util +import pprint +import textwrap +import os +import sys import lib from tensorflow.lite.micro.compression import metadata_py_generated as compression_schema @@ -53,22 +56,41 @@ def unpack_TensorType(type): return lut[type] +def _decode_name(name): + """Returns name as a str or 'None'. + + The flatbuffer library returns names as bytes objects or None. This function + returns a str, decoded from the bytes object, or None. 
+ """ + if name is None: + return None + else: + return str(name, encoding="utf-8") + + def unpack_tensors(tensors): result = [] for index, t in enumerate(tensors): d = { - "_index": index, - "name": t.name.decode("utf-8"), + "_tensor": index, + "name": _decode_name(t.name), "type": unpack_TensorType(t.type), - "variable": t.isVariable, "shape": unpack_array(t.shape), "buffer": t.buffer, } - if t.quantization is not None: - d["quantization"] = [ - unpack_array(t.quantization.scale), - unpack_array(t.quantization.zeroPoint) - ] + + if t.isVariable: + d["is_variable"] = True + else: + # don't display this unusual field + pass + + if t.quantization is not None and t.quantization.scale is not None: + d["quantization"] = { + "scale": unpack_array(t.quantization.scale), + "zero": unpack_array(t.quantization.zeroPoint), + "dimension": t.quantization.quantizedDimension, + } result.append(d) return result @@ -78,7 +100,7 @@ def unpack_subgraphs(subgraphs): for index, s in enumerate(subgraphs): d = { "_index": index, - "name": s.name, + "name": _decode_name(s.name), # "inputs": s.inputs, # "outputs": s.outputs, "operators": unpack_operators(s.operators), @@ -92,7 +114,7 @@ def unpack_metadata(metadata): if metadata is None: return None return [{ - "name": m.name.decode("utf-8"), + "name": _decode_name(m.name), "buffer": m.buffer } for m in metadata] @@ -157,8 +179,8 @@ def unpack_buffers(model, compression_metadata=None, unpacked_metadata=None): buffers = model.buffers result = [] for index, b in enumerate(buffers): - d = {"buffer": index} - d = d | {"bytes": len(b.data) if b.data is not None else 0} + d = {"_buffer": index} + d = d | {"_bytes": len(b.data) if b.data is not None else 0} d = d | {"data": unpack_array(b.data)} if index == compression_metadata: if unpacked_metadata is not None: @@ -184,12 +206,20 @@ def get_compression_metadata_buffer(model): if model.metadata is None: return None for item in model.metadata: - if item.name.decode("utf-8") == "COMPRESSION_METADATA": + if _decode_name(item.name) == "COMPRESSION_METADATA": return item.buffer return None -def print_model(model, format=None): +def create_dictionary(flatbuffer: memoryview) -> dict: + """Returns a human-readable dictionary from the provided model flatbuffer. + + This function transforms a .tflite model flatbuffer into a Python dictionary. + When pretty-printed, this dictionary offers an easily interpretable view of + the model. 
+ """ + model = tflite_schema.ModelT.InitFromPackedBuf(flatbuffer, 0) + comp_metadata_index = get_compression_metadata_buffer(model) comp_metadata_unpacked = None if comp_metadata_index is not None: @@ -201,30 +231,33 @@ def print_model(model, format=None): output = { "description": - model.description.decode("utf-8"), + model.description, "version": - model.version, + model.version, "operator_codes": - unpack_list(model.operatorCodes), + unpack_list(model.operatorCodes), "metadata": - unpack_metadata(model.metadata), + unpack_metadata(model.metadata), "subgraphs": - unpack_subgraphs(model.subgraphs), + unpack_subgraphs(model.subgraphs), "buffers": - unpack_buffers(model, comp_metadata_index, comp_metadata_unpacked), + unpack_buffers(model, comp_metadata_index, comp_metadata_unpacked), } - pprint.pprint(output, width=90, sort_dicts=False, compact=True) + return output def main(argv): path = argv[1] - with open(path, 'rb') as file: - model = tflite_schema.ModelT.InitFromPackedBuf(file.read(), 0) - - print_model(model) + with open(path, 'rb') as flatbuffer: + d = create_dictionary(memoryview(flatbuffer.read())) + pprint.pprint(d, width=90, sort_dicts=False, compact=True) if __name__ == "__main__": + name = os.path.basename(sys.argv[0]) + usage = textwrap.dedent(f"""\ + Usage: {name} + Print a visualization of a .tflite model.""") + sys.modules['__main__'].__doc__ = usage absl.app.run(main) - sys.exit(rc) diff --git a/tensorflow/lite/micro/compression/view_test.py b/tensorflow/lite/micro/compression/view_test.py new file mode 100644 index 00000000000..47c02cfe5ea --- /dev/null +++ b/tensorflow/lite/micro/compression/view_test.py @@ -0,0 +1,88 @@ +# Copyright 2024 The TensorFlow Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +from absl.testing import absltest + +import test_models +import view + +_MODEL = { + "description": "Test model", + "operator_codes": { + 0: { + "builtin_code": 0, + }, + 1: { + "builtin_code": 1, + }, + }, + "subgraphs": { + 0: { + "operators": { + 0: { + "opcode_index": 1, + "inputs": ( + 0, + 1, + ), + "outputs": (3, ), + }, + 1: { + "opcode_index": 0, + "inputs": ( + 3, + 2, + ), + "outputs": (4, ), + }, + }, + "tensors": { + 0: { + "shape": (16, 1), + "type": 1, + "buffer": 1, + }, + 1: { + "shape": (16, 1), + "type": 1, + "buffer": 1, + }, + }, + }, + }, + "buffers": { + 0: bytes(), + 1: bytes(i for i in range(1, 16)), + } +} + + +class UnitTests(absltest.TestCase): + + def testHelloWorld(self): + self.assertTrue(True) + + def testSmokeTest(self): + flatbuffer = test_models.build(_MODEL) + view.create_dictionary(memoryview(flatbuffer)) + + def testStrippedDescription(self): + stripped = _MODEL.copy() + del stripped["description"] + flatbuffer = test_models.build(stripped) + view.create_dictionary(memoryview(flatbuffer)) + + +if __name__ == "__main__": + absltest.main() diff --git a/tensorflow/lite/micro/docs/compression.md b/tensorflow/lite/micro/docs/compression.md index 2fc15281554..267bdfb1f7b 100644 --- a/tensorflow/lite/micro/docs/compression.md +++ b/tensorflow/lite/micro/docs/compression.md @@ -129,7 +129,7 @@ value: 2 4 4 10 1 7 99 10 2 4 ### Per-channel Quantized Tensor Value Tables For per-channel quantized tensors, a `value table` is present for each channel. -All of the the `value tables` are concatenated together into a single contiguous +All of the `value tables` are concatenated together into a single contiguous set of values. The number of elements in each `value table` is always identical, with zero value padding added to the end of a `value table` as necessary. 
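To make the concatenation rule described in that doc change concrete, here is a minimal Python sketch. It is illustrative only and not part of the patch: the helper name `concatenate_value_tables`, the example channel values, and the choice of a common table size of 2**index_bitwidth are all assumptions made for clarity.

```python
# Illustrative sketch (not from the patch): pad each per-channel value table
# with zeros to a common size, then concatenate into one contiguous run.
# The helper name and example values are hypothetical.

def concatenate_value_tables(per_channel_values, index_bitwidth):
  """Pads each channel's value table to 2**index_bitwidth and concatenates."""
  table_size = 2**index_bitwidth
  concatenated = []
  for values in per_channel_values:
    assert len(values) <= table_size
    # Zero padding goes at the end of each channel's table.
    concatenated.extend(list(values) + [0] * (table_size - len(values)))
  return concatenated


# Three channels with 2-bit indices: every table is padded to 4 elements.
print(concatenate_value_tables([[3, -3, 7], [1, -1], [5, 2, -2, 9]], 2))
# [3, -3, 7, 0, 1, -1, 0, 0, 5, 2, -2, 9]
```

With 2-bit indices each table holds at most four entries, so the three example channels above pack into a single contiguous run of twelve values, matching the "identical size, zero-padded" rule stated in the documentation.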
diff --git a/tensorflow/lite/micro/examples/hello_world/BUILD b/tensorflow/lite/micro/examples/hello_world/BUILD index 988b7dd6b62..52eb91e0e27 100644 --- a/tensorflow/lite/micro/examples/hello_world/BUILD +++ b/tensorflow/lite/micro/examples/hello_world/BUILD @@ -4,7 +4,8 @@ load("@rules_python//python:defs.bzl", "py_binary") load("@tflm_pip_deps//:requirements.bzl", "requirement") load( "//tensorflow/lite/micro:build_def.bzl", - "micro_copts", + "tflm_cc_library", + "tflm_cc_test", ) package( @@ -13,7 +14,7 @@ package( licenses = ["notice"], ) -cc_library( +tflm_cc_library( name = "model", srcs = [ "//tensorflow/lite/micro/examples/hello_world/models:generated_hello_world_float_model_cc", @@ -23,10 +24,9 @@ cc_library( "//tensorflow/lite/micro/examples/hello_world/models:generated_hello_world_float_model_hdr", "//tensorflow/lite/micro/examples/hello_world/models:generated_hello_world_int8_model_hdr", ], - copts = micro_copts(), ) -cc_test( +tflm_cc_test( name = "hello_world_test", srcs = [ "hello_world_test.cc", diff --git a/tensorflow/lite/micro/examples/memory_footprint/BUILD b/tensorflow/lite/micro/examples/memory_footprint/BUILD index d45507be443..89efee8dfe4 100644 --- a/tensorflow/lite/micro/examples/memory_footprint/BUILD +++ b/tensorflow/lite/micro/examples/memory_footprint/BUILD @@ -1,7 +1,7 @@ load( "//tensorflow/lite/micro:build_def.bzl", "generate_cc_arrays", - "micro_copts", + "tflm_cc_library", ) package( @@ -20,7 +20,7 @@ generate_cc_arrays( out = "models/simple_add_model_model_data.h", ) -cc_library( +tflm_cc_library( name = "simple_add_model_data", srcs = [ ":generated_simple_add_model_cc", @@ -28,7 +28,6 @@ cc_library( hdrs = [ ":generated_simple_add_model_hdr", ], - copts = micro_copts(), ) cc_binary( diff --git a/tensorflow/lite/micro/examples/micro_speech/BUILD b/tensorflow/lite/micro/examples/micro_speech/BUILD index 10d1c1052ec..d7270f7f3a7 100644 --- a/tensorflow/lite/micro/examples/micro_speech/BUILD +++ b/tensorflow/lite/micro/examples/micro_speech/BUILD @@ -2,7 +2,11 @@ # TensorFlow Lite microcontroller example. 
load("@rules_python//python:defs.bzl", "py_binary", "py_test") load("@tflm_pip_deps//:requirements.bzl", "requirement") -load("//tensorflow/lite/micro:build_def.bzl", "generate_cc_arrays") +load("//tensorflow/lite/micro:build_def.bzl", + "generate_cc_arrays", + "tflm_cc_library", + "tflm_cc_test", +) package( default_visibility = ["//visibility:public"], @@ -107,7 +111,7 @@ generate_cc_arrays( out = "models/audio_preprocessor_int8_model_data.h", ) -cc_library( +tflm_cc_library( name = "micro_speech_model_data", srcs = [ ":generated_micro_speech_model_cc", @@ -117,7 +121,7 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "audio_preprocessor_model_data", srcs = [ ":generated_audio_preprocessor_model_cc", @@ -127,7 +131,7 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "audio_sample_test_data_30ms", srcs = [ ":generated_no_30ms_wav_cc", @@ -139,7 +143,7 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "audio_sample_test_data_1000ms", srcs = [ ":generated_no_1000ms_wav_cc", @@ -155,14 +159,14 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "micro_model_settings", hdrs = [ "micro_model_settings.h", ], ) -cc_test( +tflm_cc_test( name = "micro_speech_test", srcs = [ "micro_speech_test.cc", diff --git a/tensorflow/lite/micro/examples/person_detection/BUILD b/tensorflow/lite/micro/examples/person_detection/BUILD index 449c4be11da..cfbbd3380aa 100644 --- a/tensorflow/lite/micro/examples/person_detection/BUILD +++ b/tensorflow/lite/micro/examples/person_detection/BUILD @@ -1,6 +1,11 @@ # Description: # TensorFlow Lite for Microcontrollers Vision Example. -load("//tensorflow/lite/micro:build_def.bzl", "generate_cc_arrays") +load("//tensorflow/lite/micro:build_def.bzl", + "generate_cc_arrays", + "tflm_cc_binary", + "tflm_cc_library", + "tflm_cc_test", +) package( default_visibility = ["//visibility:public"], @@ -9,7 +14,7 @@ package( licenses = ["notice"], ) -cc_library( +tflm_cc_library( name = "model_settings", srcs = [ "model_settings.cc", @@ -43,7 +48,7 @@ generate_cc_arrays( out = "testdata/person_image_data.h", ) -cc_library( +tflm_cc_library( name = "person_detect_model_data", srcs = [ "//tensorflow/lite/micro/models:generated_person_detect_model_cc", @@ -53,7 +58,7 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "simple_images_test_data", srcs = [ ":generated_no_person_bmp_cc", @@ -68,7 +73,7 @@ cc_library( ], ) -cc_test( +tflm_cc_test( name = "person_detection_test", srcs = ["person_detection_test.cc"], deps = [ @@ -84,7 +89,7 @@ cc_test( ], ) -cc_library( +tflm_cc_library( name = "image_provider", srcs = [ "image_provider.cc", @@ -98,7 +103,7 @@ cc_library( ], ) -cc_test( +tflm_cc_test( name = "image_provider_test", srcs = [ "image_provider_test.cc", @@ -111,7 +116,7 @@ cc_test( ], ) -cc_library( +tflm_cc_library( name = "detection_responder", srcs = [ "detection_responder.cc", @@ -125,7 +130,7 @@ cc_library( ], ) -cc_test( +tflm_cc_test( name = "detection_responder_test", srcs = [ "detection_responder_test.cc", @@ -136,7 +141,7 @@ cc_test( ], ) -cc_binary( +tflm_cc_binary( name = "person_detection", srcs = [ "main.cc", diff --git a/tensorflow/lite/micro/hexdump.cc b/tensorflow/lite/micro/hexdump.cc new file mode 100644 index 00000000000..fd0f6f7c84e --- /dev/null +++ b/tensorflow/lite/micro/hexdump.cc @@ -0,0 +1,103 @@ +// Copyright 2024 The TensorFlow Authors. All Rights Reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +#include "tensorflow/lite/micro/hexdump.h" + +#include <algorithm> +#include <cctype> + +#include "tensorflow/lite/micro/debug_log.h" +#include "tensorflow/lite/micro/static_vector.h" + +namespace { + +tflite::Span<char> output(const tflite::Span<char>& buf, const char* format, + ...) { + // Writes formatted output, printf-style, to either a buffer or DebugLog. + // Writes to DebugLog if the buffer data pointer is null. Does not exceed + // the size of the buffer. Returns the unused remainder of the buffer, or a + // buffer with a null data pointer in the case of printing to DebugLog. + + tflite::Span<char> result{nullptr, 0}; + + va_list args; + va_start(args, format); + + if (buf.data() == nullptr) { + DebugLog(format, args); + result = {nullptr, 0}; + } else { + size_t len = DebugVsnprintf(buf.data(), buf.size(), format, args); + // Returns the number of characters that would have been written if + // there were enough room, so cap it at the size of the buffer in order to + // know how much was actually written. + size_t consumed = std::min(len, buf.size()); + result = {buf.data() + consumed, buf.size() - consumed}; + } + + va_end(args); + return result; +} + +} // end anonymous namespace + +tflite::Span<char> tflite::hexdump(const tflite::Span<const std::byte> region, + const tflite::Span<char> out) { + tflite::Span<char> buffer{out}; + std::size_t byte_nr = 0; + constexpr int per_line = 16; + const int lines = (region.size() + per_line - 1) / per_line; // round up + + for (int line = 0; line < lines; ++line) { + tflite::StaticVector<char, per_line> ascii; + + // print address + buffer = output(buffer, "%08X:", line); + + for (int pos = 0; pos < per_line; ++pos) { + if (byte_nr < region.size()) { + // print byte + int as_int = static_cast<int>(region[byte_nr++]); + buffer = output(buffer, " %02X", as_int); + + // buffer an ascii printable value + char c{'.'}; + if (std::isprint(as_int)) { + c = static_cast<char>(as_int); + } + ascii.push_back(c); + } else { + buffer = output(buffer, " "); + } + + // print extra space in middle of the line + if (pos == per_line / 2 - 1) { + buffer = output(buffer, " "); + } + } + + // print the ascii value + buffer = output(buffer, " "); + for (const auto& c : ascii) { + buffer = output(buffer, "%c", c); + } + buffer = output(buffer, "%c", '\n'); + } + + return {out.data(), out.size() - buffer.size()}; +} + +void tflite::hexdump(const tflite::Span<const std::byte> region) { + hexdump(region, {nullptr, 0}); +} diff --git a/tensorflow/lite/micro/hexdump.h b/tensorflow/lite/micro/hexdump.h new file mode 100644 index 00000000000..0bdfcc47c05 --- /dev/null +++ b/tensorflow/lite/micro/hexdump.h @@ -0,0 +1,35 @@ +// Copyright 2024 The TensorFlow Authors. All Rights Reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and +// limitations under the License. + +#ifndef TENSORFLOW_LITE_MICRO_HEXDUMP_H_ +#define TENSORFLOW_LITE_MICRO_HEXDUMP_H_ + +#include <cstddef> + +#include "tensorflow/lite/micro/span.h" + +namespace tflite { + +// Displays the contents of a memory region, formatted in hexadecimal and ASCII +// in a style matching Python's hexdump module, using DebugLog(). +void hexdump(Span<const std::byte> region); + +// Writes the contents of a memory region, formatted in hexadecimal and ASCII +// in a style matching Python's hexdump module, to a buffer. Returns the portion +// of the buffer written. +Span<char> hexdump(Span<const std::byte> region, Span<char> buffer); + +} // end namespace tflite + +#endif // TENSORFLOW_LITE_MICRO_HEXDUMP_H_ diff --git a/tensorflow/lite/micro/hexdump_test.cc b/tensorflow/lite/micro/hexdump_test.cc new file mode 100644 index 00000000000..89d3a0404c5 --- /dev/null +++ b/tensorflow/lite/micro/hexdump_test.cc @@ -0,0 +1,58 @@ +// Copyright 2024 The TensorFlow Authors. All Rights Reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +#include "tensorflow/lite/micro/hexdump.h" + +#include <array> + +#include "tensorflow/lite/micro/span.h" +#include "tensorflow/lite/micro/testing/micro_test.h" + +constexpr tflite::Span<const char> input{ + "This is an input string for testing."}; + +const tflite::Span<const std::byte> region{ + reinterpret_cast<const std::byte*>(input.data()), input.size()}; + +// clang-format off +constexpr tflite::Span<const char> expected{ + "00000000: 54 68 69 73 20 69 73 20 61 6E 20 69 6E 70 75 74 This is an input\n" + "00000001: 20 73 74 72 69 6E 67 20 66 6F 72 20 74 65 73 74 string for test\n" + "00000002: 69 6E 67 2E 00 ing..\n"}; +// clang-format on + +// String literals have null terminators, but don't expect a null terminator +// in the hexdump output. +constexpr tflite::Span<const char> expected_no_null{expected.data(), + expected.size() - 1}; + +TF_LITE_MICRO_TESTS_BEGIN + +TF_LITE_MICRO_TEST(TestOutputToBuffer) { + // Allocate a buffer with an arbitrary amount of extra room so the test has + // the possibility of failing if hexdump mishandles the extra space. + std::array<char, 512> buffer; + + tflite::Span<char> output = tflite::hexdump(region, buffer); + TF_LITE_MICRO_EXPECT(output == expected_no_null); +} + +TF_LITE_MICRO_TEST(TestOutputToDebugLog) { + // There's no easy way to verify DebugLog output; however, test it anyhow to + // catch an outright crash, and so the output appears in the log should + // someone wish to examine it.
+ tflite::hexdump(region); +} + +TF_LITE_MICRO_TESTS_END diff --git a/tensorflow/lite/micro/integration_tests/seanet/add/BUILD b/tensorflow/lite/micro/integration_tests/seanet/add/BUILD index fecc94a60d9..4d1fe9b8af8 100644 --- a/tensorflow/lite/micro/integration_tests/seanet/add/BUILD +++ b/tensorflow/lite/micro/integration_tests/seanet/add/BUILD @@ -3,7 +3,8 @@ load( "//tensorflow/lite/micro:build_def.bzl", "generate_cc_arrays", - "micro_copts", + "tflm_cc_library", + "tflm_cc_test", ) package( @@ -829,7 +830,7 @@ generate_cc_arrays( out = "add16_golden_int16_test_data.h", ) -cc_library( +tflm_cc_library( name = "models_and_testdata", srcs = [ "generated_add0_golden_int16_test_data_cc", @@ -971,15 +972,13 @@ cc_library( "generated_add9_input1_int16_test_data_hdr", "generated_add9_model_data_hdr", ], - copts = micro_copts(), ) -cc_test( +tflm_cc_test( name = "integration_test", srcs = [ "integration_tests.cc", ], - copts = micro_copts(), deps = [ ":models_and_testdata", "//python/tflite_micro:python_ops_resolver", diff --git a/tensorflow/lite/micro/integration_tests/seanet/conv/BUILD b/tensorflow/lite/micro/integration_tests/seanet/conv/BUILD index 1170d3c99ba..533cbe59268 100644 --- a/tensorflow/lite/micro/integration_tests/seanet/conv/BUILD +++ b/tensorflow/lite/micro/integration_tests/seanet/conv/BUILD @@ -3,7 +3,8 @@ load( "//tensorflow/lite/micro:build_def.bzl", "generate_cc_arrays", - "micro_copts", + "tflm_cc_library", + "tflm_cc_test", ) package( @@ -805,7 +806,7 @@ generate_cc_arrays( out = "conv21_golden_int16_test_data.h", ) -cc_library( +tflm_cc_library( name = "models_and_testdata", srcs = [ "generated_conv0_golden_int16_test_data_cc", @@ -943,15 +944,13 @@ cc_library( "generated_conv9_input0_int16_test_data_hdr", "generated_conv9_model_data_hdr", ], - copts = micro_copts(), ) -cc_test( +tflm_cc_test( name = "integration_test", srcs = [ "integration_tests.cc", ], - copts = micro_copts(), deps = [ ":models_and_testdata", "//python/tflite_micro:python_ops_resolver", diff --git a/tensorflow/lite/micro/integration_tests/seanet/leaky_relu/BUILD b/tensorflow/lite/micro/integration_tests/seanet/leaky_relu/BUILD index 597aa7e4bc6..3d0761c27c5 100644 --- a/tensorflow/lite/micro/integration_tests/seanet/leaky_relu/BUILD +++ b/tensorflow/lite/micro/integration_tests/seanet/leaky_relu/BUILD @@ -3,7 +3,8 @@ load( "//tensorflow/lite/micro:build_def.bzl", "generate_cc_arrays", - "micro_copts", + "tflm_cc_library", + "tflm_cc_test", ) package( @@ -841,7 +842,7 @@ generate_cc_arrays( out = "leaky_relu22_golden_int16_test_data.h", ) -cc_library( +tflm_cc_library( name = "models_and_testdata", srcs = [ "generated_leaky_relu0_golden_int16_test_data_cc", @@ -985,15 +986,13 @@ cc_library( "generated_leaky_relu9_input0_int16_test_data_hdr", "generated_leaky_relu9_model_data_hdr", ], - copts = micro_copts(), ) -cc_test( +tflm_cc_test( name = "integration_test", srcs = [ "integration_tests.cc", ], - copts = micro_copts(), deps = [ ":models_and_testdata", "//python/tflite_micro:python_ops_resolver", diff --git a/tensorflow/lite/micro/integration_tests/seanet/pad/BUILD b/tensorflow/lite/micro/integration_tests/seanet/pad/BUILD index 1aadbd57a34..cc7303f39c3 100644 --- a/tensorflow/lite/micro/integration_tests/seanet/pad/BUILD +++ b/tensorflow/lite/micro/integration_tests/seanet/pad/BUILD @@ -3,7 +3,8 @@ load( "//tensorflow/lite/micro:build_def.bzl", "generate_cc_arrays", - "micro_copts", + "tflm_cc_library", + "tflm_cc_test", ) package( @@ -697,7 +698,7 @@ generate_cc_arrays( out = 
"pad18_golden_int16_test_data.h", ) -cc_library( +tflm_cc_library( name = "models_and_testdata", srcs = [ "generated_pad0_golden_int16_test_data_cc", @@ -817,15 +818,13 @@ cc_library( "generated_pad9_input0_int16_test_data_hdr", "generated_pad9_model_data_hdr", ], - copts = micro_copts(), ) -cc_test( +tflm_cc_test( name = "integration_test", srcs = [ "integration_tests.cc", ], - copts = micro_copts(), deps = [ ":models_and_testdata", "//python/tflite_micro:python_ops_resolver", diff --git a/tensorflow/lite/micro/integration_tests/seanet/quantize/BUILD b/tensorflow/lite/micro/integration_tests/seanet/quantize/BUILD index 58217b05fb9..b7a001c9eec 100644 --- a/tensorflow/lite/micro/integration_tests/seanet/quantize/BUILD +++ b/tensorflow/lite/micro/integration_tests/seanet/quantize/BUILD @@ -3,7 +3,8 @@ load( "//tensorflow/lite/micro:build_def.bzl", "generate_cc_arrays", - "micro_copts", + "tflm_cc_library", + "tflm_cc_test", ) package( @@ -85,7 +86,7 @@ generate_cc_arrays( out = "quantize1_golden_int32_test_data.h", ) -cc_library( +tflm_cc_library( name = "models_and_testdata", srcs = [ "generated_quantize0_golden_int16_test_data_cc", @@ -103,15 +104,13 @@ cc_library( "generated_quantize1_input0_int16_test_data_hdr", "generated_quantize1_model_data_hdr", ], - copts = micro_copts(), ) -cc_test( +tflm_cc_test( name = "integration_test", srcs = [ "integration_tests.cc", ], - copts = micro_copts(), deps = [ ":models_and_testdata", "//python/tflite_micro:python_ops_resolver", diff --git a/tensorflow/lite/micro/integration_tests/seanet/strided_slice/BUILD b/tensorflow/lite/micro/integration_tests/seanet/strided_slice/BUILD index a70568c0803..5e446ead4c0 100644 --- a/tensorflow/lite/micro/integration_tests/seanet/strided_slice/BUILD +++ b/tensorflow/lite/micro/integration_tests/seanet/strided_slice/BUILD @@ -3,7 +3,8 @@ load( "//tensorflow/lite/micro:build_def.bzl", "generate_cc_arrays", - "micro_copts", + "tflm_cc_library", + "tflm_cc_test", ) package( @@ -1237,7 +1238,7 @@ generate_cc_arrays( out = "strided_slice33_golden_int16_test_data.h", ) -cc_library( +tflm_cc_library( name = "models_and_testdata", srcs = [ "generated_strided_slice0_golden_int16_test_data_cc", @@ -1447,15 +1448,13 @@ cc_library( "generated_strided_slice9_input0_int16_test_data_hdr", "generated_strided_slice9_model_data_hdr", ], - copts = micro_copts(), ) -cc_test( +tflm_cc_test( name = "integration_test", srcs = [ "integration_tests.cc", ], - copts = micro_copts(), deps = [ ":models_and_testdata", "//python/tflite_micro:python_ops_resolver", diff --git a/tensorflow/lite/micro/integration_tests/seanet/sub/BUILD b/tensorflow/lite/micro/integration_tests/seanet/sub/BUILD index a7e7f1c2215..4a61e8fc08d 100644 --- a/tensorflow/lite/micro/integration_tests/seanet/sub/BUILD +++ b/tensorflow/lite/micro/integration_tests/seanet/sub/BUILD @@ -3,7 +3,8 @@ load( "//tensorflow/lite/micro:build_def.bzl", "generate_cc_arrays", - "micro_copts", + "tflm_cc_library", + "tflm_cc_test", ) package( @@ -253,7 +254,7 @@ generate_cc_arrays( out = "sub4_golden_int16_test_data.h", ) -cc_library( +tflm_cc_library( name = "models_and_testdata", srcs = [ "generated_sub0_golden_int16_test_data_cc", @@ -299,15 +300,13 @@ cc_library( "generated_sub4_input1_int16_test_data_hdr", "generated_sub4_model_data_hdr", ], - copts = micro_copts(), ) -cc_test( +tflm_cc_test( name = "integration_test", srcs = [ "integration_tests.cc", ], - copts = micro_copts(), deps = [ ":models_and_testdata", "//python/tflite_micro:python_ops_resolver", diff --git 
a/tensorflow/lite/micro/integration_tests/seanet/transpose_conv/BUILD b/tensorflow/lite/micro/integration_tests/seanet/transpose_conv/BUILD index 174596d09fe..872d99357f1 100644 --- a/tensorflow/lite/micro/integration_tests/seanet/transpose_conv/BUILD +++ b/tensorflow/lite/micro/integration_tests/seanet/transpose_conv/BUILD @@ -3,7 +3,8 @@ load( "//tensorflow/lite/micro:build_def.bzl", "generate_cc_arrays", - "micro_copts", + "tflm_cc_library", + "tflm_cc_test", ) package( @@ -253,7 +254,7 @@ generate_cc_arrays( out = "transpose_conv4_golden_int16_test_data.h", ) -cc_library( +tflm_cc_library( name = "models_and_testdata", srcs = [ "generated_transpose_conv0_golden_int16_test_data_cc", @@ -299,15 +300,13 @@ cc_library( "generated_transpose_conv4_input1_int16_test_data_hdr", "generated_transpose_conv4_model_data_hdr", ], - copts = micro_copts(), ) -cc_test( +tflm_cc_test( name = "integration_test", srcs = [ "integration_tests.cc", ], - copts = micro_copts(), deps = [ ":models_and_testdata", "//python/tflite_micro:python_ops_resolver", diff --git a/tensorflow/lite/micro/integration_tests/templates/BUILD.mako b/tensorflow/lite/micro/integration_tests/templates/BUILD.mako index 7f5096573a7..532186ab0a9 100644 --- a/tensorflow/lite/micro/integration_tests/templates/BUILD.mako +++ b/tensorflow/lite/micro/integration_tests/templates/BUILD.mako @@ -3,7 +3,8 @@ load( "//tensorflow/lite/micro:build_def.bzl", "generate_cc_arrays", - "micro_copts", + "tflm_cc_library", + "tflm_cc_test", ) package( @@ -46,7 +47,7 @@ generate_cc_arrays( ) % endfor -cc_library( +tflm_cc_library( name = "models_and_testdata", srcs = [ % for target in targets: @@ -66,15 +67,13 @@ cc_library( "generated_${target}_golden_${output_dtype}_test_data_hdr", % endfor ], - copts = micro_copts(), ) -cc_test( +tflm_cc_test( name = "integration_test", srcs = [ "integration_tests.cc", ], - copts = micro_copts(), deps = [ ":models_and_testdata", "//tensorflow/lite/micro:micro_framework", diff --git a/tensorflow/lite/micro/kernels/BUILD b/tensorflow/lite/micro/kernels/BUILD index 29a369eda33..31c695336c5 100644 --- a/tensorflow/lite/micro/kernels/BUILD +++ b/tensorflow/lite/micro/kernels/BUILD @@ -1,5 +1,10 @@ load("@bazel_skylib//rules:common_settings.bzl", "string_flag") -load("//tensorflow/lite/micro:build_def.bzl", "micro_copts", "tflm_kernel_cc_library") +load("//tensorflow/lite/micro:build_def.bzl", + "tflm_cc_library", + "tflm_cc_test", + "tflm_copts", + "tflm_kernel_cc_library", +) load( "//tensorflow:extra_rules.bzl", "tflm_kernel_friends", @@ -37,17 +42,16 @@ package_group( # C++ libraries #################################### -cc_library( +tflm_cc_library( name = "activation_utils", hdrs = ["activation_utils.h"], - copts = micro_copts(), deps = [ "//tensorflow/lite/c:common", "//tensorflow/lite/kernels/internal:cppmath", ], ) -cc_library( +tflm_cc_library( name = "circular_buffer_flexbuffers_generated_data", srcs = [ "circular_buffer_flexbuffers_generated_data.cc", @@ -55,10 +59,9 @@ cc_library( hdrs = [ "circular_buffer_flexbuffers_generated_data.h", ], - copts = micro_copts(), ) -cc_library( +tflm_cc_library( name = "conv_test_common", srcs = [ "conv_test_common.cc", @@ -66,7 +69,6 @@ cc_library( hdrs = [ "conv_test.h", ], - copts = micro_copts(), deps = [ ":kernel_runner", ":micro_ops", @@ -76,7 +78,7 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "detection_postprocess_flexbuffers_generated_data", srcs = [ "detection_postprocess_flexbuffers_generated_data.cc", @@ -84,16 +86,14 @@ cc_library( hdrs = [ 
"detection_postprocess_flexbuffers_generated_data.h", ], - copts = micro_copts(), ) -cc_library( +tflm_cc_library( name = "kernel_runner", srcs = [ "kernel_runner.cc", ], hdrs = ["kernel_runner.h"], - copts = micro_copts(), visibility = [ "//visibility:public", ], @@ -108,13 +108,12 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "kernel_util", srcs = [ "kernel_util.cc", ], hdrs = ["kernel_util.h"], - copts = micro_copts(), visibility = [ ":kernel_friends", ":tflite_micro", @@ -129,21 +128,19 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "lstm_shared", hdrs = [ "lstm_shared.h", ], - copts = micro_copts(), visibility = ["//tensorflow/lite/micro/kernels/testdata:__pkg__"], ) -cc_library( +tflm_cc_library( name = "lstm_eval_test_lib", hdrs = [ "lstm_eval_test.h", ], - copts = micro_copts(), deps = [ ":kernel_util", ":micro_ops", @@ -153,13 +150,12 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "micro_tensor_utils", srcs = [ "micro_tensor_utils.cc", ], hdrs = ["micro_tensor_utils.h"], - copts = micro_copts(), deps = [ "//tensorflow/lite/c:common", "//tensorflow/lite/core:macros", @@ -341,7 +337,7 @@ tflm_kernel_cc_library( xtensa_hifi_5_config(): glob(["xtensa/**/*.cc"]), xtensa_vision_p6_config(): glob(["xtensa/**/*.cc"]), }, - copts = micro_copts() + select({ + copts = tflm_copts() + select({ xtensa_fusion_f1_config(): HIFI4_COPTS, xtensa_hifi_3_config(): HIFI3_COPTS, xtensa_hifi_3z_config(): HIFI4_COPTS, @@ -391,7 +387,7 @@ tflm_kernel_cc_library( # C++ tests #################################### -cc_test( +tflm_cc_test( name = "activations_test", srcs = [ "activations_test.cc", @@ -405,7 +401,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "add_n_test", srcs = [ "add_n_test.cc", @@ -420,7 +416,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "add_test", srcs = [ "add_test.cc", @@ -434,7 +430,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "arg_min_max_test", srcs = [ "arg_min_max_test.cc", @@ -448,7 +444,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "batch_matmul_test", srcs = [ "batch_matmul_test.cc", @@ -462,7 +458,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "batch_to_space_nd_test", srcs = [ "batch_to_space_nd_test.cc", @@ -476,7 +472,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "broadcast_args_test", srcs = [ "broadcast_args_test.cc", @@ -490,7 +486,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "broadcast_to_test", srcs = [ "broadcast_to_test.cc", @@ -504,7 +500,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "call_once_test", srcs = ["call_once_test.cc"], deps = [ @@ -516,7 +512,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "cast_test", srcs = ["cast_test.cc"], deps = [ @@ -529,7 +525,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "ceil_test", srcs = [ "ceil_test.cc", @@ -543,7 +539,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "circular_buffer_test", srcs = [ "circular_buffer_test.cc", @@ -559,7 +555,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "comparisons_test", srcs = [ "comparisons_test.cc", @@ -572,7 +568,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "concatenation_test", srcs = [ "concatenation_test.cc", @@ -585,7 +581,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "conv_test", srcs = [ "conv_test.cc", @@ -601,7 +597,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "cumsum_test", srcs = [ "cumsum_test.cc", @@ -616,7 +612,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "depth_to_space_test", srcs = [ "depth_to_space_test.cc", @@ -631,7 +627,7 @@ cc_test( ], ) -cc_test( 
+tflm_cc_test( name = "depthwise_conv_test", srcs = [ "depthwise_conv_test.cc", @@ -645,7 +641,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "dequantize_test", srcs = [ "dequantize_test.cc", @@ -658,7 +654,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "detection_postprocess_test", srcs = [ "detection_postprocess_test.cc", @@ -674,7 +670,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "div_test", srcs = [ "div_test.cc", @@ -687,7 +683,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "elementwise_test", srcs = ["elementwise_test.cc"], deps = [ @@ -700,7 +696,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "elu_test", srcs = [ "elu_test.cc", @@ -715,7 +711,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "embedding_lookup_test", srcs = [ "embedding_lookup_test.cc", @@ -730,7 +726,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "exp_test", srcs = ["exp_test.cc"], deps = [ @@ -743,7 +739,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "expand_dims_test", srcs = ["expand_dims_test.cc"], deps = [ @@ -756,7 +752,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "fill_test", srcs = [ "fill_test.cc", @@ -770,7 +766,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "floor_div_test", srcs = ["floor_div_test.cc"], deps = [ @@ -783,7 +779,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "floor_mod_test", srcs = ["floor_mod_test.cc"], deps = [ @@ -796,7 +792,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "floor_test", srcs = [ "floor_test.cc", @@ -810,7 +806,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "fully_connected_test", srcs = [ "fully_connected_test.cc", @@ -825,7 +821,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "gather_test", srcs = [ "gather_test.cc", @@ -840,7 +836,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "gather_nd_test", srcs = [ "gather_nd_test.cc", @@ -855,7 +851,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "hard_swish_test", srcs = ["hard_swish_test.cc"], deps = [ @@ -867,7 +863,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "if_test", srcs = ["if_test.cc"], deps = [ @@ -881,7 +877,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "l2norm_test", srcs = [ "l2norm_test.cc", @@ -895,7 +891,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "l2_pool_2d_test", srcs = [ "l2_pool_2d_test.cc", @@ -910,7 +906,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "leaky_relu_test", srcs = [ "leaky_relu_test.cc", @@ -925,7 +921,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "logical_test", srcs = [ "logical_test.cc", @@ -939,7 +935,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "logistic_test", srcs = [ "logistic_test.cc", @@ -953,7 +949,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "log_softmax_test", srcs = [ "log_softmax_test.cc", @@ -968,7 +964,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "lstm_eval_test", srcs = [ "lstm_eval_test.cc", @@ -980,7 +976,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "maximum_minimum_test", srcs = [ "maximum_minimum_test.cc", @@ -994,7 +990,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "mirror_pad_test", srcs = [ "mirror_pad_test.cc", @@ -1008,7 +1004,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "mul_test", srcs = [ "mul_test.cc", @@ -1021,7 +1017,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "neg_test", srcs = [ "neg_test.cc", @@ -1035,7 +1031,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "pack_test", srcs = [ "pack_test.cc", @@ -1049,7 +1045,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "pad_test", srcs = [ "pad_test.cc", @@ -1067,7 +1063,7 @@ cc_test( 
], ) -cc_test( +tflm_cc_test( name = "pooling_test", srcs = [ "pooling_test.cc", @@ -1080,7 +1076,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "prelu_test", srcs = [ "prelu_test.cc", @@ -1093,7 +1089,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "quantization_util_test", srcs = [ "quantization_util_test.cc", @@ -1105,7 +1101,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "quantize_test", srcs = [ "quantize_test.cc", @@ -1118,7 +1114,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "reduce_test", srcs = [ "reduce_test.cc", @@ -1132,7 +1128,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "reshape_test", srcs = [ "reshape_test.cc", @@ -1147,7 +1143,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "resize_bilinear_test", srcs = [ "resize_bilinear_test.cc", @@ -1161,7 +1157,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "resize_nearest_neighbor_test", srcs = [ "resize_nearest_neighbor_test.cc", @@ -1175,7 +1171,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "round_test", srcs = [ "round_test.cc", @@ -1189,7 +1185,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "select_test", srcs = [ "select_test.cc", @@ -1203,7 +1199,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "shape_test", srcs = ["shape_test.cc"], deps = [ @@ -1215,7 +1211,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "slice_test", srcs = ["slice_test.cc"], deps = [ @@ -1227,7 +1223,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "softmax_test", srcs = [ "softmax_test.cc", @@ -1241,7 +1237,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "space_to_batch_nd_test", srcs = [ "space_to_batch_nd_test.cc", @@ -1255,7 +1251,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "space_to_depth_test", srcs = [ "space_to_depth_test.cc", @@ -1270,7 +1266,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "split_test", srcs = [ "split_test.cc", @@ -1285,7 +1281,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "split_v_test", srcs = [ "split_v_test.cc", @@ -1300,7 +1296,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "squared_difference_test", srcs = [ "squared_difference_test.cc", @@ -1313,7 +1309,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "squeeze_test", srcs = ["squeeze_test.cc"], deps = [ @@ -1325,7 +1321,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "strided_slice_test", srcs = [ "strided_slice_test.cc", @@ -1339,7 +1335,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "sub_test", srcs = [ "sub_test.cc", @@ -1352,7 +1348,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "svdf_test", srcs = [ "svdf_test.cc", @@ -1365,7 +1361,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "tanh_test", srcs = ["tanh_test.cc"], deps = [ @@ -1376,7 +1372,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "transpose_test", srcs = ["transpose_test.cc"], deps = [ @@ -1387,7 +1383,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "transpose_conv_test", srcs = [ "transpose_conv_test.cc", @@ -1403,7 +1399,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "unidirectional_sequence_lstm_test", srcs = [ "unidirectional_sequence_lstm_test.cc", @@ -1420,7 +1416,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "unpack_test", srcs = [ "unpack_test.cc", @@ -1434,7 +1430,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "while_test", srcs = [ "while_test.cc", @@ -1448,7 +1444,7 @@ cc_test( ], ) -cc_test( +tflm_cc_test( name = "zeros_like_test", srcs = ["zeros_like_test.cc"], deps = [ diff --git a/tensorflow/lite/micro/kernels/testdata/BUILD b/tensorflow/lite/micro/kernels/testdata/BUILD index 
0c7822d3dd6..2d0e7511145 100644 --- a/tensorflow/lite/micro/kernels/testdata/BUILD +++ b/tensorflow/lite/micro/kernels/testdata/BUILD @@ -1,5 +1,8 @@ load("@rules_python//python:defs.bzl", "py_binary", "py_test") load("@tflm_pip_deps//:requirements.bzl", "requirement") +load("//tensorflow/lite/micro:build_def.bzl", + "tflm_cc_library", +) package( default_visibility = ["//tensorflow/lite/micro/kernels:__pkg__"], @@ -12,14 +15,14 @@ package( # C++ libraries #################################### -cc_library( +tflm_cc_library( name = "conv_test_data", srcs = ["conv_test_data.cc"], hdrs = ["conv_test_data.h"], deps = ["//tensorflow/lite/c:common"], ) -cc_library( +tflm_cc_library( name = "lstm_test_data", srcs = ["lstm_test_data.cc"], hdrs = [ diff --git a/tensorflow/lite/micro/memory_arena_threshold_test.cc b/tensorflow/lite/micro/memory_arena_threshold_test.cc index 2f9242781f8..19d0131d908 100644 --- a/tensorflow/lite/micro/memory_arena_threshold_test.cc +++ b/tensorflow/lite/micro/memory_arena_threshold_test.cc @@ -63,7 +63,6 @@ constexpr int kKeywordModelOnlyTotalSize = 14472; // TODO(b/207157610): replace magic number that depends on OPs constexpr int kKeywordModelOnlyTailSize = 13800; constexpr int kKeywordModelPersistentTfLiteTensorDataSize = 128; -constexpr int kKeywordModelPersistentBufferDataSize = 832; #else // Total size contributed by the keyword model excluding the // RecordingMicroAllocator's overhead. @@ -74,7 +73,6 @@ constexpr int kKeywordModelOnlyTotalSize = 14936; // TODO(b/207157610): replace magic number that depends on OPs constexpr int kKeywordModelOnlyTailSize = 14264; constexpr int kKeywordModelPersistentTfLiteTensorDataSize = 224; -constexpr int kKeywordModelPersistentBufferDataSize = 840; #endif constexpr int kKeywordModelHeadSize = 672; constexpr int kKeywordModelTfLiteTensorVariableBufferDataSize = 10240; @@ -87,6 +85,13 @@ uint8_t test_conv_tensor_arena[kTestConvModelArenaSize]; constexpr int kTestConvModelTensorCount = 15; constexpr int kTestConvModelNodeAndRegistrationCount = 7; +constexpr int kKeywordModelPersistentBufferDataSize = +#if defined(USE_TFLM_COMPRESSION) + 920; +#else + 840; +#endif + // NOTE: These values are measured on x86-64: // TODO(b/158651472): Consider auditing these values on non-64 bit systems. 
#ifdef TF_LITE_STATIC_MEMORY @@ -136,10 +141,6 @@ void EnsureAllocatedSizeThreshold(const char* allocation_type, size_t actual, // 64-bit systems should check floor and ceiling to catch memory savings: TF_LITE_MICRO_EXPECT_NEAR(actual, expected, expected * kAllocationThreshold); - if (actual != expected) { - MicroPrintf("%s threshold failed: %d != %d", allocation_type, actual, - expected); - } } else { // Non-64 bit systems should just expect allocation does not exceed the // ceiling: diff --git a/tensorflow/lite/micro/memory_planner/BUILD b/tensorflow/lite/micro/memory_planner/BUILD index 7111da4e1c2..8fa3774dd80 100644 --- a/tensorflow/lite/micro/memory_planner/BUILD +++ b/tensorflow/lite/micro/memory_planner/BUILD @@ -1,6 +1,6 @@ load( "//tensorflow/lite/micro:build_def.bzl", - "micro_copts", + "tflm_cc_library", ) package( @@ -10,19 +10,18 @@ package( licenses = ["notice"], ) -cc_library( +tflm_cc_library( name = "micro_memory_planner", hdrs = [ "micro_memory_planner.h", ], - copts = micro_copts(), deps = [ "//tensorflow/lite/c:common", "//tensorflow/lite/core/api", ], ) -cc_library( +tflm_cc_library( name = "linear_memory_planner", srcs = [ "linear_memory_planner.cc", @@ -30,7 +29,6 @@ cc_library( hdrs = [ "linear_memory_planner.h", ], - copts = micro_copts(), deps = [ ":micro_memory_planner", "//tensorflow/lite/c:common", @@ -39,7 +37,7 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "greedy_memory_planner", srcs = [ "greedy_memory_planner.cc", @@ -47,7 +45,6 @@ cc_library( hdrs = [ "greedy_memory_planner.h", ], - copts = micro_copts(), deps = [ ":micro_memory_planner", "//tensorflow/lite/micro:micro_compatibility", @@ -77,12 +74,11 @@ cc_test( ], ) -cc_library( +tflm_cc_library( name = "memory_plan_struct", hdrs = [ "memory_plan_struct.h", ], - copts = micro_copts(), deps = [ "//tensorflow/lite/c:common", "//tensorflow/lite/core/api", @@ -91,13 +87,12 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "non_persistent_buffer_planner_shim", srcs = ["non_persistent_buffer_planner_shim.cc"], hdrs = [ "non_persistent_buffer_planner_shim.h", ], - copts = micro_copts(), deps = [ ":memory_plan_struct", ":micro_memory_planner", diff --git a/tensorflow/lite/micro/micro_allocator.cc b/tensorflow/lite/micro/micro_allocator.cc index c83a009b4b6..08203285f4c 100644 --- a/tensorflow/lite/micro/micro_allocator.cc +++ b/tensorflow/lite/micro/micro_allocator.cc @@ -41,7 +41,7 @@ limitations under the License. #include #include -#include "tensorflow/lite/micro/compression/metadata_generated.h" +#include "tensorflow/lite/micro/compression/metadata_saved.h" #endif // USE_TFLM_COMPRESSION @@ -74,10 +74,10 @@ class MicroBuiltinDataAllocator : public TfLiteBridgeBuiltinDataAllocator { // of the model. 
} - TF_LITE_REMOVE_VIRTUAL_DELETE - private: IPersistentBufferAllocator* persistent_allocator_; + + TF_LITE_REMOVE_VIRTUAL_DELETE }; MicroMemoryPlanner* CreateMemoryPlanner( diff --git a/tensorflow/lite/micro/micro_mutable_op_resolver.h b/tensorflow/lite/micro/micro_mutable_op_resolver.h index f5f6e38e003..3ec00a6b614 100644 --- a/tensorflow/lite/micro/micro_mutable_op_resolver.h +++ b/tensorflow/lite/micro/micro_mutable_op_resolver.h @@ -44,8 +44,6 @@ TFLMRegistration* Register_DETECTION_POSTPROCESS(); template class MicroMutableOpResolver : public MicroOpResolver { public: - TF_LITE_REMOVE_VIRTUAL_DELETE - explicit MicroMutableOpResolver() {} const TFLMRegistration* FindOp(tflite::BuiltinOperator op) const override { @@ -704,6 +702,8 @@ class MicroMutableOpResolver : public MicroOpResolver { BuiltinOperator builtin_codes_[tOpCount]; TfLiteBridgeBuiltinParseFunction builtin_parsers_[tOpCount]; unsigned int num_buitin_ops_ = 0; + + TF_LITE_REMOVE_VIRTUAL_DELETE }; }; // namespace tflite diff --git a/tensorflow/lite/micro/python/interpreter/src/BUILD b/tensorflow/lite/micro/python/interpreter/src/BUILD index f8be0ed253a..b330c3566b3 100644 --- a/tensorflow/lite/micro/python/interpreter/src/BUILD +++ b/tensorflow/lite/micro/python/interpreter/src/BUILD @@ -1,6 +1,6 @@ load( "//tensorflow/lite/micro:build_def.bzl", - "micro_copts", + "tflm_cc_library", ) load( "//tensorflow:extra_rules.bzl", @@ -18,13 +18,12 @@ package_group( ) # TODO(b/286456378): remove once all internal usage is fixed. -cc_library( +tflm_cc_library( name = "python_ops_resolver", srcs = [], hdrs = [ "python_ops_resolver.h", ], - copts = micro_copts(), visibility = [":op_resolver_friends"], deps = [ "//python/tflite_micro:python_ops_resolver", diff --git a/tensorflow/lite/micro/test_helpers.cc b/tensorflow/lite/micro/test_helpers.cc index 2c31552e88e..63010ab2ffa 100644 --- a/tensorflow/lite/micro/test_helpers.cc +++ b/tensorflow/lite/micro/test_helpers.cc @@ -36,7 +36,7 @@ limitations under the License. 
#ifdef USE_TFLM_COMPRESSION -#include "tensorflow/lite/micro/compression/metadata_generated.h" +#include "tensorflow/lite/micro/compression/metadata_saved.h" #endif // USE_TFLM_COMPRESSION diff --git a/tensorflow/lite/micro/testing/BUILD b/tensorflow/lite/micro/testing/BUILD index 7a246990a23..d55975774ed 100644 --- a/tensorflow/lite/micro/testing/BUILD +++ b/tensorflow/lite/micro/testing/BUILD @@ -1,5 +1,9 @@ load("@rules_python//python:defs.bzl", "py_binary", "py_library") load("@tflm_pip_deps//:requirements.bzl", "requirement") +load("//tensorflow/lite/micro:build_def.bzl", + "tflm_cc_library", + "tflm_cc_test", +) load( "//tensorflow:extra_rules.bzl", "tflm_kernel_friends", @@ -26,7 +30,7 @@ package_group( packages = tflm_kernel_friends(), ) -cc_library( +tflm_cc_library( name = "micro_test", hdrs = [ "micro_test.h", @@ -46,7 +50,7 @@ cc_library( ], ) -cc_test( +tflm_cc_test( name = "util_test", srcs = [ "util_test.cc", @@ -56,7 +60,7 @@ cc_test( ], ) -cc_library( +tflm_cc_library( name = "test_conv_model", srcs = [ "test_conv_model.cc", diff --git a/tensorflow/lite/micro/tflite_bridge/BUILD b/tensorflow/lite/micro/tflite_bridge/BUILD index ea5efdf6a8f..0c8ba0a7764 100644 --- a/tensorflow/lite/micro/tflite_bridge/BUILD +++ b/tensorflow/lite/micro/tflite_bridge/BUILD @@ -1,6 +1,6 @@ load( "//tensorflow/lite/micro:build_def.bzl", - "micro_copts", + "tflm_cc_library", ) package( @@ -9,7 +9,7 @@ package( licenses = ["notice"], ) -cc_library( +tflm_cc_library( name = "flatbuffer_conversions_bridge", srcs = [ "flatbuffer_conversions_bridge.cc", @@ -17,7 +17,6 @@ cc_library( hdrs = [ "flatbuffer_conversions_bridge.h", ], - copts = micro_copts(), visibility = [ "//tensorflow/lite/micro:__pkg__", ], @@ -29,7 +28,7 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "micro_error_reporter", srcs = [ "micro_error_reporter.cc", @@ -37,7 +36,6 @@ cc_library( hdrs = [ "micro_error_reporter.h", ], - copts = micro_copts(), deps = [ "//tensorflow/lite/core/api:error_reporter", "//tensorflow/lite/micro:micro_compatibility", diff --git a/tensorflow/lite/micro/tools/benchmarking/BUILD b/tensorflow/lite/micro/tools/benchmarking/BUILD index 6691ac31814..06e09df8a94 100644 --- a/tensorflow/lite/micro/tools/benchmarking/BUILD +++ b/tensorflow/lite/micro/tools/benchmarking/BUILD @@ -1,11 +1,17 @@ -cc_library( +load("//tensorflow/lite/micro:build_def.bzl", + "tflm_cc_binary", + "tflm_cc_library", + "tflm_defines", +) + +tflm_cc_library( name = "op_resolver", hdrs = ["op_resolver.h"], visibility = ["//tensorflow/lite/micro/tools:__subpackages__"], deps = ["//tensorflow/lite/micro:op_resolvers"], ) -cc_library( +tflm_cc_library( name = "metrics", srcs = ["metrics.cc"], hdrs = ["metrics.h"], @@ -17,11 +23,11 @@ cc_library( ], ) -cc_library( +tflm_cc_library( name = "generic_benchmark_lib", srcs = ["generic_model_benchmark.cc"], hdrs = ["show_meta_data.h"], - defines = ["GENERIC_BENCHMARK_NO_META_DATA"], + defines = tflm_defines() + ["GENERIC_BENCHMARK_NO_META_DATA"], deps = [ ":metrics", ":op_resolver", @@ -37,7 +43,7 @@ cc_library( ], ) -cc_binary( +tflm_cc_binary( name = "tflm_benchmark", deps = [":generic_benchmark_lib"], ) diff --git a/tensorflow/lite/micro/tools/ci_build/test_code_style.sh b/tensorflow/lite/micro/tools/ci_build/test_code_style.sh index e06ac891bc0..9916ebce97c 100755 --- a/tensorflow/lite/micro/tools/ci_build/test_code_style.sh +++ b/tensorflow/lite/micro/tools/ci_build/test_code_style.sh @@ -47,7 +47,6 @@ tensorflow/lite/micro/tools/make/downloads/pigweed/pw_presubmit/py/pw_presubmit/ 
-e python/schema_py_generated.py \ -e python_requirements.in \ -e tensorflow/lite/micro/compression/metadata_saved.h \ - -e tensorflow/lite/micro/compression/metadata_generated.h \ -e tools/make/downloads \ -e tools/make/targets/ecm3531 \ -e BUILD\ @@ -99,9 +98,8 @@ tensorflow/lite/micro/tools/make/downloads/pigweed/pw_presubmit/py/pw_presubmit/ -e experimental \ -e schema/schema_generated.h \ -e schema/schema_utils.h \ - -e tensorflow/lite/micro/tools/layer_by_layer_schema_generated.h \ -e tensorflow/lite/micro/compression/metadata_saved.h \ - -e tensorflow/lite/micro/compression/metadata_generated.h \ + -e tensorflow/lite/micro/tools/layer_by_layer_schema_generated.h \ -e "\.inc" \ -e "\.md" \ ${EXCLUDE_SHARED_TFL_CODE} diff --git a/tensorflow/lite/micro/tools/gen_micro_mutable_op_resolver/templates/BUILD.mako b/tensorflow/lite/micro/tools/gen_micro_mutable_op_resolver/templates/BUILD.mako index 1b8302e8b27..c6531199936 100644 --- a/tensorflow/lite/micro/tools/gen_micro_mutable_op_resolver/templates/BUILD.mako +++ b/tensorflow/lite/micro/tools/gen_micro_mutable_op_resolver/templates/BUILD.mako @@ -3,7 +3,8 @@ load( "//tensorflow/lite/micro:build_def.bzl", "generate_cc_arrays", - "micro_copts", + "tflm_cc_library", + "tflm_cc_test", ) package( @@ -43,7 +44,7 @@ generate_cc_arrays( out = "${target}_golden_${output_dtype}_test_data.h", ) -cc_library( +tflm_cc_library( name = "models_and_testdata", srcs = [ "generated_${target}_model_data_cc", @@ -59,21 +60,19 @@ cc_library( "generated_${target}_golden_${output_dtype}_test_data_hdr", % endif ], - copts = micro_copts(), ) -cc_library( +tflm_cc_library( name = "gen_micro_op_resolver", hdrs = ["gen_micro_mutable_op_resolver.h",], visibility = ["//visibility:public"], ) -cc_test( +tflm_cc_test( name = "micro_mutable_op_resolver_test", srcs = [ "micro_mutable_op_resolver_test.cc", ], - copts = micro_copts(), deps = [ ":gen_micro_op_resolver", ":models_and_testdata", diff --git a/tensorflow/lite/micro/tools/make/pigweed.patch b/tensorflow/lite/micro/tools/make/pigweed.patch index d1e29304e79..9aa2044e307 100644 --- a/tensorflow/lite/micro/tools/make/pigweed.patch +++ b/tensorflow/lite/micro/tools/make/pigweed.patch @@ -14,15 +14,6 @@ diff --git a/pw_presubmit/py/pw_presubmit/format_code.py b/pw_presubmit/py/pw_pr index 19d09546..c1ff6b5a 100755 --- a/pw_presubmit/py/pw_presubmit/format_code.py +++ b/pw_presubmit/py/pw_presubmit/format_code.py -@@ -142,7 +142,7 @@ def fix_go_format(files: Iterable[Path]) -> None: - - - def _yapf(*args, **kwargs) -> subprocess.CompletedProcess: -- return log_run(['python', '-m', 'yapf', '--parallel', *args], -+ return log_run(['python', '-m', 'yapf', '--style', '{based_on_style:pep8,indent_width:2}', '--parallel', *args], - capture_output=True, - **kwargs) - @@ -229,11 +229,6 @@ def print_format_check(errors: Dict[Path, str], except ValueError: return Path(path).resolve() diff --git a/tensorflow/lite/micro/tools/project_generation/BUILD.testing b/tensorflow/lite/micro/tools/project_generation/BUILD.testing index 088b19ca127..bc7b030c936 100644 --- a/tensorflow/lite/micro/tools/project_generation/BUILD.testing +++ b/tensorflow/lite/micro/tools/project_generation/BUILD.testing @@ -1,10 +1,15 @@ # standalone BUILD file used to test project generation with bazel.
+# +load("//tensorflow/lite/micro:build_def.bzl", + "tflm_cc_library", + "tflm_copts", +) -cc_library( +tflm_cc_library( name = "libtflm", srcs = glob(["tensorflow/**/*.cc", "tensorflow/**/*.c", "third_party/**/*.cc", "third_party/**/*.c"]), hdrs = glob(["tensorflow/**/*.h", "third_party/**/*.h"]), - copts = [ + copts = tflm_copts() + [ "-Ithird_party/gemmlowp", "-Ithird_party/flatbuffers/include", "-Ithird_party/kissfft", diff --git a/third_party/python_requirements.in b/third_party/python_requirements.in index 29c081e5b18..f75d931b52b 100644 --- a/third_party/python_requirements.in +++ b/third_party/python_requirements.in @@ -26,6 +26,7 @@ # is sensitive to the Python environment (interpreter version, etc.) in which # it is run. +bitarray hexdump tensorflow twine diff --git a/third_party/python_requirements.txt b/third_party/python_requirements.txt index b0d91331ae0..487dd5635a6 100644 --- a/third_party/python_requirements.txt +++ b/third_party/python_requirements.txt @@ -15,6 +15,130 @@ astunparse==1.6.3 \ --hash=sha256:5ad93a8456f0d084c3456d059fd9a92cce667963232cbf763eac3bc5b7940872 \ --hash=sha256:c2652417f2c8b5bb325c885ae329bdf3f86424075c4fd1a128674bc6fba4b8e8 # via tensorflow +bitarray==2.9.2 \ + --hash=sha256:03adaacb79e2fb8f483ab3a67665eec53bb3fd0cd5dbd7358741aef124688db3 \ + --hash=sha256:052c5073bdcaa9dd10628d99d37a2f33ec09364b86dd1f6281e2d9f8d3db3060 \ + --hash=sha256:0a99b23ac845a9ea3157782c97465e6ae026fe0c7c4c1ed1d88f759fd6ea52d9 \ + --hash=sha256:0b3543c8a1cb286ad105f11c25d8d0f712f41c5c55f90be39f0e5a1376c7d0b0 \ + --hash=sha256:128cc3488176145b9b137fdcf54c1c201809bbb8dd30b260ee40afe915843b43 \ + --hash=sha256:1bb33673e7f7190a65f0a940c1ef63266abdb391f4a3e544a47542d40a81f536 \ + --hash=sha256:1e0b63a565e8a311cc8348ff1262d5784df0f79d64031d546411afd5dd7ef67d \ + --hash=sha256:1e497c535f2a9b68c69d36631bf2dba243e05eb343b00b9c7bbdc8c601c6802d \ + --hash=sha256:1ff9e38356cc803e06134cf8ae9758e836ccd1b793135ef3db53c7c5d71e93bc \ + --hash=sha256:21f21e7f56206be346bdbda2a6bdb2165a5e6a11821f88fd4911c5a6bbbdc7e2 \ + --hash=sha256:2c6be1b651fad8f3adb7a5aa12c65b612cd9b89530969af941844ae680f7d981 \ + --hash=sha256:2f32948c86e0d230a296686db28191b67ed229756f84728847daa0c7ab7406e3 \ + --hash=sha256:321841cdad1dd0f58fe62e80e9c9c7531f8ebf8be93f047401e930dc47425b1e \ + --hash=sha256:345c76b349ff145549652436235c5532e5bfe9db690db6f0a6ad301c62b9ef21 \ + --hash=sha256:393cb27fd859af5fd9c16eb26b1c59b17b390ff66b3ae5d0dd258270191baf13 \ + --hash=sha256:3c4344e96642e2211fb3a50558feff682c31563a4c64529a931769d40832ca79 \ + --hash=sha256:3fa909cfd675004aed8b4cc9df352415933656e0155a6209d878b7cb615c787e \ + --hash=sha256:405b83bed28efaae6d86b6ab287c75712ead0adbfab2a1075a1b7ab47dad4d62 \ + --hash=sha256:43847799461d8ba71deb4d97b47250c2c2fb66d82cd3cb8b4caf52bb97c03034 \ + --hash=sha256:461a3dafb9d5fda0bb3385dc507d78b1984b49da3fe4c6d56c869a54373b7008 \ + --hash=sha256:48a30d718d1a6dfc22a49547450107abe8f4afdf2abdcbe76eb9ed88edc49498 \ + --hash=sha256:4a22266fb416a3b6c258bf7f83c9fe531ba0b755a56986a81ad69dc0f3bcc070 \ + --hash=sha256:4b558ce85579b51a2e38703877d1e93b7728a7af664dd45a34e833534f0b755d \ + --hash=sha256:4d0e32530f941c41eddfc77600ec89b65184cb909c549336463a738fab3ed285 \ + --hash=sha256:4da73ebd537d75fa7bccfc2228fcaedea0803f21dd9d0bf0d3b67fef3c4af294 \ + --hash=sha256:4e2936f090bf3f4d1771f44f9077ebccdbc0415d2b598d51a969afcb519df505 \ + --hash=sha256:508069a04f658210fdeee85a7a0ca84db4bcc110cbb1d21f692caa13210f24a7 \ + --hash=sha256:5361413fd2ecfdf44dc8f065177dc6aba97fa80a91b815586cb388763acf7f8d \ + 
+    --hash=sha256:54e16e32e60973bb83c315de9975bc1bcfc9bd50bb13001c31da159bc49b0ca1 \
+    --hash=sha256:5b7b09489b71f9f1f64c0fa0977e250ec24500767dab7383ba9912495849cadf \
+    --hash=sha256:5cb378eaa65cd43098f11ff5d27e48ee3b956d2c00d2d6b5bfc2a09fe183be47 \
+    --hash=sha256:5d6fb422772e75385b76ad1c52f45a68bd4efafd8be8d0061c11877be74c4d43 \
+    --hash=sha256:5f4dd3af86dd8a617eb6464622fb64ca86e61ce99b59b5c35d8cd33f9c30603d \
+    --hash=sha256:603e7d640e54ad764d2b4da6b61e126259af84f253a20f512dd10689566e5478 \
+    --hash=sha256:6067f2f07a7121749858c7daa93c8774325c91590b3e81a299621e347740c2ae \
+    --hash=sha256:60df43e868a615c7e15117a1e1c2e5e11f48f6457280eba6ddf8fbefbec7da99 \
+    --hash=sha256:64115ccabbdbe279c24c367b629c6b1d3da9ed36c7420129e27c338a3971bfee \
+    --hash=sha256:6465de861aff7a2559f226b37982007417eab8c3557543879987f58b453519bd \
+    --hash=sha256:648d2f2685590b0103c67a937c2fb9e09bcc8dfb166f0c7c77bd341902a6f5b3 \
+    --hash=sha256:64b433e26993127732ac7b66a7821b2537c3044355798de7c5fcb0af34b8296f \
+    --hash=sha256:677e67f50e2559efc677a4366707070933ad5418b8347a603a49a070890b19bc \
+    --hash=sha256:6ab0f1dbfe5070db98771a56aa14797595acd45a1af9eadfb193851a270e7996 \
+    --hash=sha256:6d70b1579da7fb71be5a841a1f965d19aca0ef27f629cfc07d06b09aafd0a333 \
+    --hash=sha256:6ec84668dd7b937874a2b2c293cd14ba84f37be0d196dead852e0ada9815d807 \
+    --hash=sha256:6f71d92f533770fb027388b35b6e11988ab89242b883f48a6fe7202d238c61f8 \
+    --hash=sha256:76b76a07d4ee611405045c6950a1e24c4362b6b44808d4ad6eea75e0dbc59af4 \
+    --hash=sha256:79a9b8b05f2876c7195a2b698c47528e86a73c61ea203394ff8e7a4434bda5c8 \
+    --hash=sha256:7c1f4bf6ea8eb9d7f30808c2e9894237a96650adfecbf5f3643862dc5982f89e \
+    --hash=sha256:7dfefdcb0dc6a3ba9936063cec65a74595571b375beabe18742b3d91d087eefd \
+    --hash=sha256:7e913098de169c7fc890638ce5e171387363eb812579e637c44261460ac00aa2 \
+    --hash=sha256:7eb8be687c50da0b397d5e0ab7ca200b5ebb639e79a9f5e285851d1944c94be9 \
+    --hash=sha256:7eea9318293bc0ea6447e9ebfba600a62f3428bea7e9c6d42170ae4f481dbab3 \
+    --hash=sha256:852e202875dd6dfd6139ce7ec4e98dac2b17d8d25934dc99900831e81c3adaef \
+    --hash=sha256:856bbe1616425f71c0df5ef2e8755e878d9504d5a531acba58ab4273c52c117a \
+    --hash=sha256:87580c7f7d14f7ec401eda7adac1e2a25e95153e9c339872c8ae61b3208819a1 \
+    --hash=sha256:87abb7f80c0a042f3fe8e5264da1a2756267450bb602110d5327b8eaff7682e7 \
+    --hash=sha256:90e3a281ffe3897991091b7c46fca38c2675bfd4399ffe79dfeded6c52715436 \
+    --hash=sha256:917905de565d9576eb20f53c797c15ba88b9f4f19728acabec8d01eee1d3756a \
+    --hash=sha256:9521f49ae121a17c0a41e5112249e6fa7f6a571245b1118de81fb86e7c1bc1ce \
+    --hash=sha256:962892646599529917ef26266091e4cb3077c88b93c3833a909d68dcc971c4e3 \
+    --hash=sha256:9ae5b0657380d2581e13e46864d147a52c1e2bbac9f59b59c576e42fa7d10cf0 \
+    --hash=sha256:9bbcfc7c279e8d74b076e514e669b683f77b4a2a328585b3f16d4c5259c91222 \
+    --hash=sha256:a035da89c959d98afc813e3c62f052690d67cfd55a36592f25d734b70de7d4b0 \
+    --hash=sha256:a09c4f81635408e3387348f415521d4b94198c562c23330f560596a6aaa26eaf \
+    --hash=sha256:a23397da092ef0a8cfe729571da64c2fc30ac18243caa82ac7c4f965087506ff \
+    --hash=sha256:a484061616fb4b158b80789bd3cb511f399d2116525a8b29b6334c68abc2310f \
+    --hash=sha256:a5cc9381fd54f3c23ae1039f977bfd6d041a5c3c1518104f616643c3a5a73b15 \
+    --hash=sha256:a620d8ce4ea2f1c73c6b6b1399e14cb68c6915e2be3fad5808c2998ed55b4acf \
+    --hash=sha256:a6cc6545d6d76542aee3d18c1c9485fb7b9812b8df4ebe52c4535ec42081b48f \
+    --hash=sha256:a8873089be2aa15494c0f81af1209f6e1237d762c5065bc4766c1b84321e1b50 \
+    --hash=sha256:a8f286a51a32323715d77755ed959f94bef13972e9a2fe71b609e40e6d27957e \
+    --hash=sha256:aeb60962ec4813c539a59fbd4f383509c7222b62c3fb1faa76b54943a613e33a \
+    --hash=sha256:b069ca9bf728e0c5c5b60e00a89df9af34cc170c695c3bfa3b372d8f40288efb \
+    --hash=sha256:b0ef2d0a6f1502d38d911d25609b44c6cc27bee0a4363dd295df78b075041b60 \
+    --hash=sha256:b306c4cf66912511422060f7f5e1149c8bdb404f8e00e600561b0749fdd45659 \
+    --hash=sha256:b35bfcb08b7693ab4bf9059111a6e9f14e07d57ac93cd967c420db58ab9b71e1 \
+    --hash=sha256:b44105792fbdcfbda3e26ee88786790fda409da4c71f6c2b73888108cf8f062f \
+    --hash=sha256:b76ffec27c7450b8a334f967366a9ebadaea66ee43f5b530c12861b1a991f503 \
+    --hash=sha256:ba0734aa300757c924f3faf8148e1b8c247176a0ac8e16aefdf9c1eb19e868f7 \
+    --hash=sha256:bb198c6ed1edbcdaf3d1fa3c9c9d1cdb7e179a5134ef5ee660b53cdec43b34e7 \
+    --hash=sha256:bb6b86cfdfc503e92cb71c68766a24565359136961642504a7cc9faf936d9c88 \
+    --hash=sha256:be94e5a685e60f9d24532af8fe5c268002e9016fa80272a94727f435de3d1003 \
+    --hash=sha256:bed637b674db5e6c8a97a4a321e3e4d73e72d50b5c6b29950008a93069cc64cd \
+    --hash=sha256:c5b399ae6ab975257ec359f03b48fc00b1c1cd109471e41903548469b8feae5c \
+    --hash=sha256:c71d1cabdeee0cdda4669168618f0e46b7dace207b29da7b63aaa1adc2b54081 \
+    --hash=sha256:c7d16beeaaab15b075990cd26963d6b5b22e8c5becd131781514a00b8bdd04bd \
+    --hash=sha256:c8919fdbd3bb596b104388b56ae4b266eb28da1f2f7dff2e1f9334a21840fe96 \
+    --hash=sha256:c9b87baa7bfff9a5878fcc1bffe49ecde6e647a72a64b39a69cd8a2992a43a34 \
+    --hash=sha256:cd56b8ae87ebc71bcacbd73615098e8a8de952ecbb5785b6b4e2b07da8a06e1f \
+    --hash=sha256:cd926e8ae4d1ed1ac4a8f37212a62886292f692bc1739fde98013bf210c2d175 \
+    --hash=sha256:cf0620da2b81946d28c0b16f3e3704d38e9837d85ee4f0652816e2609aaa4fed \
+    --hash=sha256:d14c790b91f6cbcd9b718f88ed737c78939980c69ac8c7f03dd7e60040c12951 \
+    --hash=sha256:d4bba8042ea6ab331ade91bc435d81ad72fddb098e49108610b0ce7780c14e68 \
+    --hash=sha256:d527172919cdea1e13994a66d9708a80c3d33dedcf2f0548e4925e600fef3a3a \
+    --hash=sha256:d656ad38c942e38a470ddbce26b5020e08e1a7ea86b8fd413bb9024b5189993a \
+    --hash=sha256:d6fe315355cdfe3ed22ef355b8bdc81a805ca4d0949d921576560e5b227a1112 \
+    --hash=sha256:d91406f413ccbf4af6ab5ae7bc78f772a95609f9ddd14123db36ef8c37116d95 \
+    --hash=sha256:dac2399ee2889fbdd3472bfc2ede74c34cceb1ccf29a339964281a16eb1d3188 \
+    --hash=sha256:dbaf2bb71d6027152d603f1d5f31e0dfd5e50173d06f877bec484e5396d4594b \
+    --hash=sha256:e064caa55a6ed493aca1eda06f8b3f689778bc780a75e6ad7724642ba5dc62f7 \
+    --hash=sha256:e40b3cb9fa1edb4e0175d7c06345c49c7925fe93e39ef55ecb0bc40c906b0c09 \
+    --hash=sha256:e49066d251dbbe4e6e3a5c3937d85b589e40e2669ad0eef41a00f82ec17d844b \
+    --hash=sha256:e6ec283d4741befb86e8c3ea2e9ac1d17416c956d392107e45263e736954b1f7 \
+    --hash=sha256:e788608ed7767b7b3bbde6d49058bccdf94df0de9ca75d13aa99020cc7e68095 \
+    --hash=sha256:e8a9475d415ef1eaae7942df6f780fa4dcd48fce32825eda591a17abba869299 \
+    --hash=sha256:e8da5355d7d75a52df5b84750989e34e39919ec7e59fafc4c104cc1607ab2d31 \
+    --hash=sha256:ea1923d2e7880f9e1959e035da661767b5a2e16a45dfd57d6aa831e8b65ee1bf \
+    --hash=sha256:ea816dc8f8e65841a8bbdd30e921edffeeb6f76efe6a1eb0da147b60d539d1cf \
+    --hash=sha256:eb7a9d8a2e400a1026de341ad48e21670a6261a75b06df162c5c39b0d0e7c8f4 \
+    --hash=sha256:eceb551dfeaf19c609003a69a0cf8264b0efd7abc3791a11dfabf4788daf0d19 \
+    --hash=sha256:ed0f7982f10581bb16553719e5e8f933e003f5b22f7d25a68bdb30fac630a6ff \
+    --hash=sha256:f00079f8e69d75c2a417de7961a77612bb77ef46c09bc74607d86de4740771ef \
+    --hash=sha256:f0b84fc50b6dbeced4fa390688c07c10a73222810fb0e08392bd1a1b8259de36 \
+    --hash=sha256:f135e804986b12bf14f2cd1eb86674c47dea86c4c5f0fa13c88978876b97ebe6 \
+    --hash=sha256:f2de9a31c34e543ae089fd2a5ced01292f725190e379921384f695e2d7184bd3 \
+    --hash=sha256:f2f8692f95c9e377eb19ca519d30d1f884b02feb7e115f798de47570a359e43f \
+    --hash=sha256:f4dcadb7b8034aa3491ee8f5a69b3d9ba9d7d1e55c3cc1fc45be313e708277f8 \
+    --hash=sha256:f4f44381b0a4bdf64416082f4f0e7140377ae962c0ced6f983c6d7bbfc034040 \
+    --hash=sha256:f708e91fdbe443f3bec2df394ed42328fb9b0446dff5cb4199023ac6499e09fd \
+    --hash=sha256:f9346e98fc2abcef90b942973087e2462af6d3e3710e82938078d3493f7fef52 \
+    --hash=sha256:fc6d3e80dd8239850f2604833ff3168b28909c8a9357abfed95632cccd17e3e7 \
+    --hash=sha256:fe71fd4b76380c2772f96f1e53a524da7063645d647a4fcd3b651bdd80ca0f2e
+    # via -r third_party/python_requirements.in
 certifi==2023.11.17 \
     --hash=sha256:9b469f3a900bf28dc19b8cfbf8019bf47f7fdd1a65a1d4ffb98fc14166beb4d1 \
     --hash=sha256:e036ab49d5b79556f99cfc2d9320b34cfbe5be05c5871b51de9329f0603b0474