Help Needed: Issues with Detection Performance and Multi-Context HEF on Hailo Device

Hello,

I’m encountering a few challenges while working with a Hailo device and would greatly appreciate any insights or guidance.

Due to limited resources and the image processing performed on the embedded device, we are using blocking inference. This means one image is sent to the Hailo device, and detections are collected sequentially.

Here’s the issue:

  • I have a model that performs exceptionally well. However, after converting it to a HEF file, I end up with a single-context HEF (31 MB) and notice significantly reduced detection performance.
  • I tried optimizing with max optimization level and 1024 images during the conversion. This resulted in a 4-context HEF file (26 MB), but the detection performance degraded even further.

I’m wondering if this issue could be related to blocking inference.

Additionally, I’m curious about how multiple contexts are executed on the Hailo chip. Does this execution mode affect detection performance under blocking inference?

For context, I’m using:

  • Dataflow Compiler version 3.24
  • HailoRT 4.14

If anyone has encountered similar issues or can offer guidance on improving detection performance in this scenario, I would be grateful!

Thank you in advance!

I do not think so. The results should be the same no matter how you run the inference.

Multi-context models are executed one context (or model part) at a time. Each context is loaded sequentially. The image is sent to the device, intermediate data is returned to the host, and then the next context is loaded. This process continues until the entire computation is completed.
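The execution order described above can be sketched as follows. This is a conceptual model only, not the HailoRT API; the "contexts" here are plain Python callables standing in for the compiled model parts:

```python
# Conceptual sketch (NOT the HailoRT API): one frame flowing through a
# multi-context HEF. Each context is loaded onto the chip, run, and its
# intermediate result is returned to the host before the next context loads.

def run_multi_context(frame, contexts):
    data = frame
    log = []
    for i, ctx in enumerate(contexts):
        log.append(f"load context {i}")   # context switch on the chip
        log.append(f"run context {i}")    # compute this part of the network
        data = ctx(data)                  # intermediate data back to the host
    return data, log

# Toy "contexts": each is just one stage of a pipeline.
contexts = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
result, log = run_multi_context(5, contexts)
print(result)  # 9
```

The point is that the switching itself only adds latency; the numerical result is identical to running the whole model in one context.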

If you need higher throughput for a multi-context model, you can trade off latency for throughput by using batches of images. This will reduce the switching overhead.
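The amortization effect is easy to see with back-of-the-envelope arithmetic. All numbers below are illustrative assumptions, not measured Hailo figures:

```python
# Illustrative only: assume a 4-context model where each context switch
# costs 2 ms (load) and each context computes for 1.5 ms per frame.
SWITCH_MS, COMPUTE_MS, CONTEXTS = 2.0, 1.5, 4

def latency_ms(batch):
    # Each context is loaded once per batch, then processes `batch` frames.
    return CONTEXTS * (SWITCH_MS + batch * COMPUTE_MS)

fps_b1 = 1 / latency_ms(1) * 1000   # batch of 1: pay all switches per frame
fps_b8 = 8 / latency_ms(8) * 1000   # batch of 8: switch cost amortized
print(round(fps_b1, 1), round(fps_b8, 1))
```

Per-frame latency grows with the batch, but throughput roughly doubles in this toy example because the fixed switch cost is shared across eight frames.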

I would recommend updating to the latest version of our tools.

I would also recommend running the layer analysis tool to find out which layers are causing the degradation. Check out the advanced optimization tutorials.

Also make sure your pre- and post-processing work as expected. We sometimes see errors related to the BGR channel order used by OpenCV, or to normalization being applied both on-chip and on the host image (so twice), or not at all. So, make sure you verify the model after parsing and after optimization using the emulator.
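The two classic mistakes above can be sanity-checked on the host with plain NumPy, no Hailo tooling required. The mean/std values are hypothetical placeholders; substitute whatever your model was trained with:

```python
# Sanity checks for two common preprocessing bugs:
# 1) OpenCV delivers BGR, while most models expect RGB.
# 2) Normalization applied twice (on the host AND on-chip).
import numpy as np

img = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)

# 1) BGR vs RGB: reversing the last axis swaps the B and R channels.
rgb = img[..., ::-1]
print(np.array_equal(rgb[..., 0], img[..., 2]))  # True: red <- old blue

# 2) Double normalization: if the model normalizes on-chip, the host must
#    feed raw uint8 values, not pre-scaled floats.
mean, std = 127.5, 127.5                 # placeholder training parameters
once = (img.astype(np.float32) - mean) / std   # correct, applied once
twice = (once - mean) / std                     # bug: applied a second time
print(once.min() >= -1.0, once.max() <= 1.0)    # values land in [-1, 1]
print(twice.max())                              # entire range collapses near -1
```

If the values arriving at the network sit in a collapsed range like the `twice` array, accuracy will degrade even though inference "works".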

Hello,

Thank you for the answer. I will try using the latest libraries.
Just one more question.

I get the following warnings:

/media/data/onnxtohef/lib/python3.10/site-packages/tensorflow_addons/utils/tfa_eol_msg.py:23: UserWarning:

TensorFlow Addons (TFA) has ended development and introduction of new features.
TFA has entered a minimal maintenance and release mode until a planned end of life in May 2024.
Please modify downstream libraries to take dependencies from other repositories in our TensorFlow community (e.g. Keras, Keras-CV, and Keras-NLP).

For more information see: TensorFlow Addons Wind Down · Issue #2807 · tensorflow/addons · GitHub

  warnings.warn(
/media/data/onnxtohef/lib/python3.10/site-packages/tensorflow_addons/utils/ensure_tf_install.py:53: UserWarning: Tensorflow Addons supports using Python ops for all Tensorflow versions above or equal to 2.13.0 and strictly below 2.16.0 (nightly versions are not supported). 
 The versions of TensorFlow you are currently using is 2.9.2 and is not supported. 
Some things might work, some things might not.
If you were to encounter a bug, do not file an issue.
If you want to make sure you're using a tested and supported configuration, either change the TensorFlow version or the TensorFlow Addons's version. 
You can find the compatibility matrix in TensorFlow Addon's readme:
https://github.com/tensorflow/addons
  warnings.warn(
Model conversion started

And:
[warning] Output order different size

Are these warnings critical (still Dataflow Compiler version 3.25)?

If I switched to the latest Dataflow Compiler version while still using HailoRT 4.14, would that make sense?
I assume, since the major version did not change, that backward compatibility is preserved?

Thank you for your patience and answers!

One more thing: I am using ONNX version 1.12 to export the PyTorch model to ONNX. Should I perhaps downgrade or upgrade? Thanks!