`hailo optimize` fails for RTMDet-Seg model with Keras statistics shape error (Hailo8L)

Hello,

I am trying to run an RTMDet segmentation ONNX model on a Hailo8L.  
Parsing works after some ONNX fixes, but `hailo optimize` fails during statistics collection with a Keras shape error.  
Without a successful optimize/quantization pass, `hailo compiler` refuses to generate a HEF because there are no quantized weights.

## Environment
- Hailo DFC Version: 3.33.0
- HailoRT Version: 4.23.0


The starting point is an RTMDet-Seg ONNX model with input:

- `input`: `[1, 3, 736, 448]` (NCHW)

Initially, when running the parser, I got:

> `TypeError: The element type in the input tensor is not defined.`  
> in the path: `onnx_graph.get_tile_repeats() -> numpy_helper.to_array`

The reason was dynamic `Tile` nodes (`/Tile_1`, `/Tile_2`) whose `repeats` were computed at runtime via:

- `Shape -> ConstantOfShape -> Expand -> Concat -> Tile`

Using ONNX Runtime, I evaluated the actual `repeats` values once:

- `/Concat_17_output_0` → `int64[4] = [1, 1, 1, 1]`
- `/Concat_19_output_0` → `int64[5] = [1, 19, 1, 1, 1]`

Then I modified the ONNX so that `/Tile_1` and `/Tile_2` take static int64 initializers as their `repeats` inputs.

With this change, the `.onnx` model parses successfully.

So I now have a valid HAR (`model.har`), and the recommended end nodes are: `/Sigmoid`, `/Tile_2`, `/Concat_7`, `/Concat_5`.
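For reference, the parsing command was along the following lines (paths illustrative; flag names as I understand the DFC CLI):

```shell
hailo parser onnx rtmdet_seg_static.onnx \
    --hw-arch hailo8l \
    --end-node-names /Sigmoid /Tile_2 /Concat_7 /Concat_5
```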

When I run `hailo optimize` on this HAR, the optimization/quantization fails in the statistics collection stage with a Keras error, both with random calibration and with a custom calibration set.
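The two optimize invocations looked roughly like this (flag names as I understand the DFC CLI; the calibration file path is illustrative):

```shell
# with random calibration data
hailo optimize model.har --hw-arch hailo8l --use-random-calib-set

# with a custom calibration set
hailo optimize model.har --hw-arch hailo8l --calib-set-path calib_set.npy
```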

In both cases, I get the same error:

```
The shape of the target variable and the shape of the target value in variable.assign(value) must match. variable.shape=(1,), Received: value.shape=(6762,). Target variable: <KerasVariable shape=(1,), dtype=float32, path=mean_square_value_by_feature/accumulated_statistic>

Arguments received by ActivationOp.call():
  • inputs=tf.Tensor(shape=(8, 1, 6762, 6762), dtype=float32)
  • fully_native=None
  • encoding_tensors=None
  • skip_stats=False
  • training=False
  • kwargs={'cache_config': 'None'}
```

So it seems that `mean_square_value_by_feature` (or a similar stats routine) assumes a 1‑element variable, but receives a vector of length 6762, for a layer whose input is `(8, 1, 6762, 6762)`.

## Questions

1. Is this **statistics shape error** (target `(1,)` vs. value `(6762,)` in `mean_square_value_by_feature/accumulated_statistic` with input `(8, 1, 6762, 6762)`) a known issue in the SDK versions I'm using (DFC 3.33.0 / HailoRT 4.23.0)?

2. Is there any **workaround or internal configuration** to:
   - change the statistics mode (e.g. per-tensor instead of per-feature),
   - disable this specific statistics type for problematic layers, or
   - otherwise run a simplified quantization flow that avoids this code path?

3. Is there a **recommended model script (`--model-script`, `.alls` file)** configuration for models with large intermediate shapes like this one that would make the quantization statistics more robust?

4. If this is a bug that has already been fixed, could you point me to:
   - the SDK version, or
   - a patch / internal flag

   that resolves this issue?
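For context on question 3, this is the kind of model script I have in mind. The command name and parameters below are taken from my reading of the DFC model-script documentation and should be treated as an illustrative sketch, not a verified configuration:

```
# Illustrative .alls sketch -- command names/parameters are assumptions:
model_optimization_flavor(optimization_level=0, compression_level=0)
```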

Hey @DHag_29,

Welcome to the Hailo Community!

I’ve reviewed the error you’re encountering, and this type of failure occurs during the optimization and statistics collection phase of the compilation flow. It’s similar in nature to other optimization-time issues we’ve seen with custom models like UNet, Deeplab, Segformer, and RTMDet, where the SDK’s internal statistics logic encounters unexpected tensor shapes.

What’s happening: The error shows a shape mismatch in the mean_square_value_by_feature/accumulated_statistic calculation – specifically (1,) vs (6762,) – which suggests the statistics collection is hitting an edge case with your model’s architecture, particularly given those large intermediate tensor shapes.

Next steps: To help us investigate this properly, could you please share the following:

  • Your HAR file
  • The original ONNX model
  • Your model script (.alls file) if you’re using one
  • Your parsing command

This will allow us to reproduce the issue on our end and determine whether this is a bug in DFC 3.33.0 or a model that can't be supported. If it turns out to be an issue on our side, we'll work on a fix. We may also be able to suggest workarounds or configuration adjustments specific to your model's topology in the meantime.

Hi @omria ,

Sorry for the delayed answer. I reproduced the error with the pretrained weights of the model, since I'm not allowed to share our company-trained ONNX files; the error is the same either way.
I will send you the HAR, the ONNX, and everything else via a private message.

Thanks for your help!