Hi everyone,
I’m working on optimizing and compiling a custom ONNX model for deployment on the Hailo platform. So far, I’ve successfully parsed the ONNX into a HAR file and used a subset of images from my dataset to run the optimization.
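For context, this is roughly the flow I'm following (a sketch only: the paths, the `hw_arch` value, and the calibration array below are placeholders rather than my exact setup):

```python
# Rough sketch of my parse + optimize flow. Paths, hw_arch, and the calibration
# array are placeholders; the ClientRunner calls follow the DFC tutorial style.
import numpy as np
from hailo_sdk_client import ClientRunner

# Parse: ONNX -> HAR
runner = ClientRunner(hw_arch="hailo8")
runner.translate_onnx_model("model.onnx", "model")
runner.save_har("model.har")

# Optimize: calibration set is a preprocessed subset of my dataset
calib_data = np.load("calib_subset.npy")  # e.g. shape (N, H, W, C), float32

runner = ClientRunner(har="model.har")
runner.optimize(calib_data)
runner.save_har("model_optimized.har")
```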
However, I'm running into an error during the optimization phase, specifically related to a `matmul2` layer. Here's the error message:
```
AccelerasUnsupportedError: layer model/matmul2 does not support shift delta.
To overcome this issue you should force larger range at the inputs of the layer using command:
quantization_param([model/matmul2], force_range_in=[range_min, range_max], force_range_index=index)
Current input ranges:
input 0: [0.003, 0.006]
input 1: [-1.143, 1.311]
Suggested fix:
quantization_param([model/matmul2], force_range_in=[0.069, 0.138], force_range_index=0)
quantization_param([model/matmul2], force_range_in=[-26.209, 30.061], force_range_index=1)
```
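For what it's worth, here is how I understand those lines are meant to be applied before re-running optimization (a sketch; I'm assuming `load_model_script` accepts the script text as a string, and the paths are placeholders):

```python
# Sketch of how I'm applying the suggested fix: the two quantization_param lines
# go into my .alls model script, which is loaded before optimization.
import numpy as np
from hailo_sdk_client import ClientRunner

with open("model.alls") as f:        # .alls now contains the two suggested lines
    model_script = f.read()

calib_data = np.load("calib_subset.npy")

runner = ClientRunner(har="model.har")
runner.load_model_script(model_script)
runner.optimize(calib_data)
runner.save_har("model_optimized.har")
```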
I tried adding these `quantization_param` lines to the `.alls` file, but ran into a validation error:
```
ValidationError: 1 validation error for ModelOptimizationConfig
translation_config → layers → model/matmul2 → force_range_in
  0 must be in range (type=value_error)
```
To address that, I tried setting the first input range to `[0, 0.138]`, which got past validation, but the original shift delta error still occurs, just with updated suggested ranges.
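Concretely, the variant that passes validation looks like this on my side (again a sketch with placeholder paths; the rest of the flow is identical to the earlier snippet):

```python
# Adjusted script: range_min forced to 0 so validation passes, but optimize()
# then fails again with the same shift delta error and new suggested ranges.
import numpy as np
from hailo_sdk_client import ClientRunner

model_script = """
quantization_param([model/matmul2], force_range_in=[0, 0.138], force_range_index=0)
quantization_param([model/matmul2], force_range_in=[-26.209, 30.061], force_range_index=1)
"""

calib_data = np.load("calib_subset.npy")

runner = ClientRunner(har="model.har")
runner.load_model_script(model_script)   # no ValidationError with range_min = 0
runner.optimize(calib_data)              # shift delta error shows up here again
```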
My questions:
- Is there a best practice for choosing `force_range_in` values that avoids this shift delta issue?
- Should I be scaling the input data differently during dataset preparation? (My current preprocessing is sketched after this list.)
- Any tips for debugging or tuning range settings for custom layers like this?
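For reference on the second question, this is roughly how I build the calibration subset today; the resize shape and the `/255.0` scaling are illustrative stand-ins for my real preprocessing, and that scaling step is exactly the part I'm unsure about:

```python
# Current calibration-set preparation (sketch). Resize shape, image count, and
# the /255.0 scaling are placeholders for my actual preprocessing pipeline.
import glob
import numpy as np
from PIL import Image

def build_calib_set(image_dir, size=(224, 224), num_images=64):
    files = sorted(glob.glob(f"{image_dir}/*.jpg"))[:num_images]
    batch = []
    for path in files:
        img = Image.open(path).convert("RGB").resize(size)
        batch.append(np.asarray(img, dtype=np.float32) / 255.0)  # the scaling in question
    return np.stack(batch)  # (N, H, W, C), passed to runner.optimize()

calib_data = build_calib_set("calib_images")
```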
Any guidance would be super appreciated — thanks in advance!