Hey @Yakiv_Baiduk ,
Welcome to the Hailo Community!
When you see

```
AccelerasNumerizationError: Input zero points to concat op … are not all equal
```

it means that the branches you're trying to concatenate were quantized with different zero-points/scales, and at `optimization_level` ≥ 2 the compiler refuses to merge them. There are two common ways to fix this:
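Before the two fixes, here's a quick illustration of what's going on. This is plain Python using the textbook affine (asymmetric) uint8 quantization formula, not DFC code, and the calibration ranges are invented:

```python
# Illustration only: standard affine uint8 quantization math,
# not Hailo's internal implementation. Ranges below are made up.

def quant_params(rmin, rmax, qmax=255):
    """Return (scale, zero_point) mapping [rmin, rmax] onto [0, qmax]."""
    scale = (rmax - rmin) / qmax
    zero_point = round(-rmin / scale)
    return scale, zero_point

# Hypothetical calibration ranges of two branches feeding the same concat:
scale1, zp1 = quant_params(0.0, 1.0)    # e.g. post-sigmoid features
scale2, zp2 = quant_params(-5.0, 5.0)   # e.g. raw logits

# zp1 == 0 but zp2 == 128: the integer tensors live on different grids,
# so the compiler can't merge them into a single concat buffer.
print(zp1, zp2)
```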
**Option 1: Force everything to the same range**

If your branches actually end up in the same data range after normalization, you can simply force them to use identical quantization parameters. I usually do something like:
```
# Replace "branch1"/"branch2" with your actual layer names
quantization_param("branch1", force_range_out=[0.0, 1.0])
quantization_param("branch2", force_range_out=[0.0, 1.0])
# Sometimes the concat layer itself needs it too:
quantization_param("xfeat/concat_layer_layer_normalization1/concat_op", force_range_out=[0.0, 1.0])
model_optimization_flavor(optimization_level=2)
```
This forces them all to share the same scale and zero-point, so the concat goes through.
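The catch, sketched below in plain Python (standard affine quantization arithmetic, not DFC code, invented values): if a branch's real outputs exceed the forced range, they saturate, which is why this option is only safe when the ranges genuinely match.

```python
# Rough sketch of the trade-off when forcing force_range_out=[0.0, 1.0]
# on a branch whose real outputs exceed that range.

def quantize(x, rmin, rmax, qmax=255):
    """Affine-quantize x for the range [rmin, rmax] onto [0, qmax]."""
    scale = (rmax - rmin) / qmax
    q = round((x - rmin) / scale)
    return min(max(q, 0), qmax)  # clip: values outside the forced range saturate

in_range = quantize(0.5, 0.0, 1.0)   # well represented
clipped = quantize(3.7, 0.0, 1.0)    # fine in a native [-5, 5] range, saturates here
print(in_range, clipped)
```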
**Option 2: Per-channel encoding**

If your branches legitimately need different ranges (say, one outputs probabilities in [0, 1] and another outputs raw logits in [-5, 5]), then per-channel encoding is your friend:
```
# Let each output channel carry its own scale/zero-point
model_optimization_config(globals, output_encoding_vector=enabled)
# Per-channel output encoding requires disabling the muxer
allocator_param(enable_muxer=False)
model_optimization_flavor(optimization_level=2)
```
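Conceptually, here's what per-channel encoding buys you, again as a toy Python sketch with made-up channel ranges (standard affine quantization, not the DFC's actual internals):

```python
# Toy sketch: with per-channel encoding, each output channel keeps its own
# (scale, zero_point), so branches with very different ranges can sit side
# by side after the concat. Channel ranges below are invented.

def quant_params(rmin, rmax, qmax=255):
    scale = (rmax - rmin) / qmax
    return scale, round(-rmin / scale)

# Two channels from a [0, 1] branch followed by two from a [-5, 5] branch:
channel_ranges = [(0.0, 1.0), (0.0, 1.0), (-5.0, 5.0), (-5.0, 5.0)]
per_channel = [quant_params(lo, hi) for lo, hi in channel_ranges]

# No single shared zero point is needed; each channel decodes independently.
zero_points = [zp for _, zp in per_channel]
print(zero_points)
```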
Personally, I go with Option 1 when the ranges really are similar post-normalization, since it's cleaner at runtime. Option 2 is the better fit when you genuinely need different dynamic ranges or have a lot of concatenations in the graph.
Either way should get you past that compiler error and running smoothly at optimization level 2.
Hope this helps! Let me know how it goes.