Problem optimizing a model: AccelerasUnsupportedError

I run:

hailo optimize model_256.har --calib-set-path dataset_rgb_500.npy

and it fails with this error:

raise AccelerasUnsupportedError(
hailo_model_optimization.acceleras.utils.acceleras_exceptions.AccelerasUnsupportedError: layer sd_vae_256/matmul2 does not support shift delta. To overcome this issue you should force larger range at the inputs of the layer using command quantization_param([layer_name], force_range_in=[range_min, range_max], force_range_index=index). Current range of input 0 is [0.001, 0.009] and input 1 is [-7.718, 7.754]. You should increase the multiplication of these ranges by a factor of 7.738, e.g. you can apply factor of sqrt(7.738) to both inputs:
quantization_param([sd_vae_256/matmul2], force_range_in=[0.003, 0.025], force_range_index=0)
quantization_param([sd_vae_256/matmul2], force_range_in=[-21.469, 21.570], force_range_index=1)
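For reference (this sanity check is mine, not part of the original report), the ranges the error suggests are just the reported input ranges scaled by sqrt(7.738):

```python
import math

# Factor suggested by the error message: sqrt(7.738), roughly 2.78
factor = math.sqrt(7.738)

# Input 0 range [0.001, 0.009] scaled by the factor, roughly [0.003, 0.025]
lo0, hi0 = 0.001 * factor, 0.009 * factor

# Input 1 range [-7.718, 7.754] scaled by the factor, roughly [-21.469, 21.570]
lo1, hi1 = -7.718 * factor, 7.754 * factor
```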

Then I made a script from the error message:

cat scr.alls
quantization_param([sd_vae_256/matmul2], force_range_in=[0.003, 0.025], force_range_index=0)
quantization_param([sd_vae_256/matmul2], force_range_in=[-21.469, 21.570], force_range_index=1)
If I run it with the script:

hailo optimize model_256.har --calib-set-path dataset_rgb_500.npy --model-script scr.alls

a new error appears:

raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for ModelOptimizationConfig
translation_config → layers → sd_vae_256/matmul2 → force_range_in
wrong tuple length 4, expected 2 (type=value_error.tuple.length; actual_length=4; expected_length=2)
How can I solve this problem?

Hey @Mihail_Matveenko,

I see what's happening with the "wrong tuple length 4, expected 2" error: it's a parsing issue with how Pydantic ends up interpreting the layer name in the force_range_in field.

Looking at your scr.alls file, you’ve got:

quantization_param([sd_vae_256/matmul2], force_range_in=[0.003, 0.025], force_range_index=0)
quantization_param([sd_vae_256/matmul2], force_range_in=[-21.469, 21.570], force_range_index=1)

The problem is that sd_vae_256/matmul2 isn't quoted, so the parser can interpret it as a division expression (sd_vae_256 divided by matmul2) rather than a string literal. That breaks the argument parsing and turns your 2-element range into something with 4 elements.
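To see why, here is a small illustration that uses Python's own ast module as a stand-in for the script parser (the real alls parser may tokenize differently, so treat this as an analogy only): an unquoted sd_vae_256/matmul2 parses as a division of two names, not as a string.

```python
import ast

# Parse the command as a Python expression (a stand-in for the alls parser).
call = ast.parse(
    "quantization_param([sd_vae_256/matmul2], force_range_in=[0.003, 0.025])",
    mode="eval",
).body

# The first element of the list argument is a BinOp node, i.e. the
# division sd_vae_256 / matmul2, not a string constant.
first_arg = call.args[0].elts[0]
assert isinstance(first_arg, ast.BinOp)
assert isinstance(first_arg.op, ast.Div)
```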

Here’s what you need to fix:

  1. Put quotes around the layer name - this tells the parser to treat sd_vae_256/matmul2 as a string instead of trying to divide sd_vae_256 by matmul2

  2. Use proper list syntax if you want the list format, or just pass the string directly

Try this approach (targeting the same layer twice for each input):

# scr.alls
quantization_param('sd_vae_256/matmul2', force_range_in=[0.003, 0.025], force_range_index=0)
quantization_param('sd_vae_256/matmul2', force_range_in=[-21.469, 21.570], force_range_index=1)

Or if you prefer the list syntax, make sure the layer name is a quoted string inside the list:

# scr.alls
quantization_param(['sd_vae_256/matmul2'], force_range_in=[0.003, 0.025], force_range_index=0)
quantization_param(['sd_vae_256/matmul2'], force_range_in=[-21.469, 21.570], force_range_index=1)

After updating your scr.alls file, run your optimization command again:

hailo optimize model_256.har \
    --calib-set-path dataset_rgb_500.npy \
    --model-script scr.alls

That should resolve the parsing error since force_range_in will now correctly see your 2-element float lists.

Hope this clears things up!

Hey, did you solve this issue? I've been hitting the exact same error, and adding quotes to my alls file didn't change the error for me.

This advice didn't help me.

If I use:

quantization_param('sd_vae_256/matmul2', force_range_in=[0.003, 0.025], force_range_index=0)
quantization_param('sd_vae_256/matmul2', force_range_in=[-21.469, 21.570], force_range_index=1)

the error is:
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/script_parser/model_script_parser.py", line 708, in _process_void_func
    self._add_scope(new_cmd)
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/script_parser/model_script_parser.py", line 890, in _add_scope
    new_cmd.add_scope(scope_name)
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/script_parser/model_optimization_commands.py", line 348, in add_scope
    self._input_layers = [
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/script_parser/model_optimization_commands.py", line 349, in
    self.add_scope_to_layer(scope_names, layer, force=force) for layer in self._input_layers
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/script_parser/commands.py", line 316, in add_scope_to_layer
    raise AllocatorScriptParserException(f"Invalid scope name {layer_parts[0]} exists")
hailo_sdk_client.sdk_backend.sdk_backend_exceptions.AllocatorScriptParserException: Invalid scope name 'sd_vae_256 exists
And if I use:

quantization_param(['sd_vae_256/matmul2'], force_range_in=[0.003, 0.025], force_range_index=0)
quantization_param(['sd_vae_256/matmul2'], force_range_in=[-21.469, 21.570], force_range_index=1)

the error is:
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/script_parser/model_script_parser.py", line 708, in _process_void_func
    self._add_scope(new_cmd)
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/script_parser/model_script_parser.py", line 890, in _add_scope
    new_cmd.add_scope(scope_name)
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/script_parser/model_optimization_commands.py", line 348, in add_scope
    self._input_layers = [
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/script_parser/model_optimization_commands.py", line 349, in
    self.add_scope_to_layer(scope_names, layer, force=force) for layer in self._input_layers
File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/script_parser/commands.py", line 316, in add_scope_to_layer
    raise AllocatorScriptParserException(f"Invalid scope name {layer_parts[0]} exists")
hailo_sdk_client.sdk_backend.sdk_backend_exceptions.AllocatorScriptParserException: Invalid scope name 'sd_vae_256 exists
Here is the matmul2 layer definition, in case it helps:

('sd_vae_256/matmul2', OrderedDict([('type', 'matmul'), ('input', ['sd_vae_256/softmax1', 'sd_vae_256/mul_and_add3']), ('output', ['sd_vae_256/conv26']), ('input_shapes', [[-1, 1, 1024, 1024], [-1, 1, 1024, 512]]), ('output_shapes', [[-1, 1, 1024, 512]]), ('original_names', ['/mid_block/attentions.0/MatMul_1', '/mid_block/attentions.0/Transpose_5', '/mid_block/attentions.0/Reshape_4']), ('compilation_params', {'mixed_mem': 'disabled'}), ('quantization_params', {}), ('params', OrderedDict([('dynamic_weights', True), ('transpose_matmul_input', False), ('kernel_shape', [1, 1, 1024, 512]), ('groups', 1), ('input_windows', [1, 1, 1]), ('zp_correction_type', 'zp_comp_none'), ('zp_comp_rank', 0), ('activation', 'linear')]))])),

Anyone had any luck with this issue? Nothing I can find in this thread or elsewhere has been of any help.

That's odd. Let me have you try a couple of things:

First, let’s double-check the layer name:
Run this to see exactly what the compiler is picking up:

hailo analyze model_256.har --print-layers | grep -n "matmul2"

Verify it shows sd_vae_256/matmul2 exactly. If there’s any difference, use whatever path it actually outputs in your script.

If the layer name matches what you had before, try switching from single quotes to double quotes:

clear_scope()
quantization_param("sd_vae_256/matmul2", force_range_in=(0.003, 0.025), force_range_index=0)
quantization_param("sd_vae_256/matmul2", force_range_in=(-21.469, 21.570), force_range_index=1)

Let me know if that fixes it. If not, I'll file a bug report and see what else we can do to get this sorted out for you.

No, it doesn't help.

If I use:

hailo analyze model_256.har --print-layers | grep -n "matmul2"

the result is:

(hailo_virtualenv) hailo@65af14b8c215:/data/sd_vae$ hailo analyze model_256.har --print-layers | grep -n "matmul2"
usage: hailo [-h] [--version]
             {fw-update,ssb-update,fw-config,udp-rate-limiter,fw-control,fw-logger,scan,sensor-config,run,benchmark,monitor,parse-hef,measure-power,tutorial,analyze-noise,compiler,params-csv,parser,profiler,optimize,tb,visualizer,har,join,har-onnx-rt,runtime-profiler,dfc-studio,help}

hailo: error: argument {fw-update,ssb-update,fw-config,udp-rate-limiter,fw-control,fw-logger,scan,sensor-config,run,benchmark,monitor,parse-hef,measure-power,tutorial,analyze-noise,compiler,params-csv,parser,profiler,optimize,tb,visualizer,har,join,har-onnx-rt,runtime-profiler,dfc-studio,help}: invalid choice: 'analyze' (choose from 'fw-update', 'ssb-update', 'fw-config', 'udp-rate-limiter', 'fw-control', 'fw-logger', 'scan', 'sensor-config', 'run', 'benchmark', 'monitor', 'parse-hef', 'measure-power', 'tutorial', 'analyze-noise', 'compiler', 'params-csv', 'parser', 'profiler', 'optimize', 'tb', 'visualizer', 'har', 'join', 'har-onnx-rt', 'runtime-profiler', 'dfc-studio', 'help')

And if I use:

clear_scope()
quantization_param("sd_vae_256/matmul2", force_range_in=(0.003, 0.025), force_range_index=0)
quantization_param("sd_vae_256/matmul2", force_range_in=(-21.469, 21.570), force_range_index=1)

I get:

raise BackendScriptParserException(f"Parsing failed at:\n{e.markInputline()}")
hailo_sdk_client.sdk_backend.sdk_backend_exceptions.BackendScriptParserException: Parsing failed at:
!<clear_scope()

In case it helps:
[info] Hailo DFC Version: 3.31.0
[info] HailoRT Version: 4.21.0

Please try:

hailo har show model_256.har --print-layers | grep -n "matmul2"

And then re-run the compile:

hailo compiler model_256.har --calib-script overrides.py other flags

But if it has matmul2 and this didn't help, I would be happy to take a look at the har file to check what the issue is!

This command doesn't work:

(hailo_virtualenv) hailo@65af14b8c215:/data/segformer$ hailo har show model_256.har --print-layers | grep -n "matmul2"
usage: hailo har [-h] {extract,info,diff} ...
hailo har: error: argument action: invalid choice: 'show' (choose from 'extract', 'info', 'diff')
*
But I could get the information another way:

(hailo_virtualenv) hailo@65af14b8c215:/data/sd_vae$ hailo har extract model_256.har
[info] Current Time: 11:26:14, 08/22/25
[info] CPU: Architecture: x86_64, Model: Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz, Number Of Cores: 128, Utilization: 0.8%
[info] Memory: Total: 1007GB, Available: 993GB
[info] System info: OS: Linux, Kernel: 6.1.0-31-amd64
[info] Hailo DFC Version: 3.31.0
[info] HailoRT Version: 4.21.0
[info] PCIe: No Hailo PCIe device was found
[info] Running hailo har extract model_256.har
[info] HN extracted to: /data/sd_vae/sd_vae_256.hn
[info] Params extracted to: /data/sd_vae/sd_vae_256.npz

(hailo_virtualenv) hailo@65af14b8c215:/data/sd_vae$ strings sd_vae_256.hn | grep -n -i "matmul2"
1308: "output": ["sd_vae_256/matmul2"],
1372: "output": ["sd_vae_256/matmul2"],
1383: "sd_vae_256/matmul2": {
1407: "input": ["sd_vae_256/matmul2"],
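Since the extracted .hn file is JSON, the same lookup can be scripted. Here is a small sketch; find_layers is my own helper, and it assumes a top-level "layers" mapping keyed by layer name, which is what the grep hits above suggest:

```python
import json

def find_layers(hn: dict, pattern: str) -> list:
    """Return layer names containing `pattern`, assuming a top-level
    "layers" mapping keyed by layer name."""
    return [name for name in hn.get("layers", {}) if pattern in name]

# Usage against the extracted file:
# with open("sd_vae_256.hn") as f:
#     print(find_layers(json.load(f), "matmul2"))
```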

Hey @Mihail_Matveenko,
Could you send me the HAR file so I can take a look? Also, would you mind sharing your YAML and ALLS configs as well?

Hello. You asked me to send the HAR file and alls config.
But how can I do that? I can only upload photos here. (Sorry, the file you are trying to upload is not authorized (authorized extensions: jpg, jpeg, png, gif, heic, heif, webp, avif))

In the meantime I found a way to decrease the factor from 7.738 to 1.903.

hailo optimize model_256.har --calib-set-path dataset_rgb_256_900.npy --model-script scr.alls

hailo_model_optimization.acceleras.utils.acceleras_exceptions.AccelerasUnsupportedError: layer sd_vae_256/mm_h_matmul2_decompose does not support shift delta. To overcome this issue you should force larger range at the inputs of the layer using command quantization_param([layer_name], force_range_in=[range_min, range_max], force_range_index=index). Current range of input 0 is [0.000, 0.016] and input 1 is [-7.845, 8.040]. You should increase the multiplication of these ranges by a factor of 1.903, e.g. you can apply factor of sqrt(1.903) to both inputs:
quantization_param([sd_vae_256/mm_h_matmul2_decompose], force_range_in=[0.000, 0.022], force_range_index=0)
quantization_param([sd_vae_256/mm_h_matmul2_decompose], force_range_in=[-10.822, 11.091], force_range_index=1)

This is my alls file:

(hailo_virtualenv) hailo@d180d55caa05:/data/sd_vae$ cat scr.alls
pre_quantization_optimization(matmul_decomposition, layers=[sd_vae_256/matmul2], policy=enabled, precision_mode=a16_w8)
model_optimization_config(calibration, batch_size=1, calibset_size=900)
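One possible next step (untested, and assuming the unquoted bracket form from the error message actually parses, which the quoting problems above leave in doubt) would be to append the ranges the new error suggests for the decomposed layer:

```
pre_quantization_optimization(matmul_decomposition, layers=[sd_vae_256/matmul2], policy=enabled, precision_mode=a16_w8)
model_optimization_config(calibration, batch_size=1, calibset_size=900)
quantization_param([sd_vae_256/mm_h_matmul2_decompose], force_range_in=[0.000, 0.022], force_range_index=0)
quantization_param([sd_vae_256/mm_h_matmul2_decompose], force_range_in=[-10.822, 11.091], force_range_index=1)
```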

Thanks for the alls!
You can put it in a drive or something and I will take it from there.

Hello, this is the link to my har file: https://drive.google.com/file/d/1HVCvizCDY6cKn0qXIRTBkRwkgS85I1DE/view?usp=sharing