Error in HailoAvgPool while optimizing model

Hi,
while trying to optimize our model (LPRNet) I got following error:

[info] Starting Model Optimization
[warning] Reducing optimization level to 0 (the accuracy won't be optimized and compression won't be used) because there's no available GPU
[warning] Running model optimization with zero level of optimization is not recommended for production use and might lead to suboptimal accuracy results
[info] Model received quantization params from the hn
[info] Starting Mixed Precision
[info] Mixed Precision is done (completion time is 00:00:00.23)
[info] LayerNorm Decomposition skipped
[<hailo_model_optimization.acceleras.hailo_layers.hailo_avgpool_v2.HailoAvgPool object at 0x7b4e643c0f10>, <hailo_model_optimization.acceleras.hailo_layers.hailo_avgpool_v2.HailoAvgPool object at 0x7b4e64303ca0>, <hailo_model_optimization.acceleras.hailo_layers.hailo_avgpool_v2.HailoAvgPool object at 0x7b4e640469b0>, <hailo_model_optimization.acceleras.hailo_layers.hailo_avgpool_v2.HailoAvgPool object at 0x7b4e63d83100>]
Traceback (most recent call last):
  File "/home/username/hailo/hailo-prototyping/opt-compile.py", line 65, in <module>
    runner.optimize(calib_dataset)
  File "/home/username/hailo/venv-py3.10-dfc3.29/lib/python3.10/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
    return func(self, *args, **kwargs)
  File "/home/username/hailo/venv-py3.10-dfc3.29/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py", line 2093, in optimize
    self._optimize(calib_data, data_type=data_type, work_dir=work_dir)
  File "/home/username/hailo/venv-py3.10-dfc3.29/lib/python3.10/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
    return func(self, *args, **kwargs)
  File "/home/username/hailo/venv-py3.10-dfc3.29/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py", line 1935, in _optimize
    self._sdk_backend.full_quantization(calib_data, data_type=data_type, work_dir=work_dir)
  File "/home/username/hailo/venv-py3.10-dfc3.29/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1045, in full_quantization
    self._full_acceleras_run(self.calibration_data, data_type)
  File "/home/username/hailo/venv-py3.10-dfc3.29/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1229, in _full_acceleras_run
    optimization_flow.run()
  File "/home/username/hailo/venv-py3.10-dfc3.29/lib/python3.10/site-packages/hailo_model_optimization/tools/orchestator.py", line 306, in wrapper
    return func(self, *args, **kwargs)
  File "/home/username/hailo/venv-py3.10-dfc3.29/lib/python3.10/site-packages/hailo_model_optimization/flows/optimization_flow.py", line 326, in run
    step_func()
  File "/home/username/hailo/venv-py3.10-dfc3.29/lib/python3.10/site-packages/hailo_model_optimization/tools/orchestator.py", line 250, in wrapped
    result = method(*args, **kwargs)
  File "/home/username/hailo/venv-py3.10-dfc3.29/lib/python3.10/site-packages/hailo_model_optimization/tools/subprocess_wrapper.py", line 124, in parent_wrapper
    func(self, *args, **kwargs)
  File "/home/username/hailo/venv-py3.10-dfc3.29/lib/python3.10/site-packages/hailo_model_optimization/flows/optimization_flow.py", line 342, in step1
    self.pre_quantization_structural()
  File "/home/username/hailo/venv-py3.10-dfc3.29/lib/python3.10/site-packages/hailo_model_optimization/tools/orchestator.py", line 250, in wrapped
    result = method(*args, **kwargs)
  File "/home/username/hailo/venv-py3.10-dfc3.29/lib/python3.10/site-packages/hailo_model_optimization/flows/optimization_flow.py", line 380, in pre_quantization_structural
    self._apu_neg_mantissa_correction()
  File "/home/username/hailo/venv-py3.10-dfc3.29/lib/python3.10/site-packages/hailo_model_optimization/tools/orchestator.py", line 250, in wrapped
    result = method(*args, **kwargs)
  File "/home/username/hailo/venv-py3.10-dfc3.29/lib/python3.10/site-packages/hailo_model_optimization/flows/optimization_flow.py", line 480, in _apu_neg_mantissa_correction
    algo.run()
  File "/home/username/hailo/venv-py3.10-dfc3.29/lib/python3.10/site-packages/hailo_model_optimization/algorithms/optimization_algorithm.py", line 50, in run
    return super().run()
  File "/home/username/hailo/venv-py3.10-dfc3.29/lib/python3.10/site-packages/hailo_model_optimization/algorithms/algorithm_base.py", line 150, in run
    self._run_int()
  File "/home/username/hailo/venv-py3.10-dfc3.29/lib/python3.10/site-packages/hailo_model_optimization/algorithms/apu_neg_mantissa_correction.py", line 46, in _run_int
    self.correct_layer_neg_mantissa(layer)
  File "/home/username/hailo/venv-py3.10-dfc3.29/lib/python3.10/site-packages/hailo_model_optimization/algorithms/apu_neg_mantissa_correction.py", line 50, in correct_layer_neg_mantissa
    layer.neg_weights()
AttributeError: 'HailoAvgPool' object has no attribute 'neg_weights'. Did you mean: 'get_weights'?

I tried to use different precision modes for our AveragePool layers (via a quantization_param model script command), but the error persists.

It seems the error occurs because HailoAvgPool has no ‘neg_weights’ method. What is the problem here? Is there a way around this?

Best Regards
JG

Hey @josef.gugglberger,

Welcome to the Hailo Community!

It seems you’re encountering an issue with the HailoAvgPool layer during model optimization. The traceback ends with:

AttributeError: 'HailoAvgPool' object has no attribute 'neg_weights'. Did you mean: 'get_weights'?

Issue Breakdown

  1. The Cause of the Error:

    • The error indicates that the optimization flow is trying to call neg_weights() on a HailoAvgPool object, which does not support that method.
    • Based on the traceback, it seems this is related to negative mantissa correction being applied during the quantization or optimization phase.
  2. Why This Happens:

    • The Hailo model optimizer appears to have applied a generic correction function meant for layers with weights, but average pooling layers (HailoAvgPool) generally do not carry trainable weights in this context, hence the missing neg_weights() method.
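To make the failure mode concrete, here is a minimal self-contained sketch. The classes below are simplified stand-ins, not the real SDK classes: a correction pass that assumes every layer implements neg_weights() breaks as soon as it reaches a weight-less pooling layer, exactly as in your traceback.

```python
# Simplified stand-ins for illustration only; NOT the actual Hailo SDK classes.

class HailoConv:
    """Weighted layer: exposes both get_weights() and neg_weights()."""
    def __init__(self):
        self.weights = [1.0, -2.0]

    def get_weights(self):
        return self.weights

    def neg_weights(self):
        # Negate all weights in place.
        self.weights = [-w for w in self.weights]

class HailoAvgPool:
    """Pooling layer: carries no trainable weights, so no neg_weights()."""
    def get_weights(self):
        return []

def apu_neg_mantissa_correction(layers):
    # The correction pass assumes every layer implements neg_weights();
    # pooling layers do not, so this raises AttributeError.
    for layer in layers:
        layer.neg_weights()

try:
    apu_neg_mantissa_correction([HailoConv(), HailoAvgPool()])
except AttributeError as err:
    print(err)  # the same kind of error message as in the traceback above
```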

Workarounds and Solutions

  1. Use the Correct Optimization Precision:

    • You mentioned trying different precision modes, but the optimizer might still be applying unnecessary transformations. Make sure the precision mode being used is appropriate for the pooling layers by explicitly setting it via the YAML or CLI flags:
      hailomz optimize <model_name> --precision=float32 --exclude-layers="HailoAvgPool"
      
  2. Exclude Average Pooling Layers from Optimization:

    • You can try excluding the HailoAvgPool layer from certain optimization steps if they do not require transformation. For example:
      hailomz optimize <model_name> --exclude-layers="HailoAvgPool"
      
    • Alternatively, adjust the optimization YAML configuration to skip specific optimizations for pooling layers.
  3. Manual Layer Adjustment:

    • If the error persists, you can manually override the layer behavior in the optimizer using Hailo’s model modification tools. Check the Dataflow Compiler API for further customization options regarding layer optimization scripts and structural modifications.
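As a last-resort stopgap along those lines, you could monkey-patch a no-op neg_weights() onto the class before calling runner.optimize(). Since pooling layers carry no weights, negating them should be a no-op anyway. This is a hedged sketch, not verified against the SDK: the commented import path is taken from your traceback and should be checked against your DFC version, and a stand-in class is used here so the sketch runs without the SDK installed.

```python
# Hedged stopgap sketch: give HailoAvgPool a no-op neg_weights() so the
# correction pass can call it safely. The real import (from the traceback,
# verify against your DFC version) would be:
#
# from hailo_model_optimization.acceleras.hailo_layers.hailo_avgpool_v2 import (
#     HailoAvgPool,
# )

class HailoAvgPool:  # stand-in so this sketch runs without the SDK
    def get_weights(self):
        return []

if not hasattr(HailoAvgPool, "neg_weights"):
    # Pooling layers have no trainable weights, so "negating" them does nothing.
    HailoAvgPool.neg_weights = lambda self: None

# After patching, the call no longer raises AttributeError:
HailoAvgPool().neg_weights()
```

Apply the patch before runner.optimize(calib_dataset) is invoked, and treat it as a diagnostic workaround rather than a fix.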

Recommendations

  • Ensure you are using the correct optimization configuration and exclude layers that do not require quantization or mantissa correction (like HailoAvgPool).
  • Check the Hailo software suite documentation for any known issues related to mantissa correction algorithms in optimization.

Let me know if you need further assistance!

Hi,
thanks for your answer. I haven’t used hailomz so far; I followed the Jupyter notebook tutorial and used ClientRunner(…).optimize(…). Is there a way to exclude layers with this approach?

Also, I do not see an option to exclude layers in the hailomz CLI:

hailomz optimize -h
usage: hailomz optimize [-h] [--yaml YAML_PATH] [--ckpt CKPT_PATH] [--hw-arch]
                        [--start-node-names START_NODE_NAMES [START_NODE_NAMES ...]]
                        [--end-node-names END_NODE_NAMES [END_NODE_NAMES ...]]
                        [--model-script MODEL_SCRIPT_PATH | --performance] [--har HAR_PATH] [--calib-path CALIB_PATH]
                        [--resize RESIZE [RESIZE ...]] [--input-conversion {nv12_to_rgb,yuy2_to_rgb,rgbx_to_rgb}]
                        [--classes]
                        [model_name]

I also don’t find anything related to layer optimization scripts in the Dataflow Compiler user guide.

According to the documentation (hailo_model_zoo/docs/OPTIMIZATION.rst at master · hailo-ai/hailo_model_zoo · GitHub), hailomz can only be used for models from the Model Zoo, but I am trying to optimize a custom model.

Hi @josef.gugglberger,

I understand you’re working through the Jupyter notebook tutorial and using ClientRunner(...).optimize(...) for custom model optimization. Let me address your specific questions:

  1. Regarding Layer Exclusion with ClientRunner:

    • Unfortunately, ClientRunner(...).optimize(...) doesn’t have built-in options to directly exclude specific layers. However, you have a couple of workarounds:
    • A) You can customize the optimization flow by subclassing the default flow to skip certain layers like HailoAvgPool. This gives you more control over layer-specific transformations.
    • B) Alternatively, you can adjust precision settings to prevent unwanted transformations on specific layers, for example by using float32 precision or by providing a configuration with specific exclusion rules.
  2. About hailomz for Custom Models:

    • You’re correct: hailomz is specifically designed for models in the Hailo Model Zoo and doesn’t fully support custom models, which is why you won’t find exclusion options in its CLI for your custom model use case.
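For approach B, per-layer precision can be steered from the ClientRunner flow by passing a model script to load_model_script() before optimize(). A sketch under assumptions: the layer names below are hypothetical placeholders (they must match the layer names in your parsed model, e.g. from the parsing log or the HAR), and whether a given precision_mode is supported for pooling layers depends on your DFC version.

```python
# Hedged sketch: build a model script that sets per-layer precision, then
# load it into the runner before optimization. Layer names are placeholders.

avgpool_layers = ["lprnet/avgpool1", "lprnet/avgpool2"]  # hypothetical names

script_lines = [
    f"quantization_param({name}, precision_mode=a16_w16)"
    for name in avgpool_layers
]
model_script = "\n".join(script_lines)

# With the Hailo SDK installed (as in the DFC tutorial notebooks):
# runner = ClientRunner(har="lprnet.har")
# runner.load_model_script(model_script)
# runner.optimize(calib_dataset)
```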

For your custom model optimization, I recommend:

  • Customizing the optimization flow through ClientRunner to skip specific layer transformations
  • Adjusting precision settings to avoid unnecessary operations
  • Using YAML configuration when possible to fine-tune the optimization behavior
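Optimization behavior can also be tuned globally from the model script (.alls) rather than per layer. A hedged fragment follows; the command names and exact syntax should be checked against your DFC version's user guide, and the layer name is a placeholder:

```
# Model script (.alls) sketch, not verified against DFC 3.29.
# Make the optimization level explicit (your log showed it being reduced
# to 0 because no GPU was available):
model_optimization_flavor(optimization_level=0)
# Per-layer precision override; "avgpool1" is a placeholder layer name:
quantization_param(avgpool1, precision_mode=a8_w8)
```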

Let me know if you need more specific guidance on implementing any of these approaches!