Runtime Error with global_avgpool_reduction optimization option

Hi !

I am using the global_avgpool_reduction option of the DFC to address the issue outlined in this discussion. The error below does not occur if I optimize the model without this option. What am I doing wrong?

pre_quantization_optimization(global_avgpool_reduction, layers=avgpool1, division_factors=[4, 4])

When I try to run inference on the model, I get a runtime error.

with runner.infer_context(InferenceContext.SDK_QUANTIZED) as ctx:
    outputs = runner.infer(ctx, img)

The traceback is:

   File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/keras/engine/training.py", line 2169, in predict_function  *
        return step_function(self, iterator)
    File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/keras/engine/training.py", line 2155, in step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/keras/engine/training.py", line 2143, in run_step  **
        outputs = model.predict_step(data)
    File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/keras/engine/training.py", line 2111, in predict_step
        return self(x, training=False)
    File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
        raise e.with_traceback(filtered_tb) from None
    File "/tmp/__autograph_generated_file8zz54eu5.py", line 12, in tf__call
        retval_ = ag__.converted_call(ag__.ld(self)._model, (ag__.ld(inputs),), dict(**ag__.ld(kwargs)), fscope)
    File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/hailo_model_optimization/acceleras/utils/distributed_utils.py", line 122, in wrapper
        res = func(self, *args, **kwargs)
    File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/hailo_model_optimization/acceleras/model/hailo_model/hailo_model.py", line 1069, in build
        self.compute_output_shape(input_shape)
    File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/hailo_model_optimization/acceleras/model/hailo_model/hailo_model.py", line 1013, in compute_output_shape
        return self.compute_and_verify_output_shape(input_shape, verify_layer_inputs_shape=False)
    File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/hailo_model_optimization/acceleras/model/hailo_model/hailo_model.py", line 1047, in compute_and_verify_output_shape
        layer_output_shape = layer.compute_output_shape(layer_input_shapes)
    File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/hailo_model_optimization/acceleras/hailo_layers/base_hailo_layer.py", line 1502, in compute_output_shape
        op_output_shape = op.compute_output_shape(op_input_shapes)
    File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/hailo_model_optimization/acceleras/atomic_ops/base_atomic_op.py", line 710, in compute_output_shape
        shapes = self._compute_output_shape(input_shape)
    File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/hailo_model_optimization/acceleras/atomic_ops/conv_stripped_op.py", line 1136, in _compute_output_shape
        h_out, w_out = self._spatial_output_shape(input_shape[1:3])

    ValueError: Exception encountered when calling layer 'simulation_inference_model_17' (type SimulationInferenceModel).
    
    in user code:
    
        File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/hailo_model_optimization/flows/inference_flow.py", line 135, in call  *
            return self._model(inputs, **kwargs)
        File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler  **
            raise e.with_traceback(filtered_tb) from None
        File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/hailo_model_optimization/acceleras/utils/distributed_utils.py", line 122, in wrapper
            res = func(self, *args, **kwargs)
        File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/hailo_model_optimization/acceleras/model/hailo_model/hailo_model.py", line 1069, in build
            self.compute_output_shape(input_shape)
        File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/hailo_model_optimization/acceleras/model/hailo_model/hailo_model.py", line 1013, in compute_output_shape
            return self.compute_and_verify_output_shape(input_shape, verify_layer_inputs_shape=False)
        File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/hailo_model_optimization/acceleras/model/hailo_model/hailo_model.py", line 1047, in compute_and_verify_output_shape
            layer_output_shape = layer.compute_output_shape(layer_input_shapes)
        File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/hailo_model_optimization/acceleras/hailo_layers/base_hailo_layer.py", line 1502, in compute_output_shape
            op_output_shape = op.compute_output_shape(op_input_shapes)
        File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/hailo_model_optimization/acceleras/atomic_ops/base_atomic_op.py", line 710, in compute_output_shape
            shapes = self._compute_output_shape(input_shape)
        File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/hailo_model_optimization/acceleras/atomic_ops/conv_stripped_op.py", line 1136, in _compute_output_shape
            h_out, w_out = self._spatial_output_shape(input_shape[1:3])
    
        ValueError: not enough values to unpack (expected 2, got 1)
    
    
    Call arguments received by layer 'simulation_inference_model_17' (type SimulationInferenceModel):
      • inputs=tf.Tensor(shape=(None, 416), dtype=float32)
      • kwargs={'training': 'False'}

I am using Hailo DFC version 3.28.0.

Hey @stwerner ,

It appears the global_avgpool_reduction option in the Hailo Dataflow Compiler (DFC) is causing your runtime error. According to our documentation, this option reduces the spatial dimensions of global average pooling layers by adding an avgpool layer with a kernel size computed as:

Kernel Size = [1, h // division_factors[0], w // division_factors[1], 1]
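For reference, that formula can be sketched in plain Python (a hedged illustration only; `reduction_kernel_size` is a hypothetical helper, not a DFC API):

```python
def reduction_kernel_size(h, w, division_factors):
    """Kernel size [1, kh, kw, 1] of the avgpool layer inserted by
    global_avgpool_reduction, per the formula above (hypothetical helper)."""
    return [1, h // division_factors[0], w // division_factors[1], 1]

# e.g. a 13x13 feature map with division_factors=[4, 4]
print(reduction_kernel_size(13, 13, [4, 4]))  # [1, 3, 3, 1]
```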

Potential Issues and Solutions:

  1. Division Factors Shape Mismatch:

    • The error “not enough values to unpack (expected 2, got 1)” indicates your division_factors parameter may be incorrectly configured
    • Ensure your division_factors list contains exactly two values (e.g., [4, 4])
    • If your tensor has an unusual shape, try [1, 1] to maintain default behavior
  2. Layer Name Verification:

    • Confirm that avgpool1 matches exactly the name of the pooling layer in your model
    • You can print all layer names to verify:
    for layer in model.layers:
        print(layer.name)
    
  3. Quantization Compatibility:

    • Since your error occurs only in InferenceContext.SDK_QUANTIZED, the quantization process may not support the added pooling layer
    • First test with SDK_FP_OPTIMIZED to isolate the issue:
    with runner.infer_context(InferenceContext.SDK_FP_OPTIMIZED) as ctx:
        outputs = runner.infer(ctx, img)
    
    • If it works in SDK_FP_OPTIMIZED but fails in SDK_QUANTIZED, your quantized model structure requires adjustment
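As background on why the traceback ends in "not enough values to unpack (expected 2, got 1)": the call arguments show the tensor reaching the model is 2D (shape (None, 416)), while the conv op slices input_shape[1:3] expecting two spatial dimensions. A self-contained sketch in plain Python (mimicking the failing unpack, not actual DFC code):

```python
def spatial_dims(input_shape):
    # Mirrors the failing line: expects an NHWC shape with H and W present
    h, w = input_shape[1:3]
    return h, w

print(spatial_dims((None, 13, 13, 416)))  # (13, 13) -- the expected 4D case
try:
    spatial_dims((None, 416))  # the 2D shape from your traceback
except ValueError as err:
    print(err)  # not enough values to unpack (expected 2, got 1)
```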

Please try these steps and let us know if you continue experiencing issues.

Hi @omria ,

Thanks for the suggestions. I have worked around the issue by removing the channel attention (SE) blocks from my model; compilation now succeeds. I'll also give your suggestions a try.

Best Regards