When running the Hailo compiler command

hailo compiler --hw-arch hailo8l model_optimized.har

the following error is thrown:
[info] Loading network parameters
[info] Starting Hailo allocation and compilation flow
[info] Finding the best partition to contexts...
[Performance / Iteration progress bars omitted]
Iteration #86 - Contexts: 6
[error] Mapping Failed (allocation time: 8m 51s)
Output Height < 2 is unsupported
[error] Failed to produce compiled graph
[error] BackendAllocatorException: Compilation failed: Output Height < 2 is unsupported
It is unclear which output height the error refers to or how to fix it.
Can anyone provide some context and tips for solving this issue?
All previous steps succeed: the ONNX model is converted to a HAR and quantized.
omria
Hey @m.wijkhuizen
The error occurs when a network layer tries to output a tensor with a height less than 2, which isn’t supported by the hardware. Here’s how to fix it:
- Use tools like Netron to check your model’s output dimensions
- Add padding or upsampling layers if needed to maintain minimum height requirements
- Review compiler settings for padding/optimization options
- Use Hailo’s profiling tools to identify the problematic layer
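To see how an output height can end up below 2 in the first place, the standard convolution output-size formula is a useful sanity check when inspecting shapes in Netron. A minimal sketch in plain Python (no Hailo tooling involved; the values are illustrative):

```python
def conv_out_height(h_in, kernel, stride, padding):
    # Standard convolution output-height formula (floor division).
    return (h_in + 2 * padding - kernel) // stride + 1

# A feature map that is already only 1 pixel tall stays at height 1,
# which is exactly what trips the "Output Height < 2" check.
print(conv_out_height(1, kernel=1, stride=1, padding=0))  # 1

# With enough input height, the same kind of layer is fine.
print(conv_out_height(4, kernel=3, stride=1, padding=1))  # 4
```

Walking this formula backwards through the downsampling layers of your model shows which layer first collapses the height below 2.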
Thanks for your reply.
How could I use Hailo’s profiling tools to determine the problematic layer?
omria
Hi @m.wijkhuizen,
Let me help you identify the problematic layer using Hailo’s profiling tools.
To find and fix the issue:
- First, let’s profile your model to see the layer outputs and dimensions:
hailo_profile --hef model_optimized.har --report model_profile_report.json
- Inspect the report for layers with output height less than 2:
cat model_profile_report.json | jq '.layers[] | {name: .name, output_shape: .output_shape}'
Once you identify the problematic layer, here are some ways to fix it:
- Add padding or upsampling to increase the output tensor height
- Modify stride or kernel size if it’s a downsampling issue
- Check intermediate layers that might be reducing height too aggressively
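As a shape-level illustration of the padding option above, here is a minimal NumPy sketch (the tensor dimensions are hypothetical) that zero-pads the height axis of an NCHW feature map up to the minimum supported height of 2:

```python
import numpy as np

# Hypothetical feature map: batch=1, channels=8, height=1, width=16 (NCHW).
x = np.zeros((1, 8, 1, 16), dtype=np.float32)

# Zero-pad one row onto the height axis (index 2 in NCHW)
# so the tensor reaches the minimum supported height of 2.
x_padded = np.pad(x, ((0, 0), (0, 0), (0, 1), (0, 0)))
print(x_padded.shape)  # (1, 8, 2, 16)
```

In practice you would add the equivalent padding layer in the model definition before exporting to ONNX, rather than patching tensors at runtime.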
If you’re using a GStreamer pipeline, you can also enable real-time profiling:
hailonet hef-path=model_optimized.har profile=true !
After making adjustments, recompile your model to verify the fix:
hailo compiler --hw-arch hailo8l model_optimized.har
Let me know if you need help implementing any of these solutions or if you run into other issues!
Thanks for the response.
Changing the output from Bx1xN to BxN did indeed solve the issue!
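For anyone hitting the same wall: the fix amounts to dropping the singleton axis that the compiler reads as a height of 1. A minimal sketch with NumPy (the shapes are hypothetical; in a real model you would squeeze the axis in the network definition before exporting to ONNX):

```python
import numpy as np

# Hypothetical output tensor shaped (batch, 1, N): the singleton middle
# axis is what the compiler interprets as an output height of 1.
out = np.zeros((8, 1, 100), dtype=np.float32)

# Dropping the singleton axis yields (batch, N), so there is no
# height dimension left for the "Output Height < 2" check to reject.
out_2d = out.squeeze(axis=1)
print(out_2d.shape)  # (8, 100)
```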