Hailo8l maximum input dimension

Hello, I trained a custom YOLO11l model for my Hailo8l device, but I can’t compile it to a HEF model.

I am performing the following steps:

  1. yolo export model=yolo11l.pt format=onnx imgsz=1664,2592 opset=15
  2. hailo parser onnx yolo11l.onnx --hw-arch hailo8l --start-node-names images --end-node-names output0 --tensor-shapes images=[1,3,1664,2592]
  3. hailo optimize yolo11l.har --hw-arch hailo8l --use-random-calib-set
  4. hailo compiler yolo11l_optimized.har --hw-arch hailo8l

In the 4th step, I get the error:
“matmul1 failed on kernel validation: 16x4 is not supported in matmul1”.

If I increase the input dimensions a bit further, I get this error instead:
“BackendAllocatorException: Compilation failed: Reshape is needed for layers: reduce_max_softmax1, ew_sub_softmax1, reduce_sum_softmax1, ew_mult_softmax1, matmul2, reduce_max_softmax2, ew_sub_softmax2, reduce_sum_softmax2, ew_mult_softmax2, matmul4, but adding a reshape has failed.”

Questions:

  1. What are the maximum dimensions supported?
  2. Does the supported size depend on whether the input is square or rectangular?

Hey @John_Doe ,

What are the maximum dimensions supported in DFC compilation?

So the DFC docs don’t actually give you a hard upper limit for input tensor dimensions (like width × height), but there are definitely some internal constraints you need to watch out for:

  • Transpose layers that involve Height ↔ Width or Height ↔ Features only work if the entire tensor is smaller than 1.5 MB when quantized, so that's your practical ceiling (there's a quick size check after this list).
  • The compiler can choke when internal layers (matmul, softmax, etc.) have to deal with tensor shapes that are too narrow, too small, or just weird (like 16×4).
  • Global average pooling and reshape operations are really picky about spatial resolution and feature sizes - they’ll fail if the resulting shapes can’t be tiled into supported formats.
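
If you want a rough feel for where that 1.5 MB transpose limit bites, a back-of-the-envelope size check is enough. This is just a sketch assuming 8-bit quantization (1 byte per element); the 512-channel stride-32 feature map is made up for illustration and isn't necessarily a tensor the compiler builds for your model:

    # Minimal sketch: quantized tensor size in MB, assuming 1 byte per element (8-bit).
    def tensor_size_mb(h, w, c, bytes_per_element=1):
        return h * w * c * bytes_per_element / (1024 * 1024)

    print(tensor_size_mb(1664, 2592, 3))                # full-resolution input: ~12.3 MB
    print(tensor_size_mb(1664 // 32, 2592 // 32, 512))  # hypothetical stride-32 map: ~2.1 MB

Both of those are well above 1.5 MB, so if the model routes tensors of that size through a Height ↔ Width or Height ↔ Features transpose, it's already out of bounds; shrinking the input brings these numbers down quadratically.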

In my experience, input shapes like 640×640, 720×1280, or 1280×1920 compile pretty reliably, but really wide or tall inputs (like your 1664×2592) often cause shape mismatches or tiling issues.

Does support depend on aspect ratio (square vs. rectangle)?

Yeah, aspect ratio definitely matters for compilation, but it’s not like the compiler explicitly requires square shapes.

The issue is that aspect ratio affects:

  1. Intermediate tensor shapes - long rectangles often create feature maps with a really narrow width or height, which can lead to unsupported matmul or softmax shapes (see the quick stride calculation after this list).
  2. How well layers align with hardware tiles - square or near-square tensors work better with Hailo’s fixed-size hardware tile architecture, so compilation is more likely to succeed.
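
To make point 1 concrete, here's a quick calculation of how the input resolution propagates to the usual YOLO detection-head feature maps. The strides 8/16/32 are the standard YOLO head strides, not something read out of the DFC, and the compiler's internal tiling will differ, so treat the numbers as illustrative only:

    # Rough sketch: per-stride feature-map sizes (H, W) for a few input resolutions.
    def feature_maps(h, w, strides=(8, 16, 32)):
        return {s: (h // s, w // s) for s in strides}

    for h, w in [(640, 640), (1280, 1920), (1664, 2592)]:
        print((h, w), "->", feature_maps(h, w))

Shapes like the stride-32 map are most likely what ends up in the matmul/softmax (attention) layers that are failing validation for you, after the compiler has tiled them further.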

The compiler doesn’t reject rectangular inputs on purpose, but internal ops (softmax, reshape, reduce) can become invalid due to shape incompatibility or hardware tiling rules.

My recommendation:

  • Reduce your image size (e.g., from 1664×2592 down to 1280×1920 or 720×1280); the adjusted commands below show what that looks like with your pipeline.
  • Stick to standard aspect ratios: 1:1, 4:3, 16:9
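
As a concrete starting point, this is just your own command sequence with the input size reduced to 1280×1920 (both sides multiples of 32, aspect ratio close to your original); everything else is unchanged:

  1. yolo export model=yolo11l.pt format=onnx imgsz=1280,1920 opset=15
  2. hailo parser onnx yolo11l.onnx --hw-arch hailo8l --start-node-names images --end-node-names output0 --tensor-shapes images=[1,3,1280,1920]
  3. hailo optimize yolo11l.har --hw-arch hailo8l --use-random-calib-set
  4. hailo compiler yolo11l_optimized.har --hw-arch hailo8l

If that compiles, you can step the resolution back up until you find where it breaks.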