Multiple image size support for detection inference?

I am currently working on concepts for a functionally safe application that uses Hailo-8 inference. This is a theoretical exercise for now, but our main interest is in detection networks such as YOLOv8.

I understand that deployment is highly optimized for throughput and resource utilization, and that the input image resolution will probably become a hard constraint on resource allocation.

To ensure functional safety, test images would need to be passed through the network regularly and the results compared against reference detections. This verifies that the network, its weights, and its biases are still intact. However, each such test would cost the full inference time of a complete image. I would prefer to reduce the image size to a much smaller resolution, e.g. 32x32 instead of 640x640, purely for functional testing purposes. The resolution switch would have to happen dynamically whenever a test inference is inserted (e.g. once per second), as sketched below.
Is there a way to achieve this without using a completely independent second network? The YOLO network topologies themselves would allow variable input resolutions.
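To make the intent concrete, here is a rough sketch of the scheduling I have in mind (plain Python; `run_main`, `run_test`, and the reference data are placeholders for whatever the runtime provides):

```python
import time

import numpy as np

# Placeholder reference data: one known test image and its expected detections.
TEST_IMAGE = np.zeros((32, 32, 3), dtype=np.float32)
EXPECTED = [("person", 0.90, (4, 4, 20, 20))]  # (class, score, box) per detection

def detections_match(dets, expected, score_tol=0.05):
    """Trivial placeholder comparison against the stored reference."""
    return len(dets) == len(expected) and all(
        d[0] == e[0] and abs(d[1] - e[1]) <= score_tol and d[2] == e[2]
        for d, e in zip(dets, expected))

def safety_loop(next_frame, run_main, run_test, period_s=1.0):
    """Yield detections for normal frames; once per period, inject a
    small-resolution self-test and stop if it fails."""
    next_test = time.monotonic()
    while True:
        if time.monotonic() >= next_test:
            if not detections_match(run_test(TEST_IMAGE), EXPECTED):
                raise RuntimeError("functional self-test failed")
            next_test += period_s
        yield run_main(next_frame())  # normal 640x640 traffic
```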

Thanks,
André

Hi @andre.koehler,

First of all, welcome to the Hailo Community!

The short answer is unfortunately no.

That's the case because of what you correctly guessed: the model's on-device resource allocation depends heavily on the input size. Once the model is compiled, the resource allocation is fixed, and there is no way to have multiple resource allocations in the same compiled model.

The only way is to have a second version of the same model, exported and compiled with a lower input dimension.
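For example, you could compile two HEFs from the same model export (one at 640x640, one at 32x32), configure both on the same device, and activate whichever one you need. A minimal sketch, assuming the HailoRT Python API (`hailo_platform`) over PCIe; the file names, shapes, and dummy inputs are placeholders, and note that switching the active network group adds some overhead:

```python
import numpy as np
from hailo_platform import (HEF, VDevice, ConfigureParams, HailoStreamInterface,
                            InferVStreams, InputVStreamParams, OutputVStreamParams,
                            FormatType)

# Two separately compiled versions of the same network (placeholder file names):
# the production model at 640x640 and a small self-test variant at 32x32.
hefs = {'main': HEF('yolov8_640.hef'), 'test': HEF('yolov8_32.hef')}

with VDevice() as target:
    # Configure both network groups on the same virtual device.
    groups = {}
    for name, hef in hefs.items():
        params = ConfigureParams.create_from_hef(hef=hef,
                                                 interface=HailoStreamInterface.PCIe)
        groups[name] = target.configure(hef, params)[0]

    def run(name, frames):
        """Activate the requested network group and run one inference batch."""
        hef, group = hefs[name], groups[name]
        in_params = InputVStreamParams.make(group, format_type=FormatType.FLOAT32)
        out_params = OutputVStreamParams.make(group, format_type=FormatType.FLOAT32)
        in_name = hef.get_input_vstream_infos()[0].name
        with InferVStreams(group, in_params, out_params) as pipeline:
            with group.activate(group.create_params()):
                return pipeline.infer({in_name: frames})

    # Normal traffic goes through the full-resolution model...
    _ = run('main', np.zeros((1, 640, 640, 3), dtype=np.float32))
    # ...and, e.g. once per second, a known reference image goes through
    # the small test model for the functional self-test.
    test_results = run('test', np.zeros((1, 32, 32, 3), dtype=np.float32))
```

Depending on your HailoRT version, the model scheduler may also be able to manage the switching between configured network groups for you; please check the current HailoRT documentation for the details.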

Thank you for the clarification!