I am currently working on concepts for a functionally safe application that supports Hailo-8 inference. This is still a theoretical exercise for now, but our main interest is in detection networks such as YOLOv8.
I understand that the deployment process is heavily optimized for high throughput and efficient resource utilization, and that the input image resolution will probably end up as a hard constraint of the resource allocation.
To ensure functional safety, test images would need to be passed through the network regularly and the results compared against known reference detections. This verifies that the network, its weights, and its biases are still valid. However, such a test consumes the full inference time of a complete image. For functional testing purposes only, I would prefer to reduce the input to a much smaller resolution, e.g. 32x32 instead of 640x640. The resolution switch would have to happen dynamically whenever a test inference is inserted (roughly once per second).
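To make the intended flow concrete, here is a rough Python sketch of the periodic self-test I have in mind. All the names in it (run_inference, TEST_IMAGE, REFERENCE_DETECTIONS, detections_match) are placeholders for illustration only, not HailoRT API calls; the open question is whether the test inference inside the loop could run at a reduced resolution on the same deployed network.

```python
import time
import numpy as np

# Placeholder constants -- values are illustrative, not from a real deployment.
TEST_PERIOD_S = 1.0                                   # inject one test frame per second
TEST_IMAGE = np.zeros((32, 32, 3), dtype=np.uint8)    # small, known test pattern
REFERENCE_DETECTIONS = np.array([[0.1, 0.1, 0.5, 0.5, 0, 0.9]])  # expected boxes for TEST_IMAGE


def run_inference(image: np.ndarray) -> np.ndarray:
    """Placeholder for the real detection pipeline (e.g. YOLOv8 on Hailo-8)."""
    raise NotImplementedError


def detections_match(result: np.ndarray, reference: np.ndarray, tol: float = 1e-3) -> bool:
    """Compare test detections against the golden reference within a tolerance."""
    return result.shape == reference.shape and np.allclose(result, reference, atol=tol)


def main_loop(camera_frames):
    last_test = time.monotonic()
    for frame in camera_frames:
        # Normal full-resolution (640x640) inference path.
        run_inference(frame)

        # Periodically insert the small known test frame and verify the result.
        now = time.monotonic()
        if now - last_test >= TEST_PERIOD_S:
            test_result = run_inference(TEST_IMAGE)   # ideally at reduced resolution
            if not detections_match(test_result, REFERENCE_DETECTIONS):
                raise RuntimeError("Functional safety self-test failed")
            last_test = now
```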
Is there a way to achieve this without deploying a completely independent second network? The YOLO topologies themselves would, in principle, allow variable input resolutions.
Thanks,
André