How to resize the input format for YOLOv5s with the Hailo Parser

Hi Hailo

Currently, I want to run an I420 input image (540, 640) through the yolov5s model (640, 640).

The image flow I'm considering is:
image [540, 640] → i420 to hailo yuv → yuv to rgb → resize [640, 640] → yolov5s model.

However, to change the input image format, it seems impossible to set the input shape to (540, 640) using hailo parser onnx --tensor-shapes.

Additionally, modifying net_input_shapes with the method provided in the Hailo tutorial doesn't seem to work either.

Here is the model script that I used:

model_script_commands = [
    'normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])\n'
    'change_output_activation(sigmoid)\n'
    'resize_input1 = resize(resize_shapes=[640,640])\n'
    'yuv_to_rgb1 = input_conversion(yuv_to_rgb)\n'
    'format_conversion = input_conversion(input_layer1, i420_to_hailo_yuv, emulator_support=False)\n'
    'nms_postprocess("./yolov5s_personface.json", yolov5, engine=cpu)\n'
    'post_quantization_optimization(finetune, policy=enabled, learning_rate=0.0001, epochs=4, dataset_size=10, loss_factors=[1.0, 1.0, 1.0], loss_types=[l2rel, l2rel, l2rel], loss_layer_names=[conv70, conv63, conv55])\n'
]
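As a side note on the script above (plain Python behavior, not Hailo-specific): the entries in model_script_commands have no commas between them, so Python's implicit string-literal concatenation merges them into a single string, which is then the full .alls script text. A minimal sketch with two of the commands:

```python
# No commas between the literals, so Python concatenates them
# into ONE list element containing both commands.
model_script_commands = [
    'normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])\n'
    'change_output_activation(sigmoid)\n'
]

# The list therefore holds a single string.
assert len(model_script_commands) == 1

# Joining the list (a no-op for a one-element list) yields the script text,
# one command per line.
script = ''.join(model_script_commands)
print(script.count('\n'))  # 2 commands, one newline each
```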

Thanks

Hi @roiyim
The model is configured to accept a (640, 640) input. Can you please elaborate on why you need to change the ONNX input shape to (540, 640)?

Hi shashi

The input image size used in our application is 540 x 640.
Currently, we resize the images on the host before feeding them into the Hailo8, but the performance is not good.
Since the other applications running on the host are already costly, we want to reduce host computation.
We hope to use the resize feature within the Hailo8, as mentioned in the DFC document.
If it is possible to input 540 x 640 images into the Hailo8 and scale them to 640 x 640 using the resize function, I would like to know how to do so.
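For context, the per-frame work we want to offload looks roughly like the following (a minimal nearest-neighbor sketch in NumPy; our actual pipeline uses an optimized resize, and the function name resize_nn is just for illustration):

```python
import numpy as np

def resize_nn(img, out_h, out_w):
    """Nearest-neighbor resize via integer index mapping."""
    in_h, in_w = img.shape[:2]
    ys = np.arange(out_h) * in_h // out_h   # source row for each output row
    xs = np.arange(out_w) * in_w // out_w   # source column for each output column
    return img[ys[:, None], xs]

frame = np.zeros((540, 640, 3), dtype=np.uint8)  # our 540x640 RGB frame
resized = resize_nn(frame, 640, 640)
print(resized.shape)  # (640, 640, 3)
```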

Thanks

model_script_commands = [
    'normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])\n'

    'resize1_layer = resize(resize_shapes=[540,640])\n'
    'i420_to_yuv_layer, yuv_to_rgb_layer = input_conversion(input_layer1, i420_to_rgb, emulator_support=False)\n'

    'change_output_activation(sigmoid)\n'
    'nms_postprocess("./yolov5s_personface.json", yolov5, engine=cpu)\n'
    'post_quantization_optimization(finetune, policy=enabled, learning_rate=0.0001, epochs=4, dataset_size=10, loss_factors=[1.0, 1.0, 1.0], loss_types=[l2rel, l2rel, l2rel], loss_layer_names=[conv70, conv63, conv55])\n'
]

This is what your alls script should look like. The resize command's resize_shapes argument actually takes the input shape, not the desired output shape. Technically, your separate input_conversion and format_conversion commands are correct; I just used the combined i420_to_rgb conversion instead. This will give you a model with input shape [? x 270 x 640 x 3], which is the I420 packing of a 540x640 frame.
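To make the [? x 270 x 640 x 3] shape in the reply above concrete: an I420 frame carries 1.5 values per pixel (a full-resolution Y plane plus quarter-resolution U and V planes), and exposing the buffer as an (H/2, W, 3) tensor preserves exactly that value count. A quick sanity check (plain arithmetic, not Hailo code):

```python
h, w = 540, 640                          # RGB frame size

# I420: full-size Y plane plus two quarter-size chroma planes
i420_values = h * w + 2 * (h // 2) * (w // 2)
assert i420_values == int(h * w * 1.5)   # 1.5 values per pixel

# The same buffer viewed as an (h/2, w, 3) tensor holds the same count
packed = (h // 2, w, 3)
assert packed[0] * packed[1] * packed[2] == i420_values

print(packed)  # (270, 640, 3)
```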

Hi Lawrence

Thank you for the fast reply.
However, when using the model script above, the following error occurs:

BadInputsShape: Data shape (640, 640, 3) for layer yolov5s/input_layer1 doesn't match network's input shape (540, 640, 3)

This is why I tried changing the input shape of the ONNX model.
I would like to ask if there is a way to modify the input shape to (540, 640) when converting a YOLOv5 model built for 640x640 to HAR.
Alternatively, I would like to know how to change the H and W of the resize input layer through the model script.

Or please let me know if I missed something.

Thanks

@roiyim

What is the actual input shape of your image(s)? I am assuming the actual input shape you have is 540h x 640w in RGB. I am also assuming that the ONNX model accepts 640x640 in RGB.

What I posted above changes the input size to 540h x 640w; internally, Hailo will resize it to 640x640 and then continue with the rest of the neural network. Since the model takes I420 format as input, the actual input size will be 270h x 640w.

If your i420 input shape is 540h x 640w, then the resize command actually needs to be:

    'resize1_layer = resize(resize_shapes=[1080, 640])\n'
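The arithmetic behind that corrected command: resize_shapes takes the pre-resize RGB shape, and an I420 buffer exposed as (H/2, W, 3) with H/2 = 540 implies an RGB height of H = 1080. A quick check (plain arithmetic):

```python
packed_h, packed_w = 540, 640   # I420 buffer viewed as (packed_h, packed_w, 3)

rgb_h = packed_h * 2            # (H/2, W, 3) packing => H = 2 * packed_h
rgb_w = packed_w

print([rgb_h, rgb_w])           # [1080, 640] -> resize_shapes=[1080, 640]
```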