Creating yuyv dataset problem

I have this model script for yolov6n model:

alls = [
    'norm_layer1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])\n',
    'resize_input1 = resize(resize_shapes=[480,640], engine=nn_core)\n',
    'rgb1 = input_conversion(yuv_to_rgb)\n',
    'yuy2_to_yuv1 = input_conversion(input_layer1, yuy2_to_hailo_yuv)\n',
]

When I create a YUYV dataset at 480x640 resolution and try to create a HAR file, I get this error:

hailo_model_optimization.acceleras.utils.acceleras_exceptions.BadInputsShape: Data shape (480, 640, 2) for layer detector/input_layer1 doesn't match network's input shape (480, 640, 3)

(480, 640, 3) is the RGB shape, and my model script already contains a conversion from YUYV (480, 640, 2) to RGB (480, 640, 3), so why does the code still expect RGB as the input?
I can force the (480, 640, 2) data into (480, 640, 3) by padding the extra channel with zeros; the code then runs and eventually compiles a .hef model, but the model doesn't work. I suspect this forced padding might be the cause.

Hey @p.chuchkalov ,

The Hailo Dataflow Compiler always validates your host-side input tensor against the original network's input shape (in your case (480, 640, 3)). Adding `input_conversion` layers via your model script only inserts on-chip format/color conversions after the input node; it does not change the parser's notion of the network's "external" input shape when you call `runner.optimize()`. As a result, when you feed it a 2-channel (480, 640, 2) array, it complains that this doesn't match the network's expected three-channel input.
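
In other words, the calibration dataset passed to `runner.optimize()` should match the original RGB shape; the on-chip YUY2/YUV conversions only apply at inference time. One way to build such a dataset is to convert each YUY2 frame to RGB on the host. The sketch below is an illustration, not Hailo API code: the function name `yuy2_to_rgb` is mine, it assumes the common YUY2 packing (channel 0 = luma, channel 1 alternating U/V across column pairs) and BT.601 full-swing-to-studio-swing coefficients — adjust both to your camera's actual output.

```python
import numpy as np

def yuy2_to_rgb(frame: np.ndarray) -> np.ndarray:
    """Convert a (H, W, 2) uint8 YUY2 frame to (H, W, 3) uint8 RGB.

    Assumes channel 0 holds Y for every pixel and channel 1 holds
    U on even columns and V on odd columns (YUY2 packing), with
    BT.601 studio-swing levels (Y in [16, 235], chroma centered at 128).
    """
    y = frame[:, :, 0].astype(np.float32)
    u = frame[:, 0::2, 1].astype(np.float32)  # chroma is subsampled 2:1 horizontally
    v = frame[:, 1::2, 1].astype(np.float32)

    # Nearest-neighbour chroma upsampling back to full width
    u = np.repeat(u, 2, axis=1)
    v = np.repeat(v, 2, axis=1)

    c = y - 16.0
    d = u - 128.0
    e = v - 128.0

    r = 1.164 * c + 1.596 * e
    g = 1.164 * c - 0.392 * d - 0.813 * e
    b = 1.164 * c + 2.017 * d

    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 255.0).astype(np.uint8)

# Example: a flat mid-gray YUY2 frame becomes a (480, 640, 3) RGB frame
yuy2_frame = np.full((480, 640, 2), 128, dtype=np.uint8)
rgb_frame = yuy2_to_rgb(yuy2_frame)
```

A calibration set built by stacking such converted frames (shape `(N, 480, 640, 3)`) should pass the shape check during optimization, while the deployed model can still accept YUY2 directly thanks to the on-chip `input_conversion` layers.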