Trouble compiling yolov8m into an HEF with the DFC

Hi,
I am trying to use the DFC to compile my yolov8m model into an HEF file. From what I understand, there are three steps: parsing, optimization, and finally compilation. I have been working through the tutorial Jupyter notebooks (opened with the hailo tutorial command in Ubuntu).
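
For context, my overall plan, pieced together from the tutorial notebooks, looks roughly like the sketch below (I am not certain I have the API calls exactly right, so please correct me if not):

from hailo_sdk_client import ClientRunner

# onnx_path, onnx_model_name, end_node_names, alls and calib_dataset
# are defined further down in this post
runner = ClientRunner(hw_arch="hailo8l")

# 1. Parsing: translate the ONNX into a HAR
hn, npz = runner.translate_onnx_model(
    onnx_path,
    onnx_model_name,
    start_node_names=["images"],
    end_node_names=end_node_names,
)

# 2. Optimization: apply the model script and quantize with the calibration set
runner.load_model_script(alls)
runner.optimize(calib_dataset)

# 3. Compilation: produce the HEF
hef = runner.compile()
with open("yolov8m_custom.hef", "wb") as f:  # output filename is just an example
    f.write(hef)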

I first exported the .pt file of my yolov8m model as .onnx.

In the parsing tutorial, I used the translate_onnx_model() function as follows (onnx_path and onnx_model_name were provided earlier in the code):

runner = ClientRunner(hw_arch=chosen_hw_arch)
hn, npz = runner.translate_onnx_model(
    onnx_path,
    onnx_model_name,
    start_node_names=["images"],
    end_node_names=["/model.22/Concat_3"],
)

(I set chosen_hw_arch = "hailo8l" earlier, since I am targeting the Hailo-8L entry-level AI accelerator.)

I don’t think I have made any mistakes in the parsing step, but if I have, please correct me.

From what I understand, the optimization step first needs a calibration set, and the tutorial provides the following preproc() function for preparing it:

# imports this snippet relies on (they are not shown in the cell I copied)
import os
import numpy as np
import tensorflow as tf
from PIL import Image
from tensorflow.python.eager.context import eager_mode

def preproc(image, output_height=224, output_width=224, resize_side=256):
    """imagenet-standard: aspect-preserving resize to 256px smaller-side, then central-crop to 224px"""
    with eager_mode():
        h, w = image.shape[0], image.shape[1]
        scale = tf.cond(tf.less(h, w), lambda: resize_side / h, lambda: resize_side / w)
        resized_image = tf.compat.v1.image.resize_bilinear(tf.expand_dims(image, 0), [int(h * scale), int(w * scale)])
        cropped_image = tf.compat.v1.image.resize_with_crop_or_pad(resized_image, output_height, output_width)

        return tf.squeeze(cropped_image)


images_path = "../data"
images_list = [img_name for img_name in os.listdir(images_path) if os.path.splitext(img_name)[1] == ".jpg"]

calib_dataset = np.zeros((len(images_list), 224, 224, 3))
for idx, img_name in enumerate(sorted(images_list)):
    img = np.array(Image.open(os.path.join(images_path, img_name)))
    img_preproc = preproc(img)
    calib_dataset[idx, :, :, :] = img_preproc.numpy()

np.save("calib_set.npy", calib_dataset)

How do I edit this so that I can use it for yolov8m? (The input size of my custom model is 640 x 640.)
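
My current guess is to drop the ImageNet-style resize-and-crop and simply resize every image to the network input size, roughly as in the sketch below (this skips the letterbox padding that Ultralytics applies, and I am not sure whether that matters for calibration):

def preproc_yolo(image, output_height=640, output_width=640):
    # plain resize to the network input size; normalization is left to the model script
    return tf.image.resize(image, [output_height, output_width]).numpy()

calib_dataset = np.zeros((len(images_list), 640, 640, 3), dtype=np.float32)
for idx, img_name in enumerate(sorted(images_list)):
    img = np.array(Image.open(os.path.join(images_path, img_name)))
    calib_dataset[idx, :, :, :] = preproc_yolo(img)

np.save("calib_set.npy", calib_dataset)

(images_path and images_list are the same variables as in the snippet above.) Is this the right idea?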

Lastly, do I need to change anything in the line below as well?

alls = "normalization1 = normalization([123.675, 116.28, 103.53], [58.395, 57.12, 57.375])\n"

Any help is appreciated. Thank you!

YOLO models have multiple outputs, so you need more end-node names.

You can use the same model from the Model Zoo as a reference. Call

hailomz parse yolov8m
hailo visualizer yolov8m.har

You can also open the HAR file in Netron.

The names in your model may be different, so open your ONNX in Netron and compare it to the HAR file.
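
For reference, the Model Zoo does not cut the graph at the Concat layers. It parses yolov8 up to the six convolutions at the end of the detection head (a box branch and a class branch for each of the three strides), and the post-processing after those convolutions is handled outside the parsed network, which is most likely why ending at the Concat nodes fails. In a typical Ultralytics yolov8m export those nodes have the names below, but your export may use different names, so verify them against your ONNX:

end_node_names = [
    "/model.22/cv2.0/cv2.0.2/Conv",  # boxes, stride 8
    "/model.22/cv3.0/cv3.0.2/Conv",  # classes, stride 8
    "/model.22/cv2.1/cv2.1.2/Conv",  # boxes, stride 16
    "/model.22/cv3.1/cv3.1.2/Conv",  # classes, stride 16
    "/model.22/cv2.2/cv2.2.2/Conv",  # boxes, stride 32
    "/model.22/cv3.2/cv3.2.2/Conv",  # classes, stride 32
]

hn, npz = runner.translate_onnx_model(
    onnx_path,
    onnx_model_name,
    start_node_names=["images"],
    end_node_names=end_node_names,
)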

I used Netron to visualize both my ONNX model and my HAR model. I believe that because I passed “/model.22/Concat_3” as the only entry in end_node_names, the parser only parsed the ONNX model up to that layer, while the final concat layer in the visualization is “/model.22/Concat_5”. However, when I try to pass both “/model.22/Concat_3” and “/model.22/Concat_5”, or only “/model.22/Concat_5”, in end_node_names, I get an error that looks like this:

The only way I am able to run this cell without an error is by providing “/model.22/Concat_3” as the single end node. Which end nodes should I use for my model?

Also, since the yolov8m.hef in the Model Zoo was compiled using the DFC, would it be possible to share the exact code used to compile it, so I can adapt it for my model?

Thanks,
Ajitesh.