Inference issue with finetuned yolov8n model

Hello, I've been stuck for a few days and could use some help.

I have fine-tuned a yolov8n model to detect a specific object and then exported it to ONNX. When I ran inference on the ONNX model it showed good results.

I then parsed, optimized, and compiled my ONNX model to its final HEF format using this script:

#!/usr/bin/env bash

# set -e

MODEL_NAME=best_02  # note: no spaces around '=' in shell assignments

# parse
hailo parser onnx \
    --har-path models/${MODEL_NAME}.har \
    --hw-arch hailo8 \
    --end-node-names "/model.22/cv2.0/cv2.0.2/Conv" "/model.22/cv3.0/cv3.0.2/Conv" "/model.22/cv2.1/cv2.1.2/Conv" "/model.22/cv3.1/cv3.1.2/Conv" "/model.22/cv2.2/cv2.2.2/Conv" "/model.22/cv3.2/cv3.2.2/Conv" \
    -- \
    models/${MODEL_NAME}.onnx

# --end-node-names "/model.22/Concat_3"

# optimize
hailo optimize \
    --output-har-path models/${MODEL_NAME}_calib.har \
    --hw-arch hailo8 \
    --use-random-calib-set \
    --model-script models/${MODEL_NAME}.alls \
    models/${MODEL_NAME}.har

# 
#  hailo optimize \
#     --output-har-path ${MODEL_NAME}_calib.har \
#     --hw-arch hailo8 \
#     --calib-set-path calibration_data.npy \
#     --model-script model_scripts/optimization_1_compression_0.alls \
#     models/${MODEL_NAME}.har

# compile
hailo compiler \
    --output-dir models/ \
    --output-har-path ${MODEL_NAME}_compiled.har \
    --hw-arch hailo8 \
    --model-script models/${MODEL_NAME}.alls \
    models/${MODEL_NAME}_calib.har
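As a sanity check before running the parse step, you can confirm that the end-node names passed to `--end-node-names` actually exist in the ONNX graph. This is a hypothetical helper (assuming the `onnx` Python package; the model path matches the script above), not part of the Hailo toolchain:

```python
# End nodes from the `hailo parser onnx` command above.
END_NODES = [
    "/model.22/cv2.0/cv2.0.2/Conv", "/model.22/cv3.0/cv3.0.2/Conv",
    "/model.22/cv2.1/cv2.1.2/Conv", "/model.22/cv3.1/cv3.1.2/Conv",
    "/model.22/cv2.2/cv2.2.2/Conv", "/model.22/cv3.2/cv3.2.2/Conv",
]

def find_missing(graph_node_names, end_nodes):
    """Return the end-node names that do not appear in the graph."""
    return [n for n in end_nodes if n not in graph_node_names]

def check_model(path="models/best_02.onnx"):
    import onnx  # pip install onnx
    model = onnx.load(path)
    names = {node.name for node in model.graph.node}
    return find_missing(names, END_NODES)
```

If `check_model()` returns a non-empty list, inspect the exported graph (e.g. in Netron) and adjust the names before retrying the parse.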
The .alls file was this:
normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])
change_output_activation(conv42, sigmoid)
change_output_activation(conv53, sigmoid)
change_output_activation(conv63, sigmoid)
#nms_postprocess("../../postprocess_config/yolov8n_nms_config.json", meta_arch=yolov8, engine=gpu)

allocator_param(width_splitter_defuse=disabled)
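For context, the `normalization` line above corresponds to a per-channel `(pixel - mean) / std`, i.e. with mean 0 and std 255 it scales 0–255 pixel values into the 0–1 range on-chip. A quick sanity check of that arithmetic (not part of the compilation flow):

```python
import numpy as np

# Values from the .alls normalization command above.
mean = np.array([0.0, 0.0, 0.0])
std = np.array([255.0, 255.0, 255.0])

pixel = np.array([0.0, 127.5, 255.0])  # one sample value per RGB channel
normalized = (pixel - mean) / std
print(normalized.tolist())  # -> [0.0, 0.5, 1.0]
```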

However, I can’t find a way to make inference work on a JPG test image. I applied NMS during the optimization of my model (when asked "do you want to apply y/n"). I have tried to write my own custom inference script following the Hailo Jupyter tutorial, but I either get error messages or get output images without any detection boxes on them.
I also tried running the object detection script from the Hailo application code examples, but that also always gave me error messages (wrong output shapes, among others).

Is there anything you recommend for running inference with a fine-tuned YOLO model? I’m using the Hailo-8M chip and version 4.19 of the Hailo DFC and HailoRT.
Thank you!

Hi @psimon,
You are using random calibration in your script:
--use-random-calib-set
This renders your HEF good only for measuring FPS, but not for anything that requires accuracy.

Please use a real dataset, and perhaps also the Model Zoo for optimization. After that, you can also use the Model Zoo to test your compiled or optimized model on real images.

Hi @Nadav,

Thanks for the quick reply! My bad, the script I posted here was not the exact one I used. In fact I did use my own calibration dataset (1024 images).

However, I tried as you said and recompiled my ONNX model with the hailomz CLI commands (parsing, optimizing, compiling to HEF).

For the testing on real images I used this command:
hailomz eval yolov8n --har /hailo_model_zoo/yolov8n.har --target emulator --data-path test_image.jpg
However, it seems I can’t use my own data and can only use a specific dataset provided by the Hailo Model Zoo?

Also, the conversion to HEF was done on an IPC that does not have the Hailo-8 chip in it. I then copied the HEF model to the hardware that has the Hailo chip, but that hardware does not support AVX, so I can’t use the hailomz commands there.
Could you explain to me the best way to run inference with my HEF model now? What is the best example to follow?

Thank you!

I believe that the --data-path argument expects a directory containing images.
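Following that suggestion, one workaround for a single test image is to copy it into a directory and pass the directory instead. This is a sketch based on the assumption above (that plain image files in a directory are accepted); the directory name is arbitrary:

```python
from pathlib import Path
import shutil

src = Path("test_image.jpg")    # the single test image from the command above
data_dir = Path("eval_images")  # arbitrary directory name
data_dir.mkdir(exist_ok=True)
if src.exists():
    shutil.copy(src, data_dir / src.name)
# then: hailomz eval yolov8n --har yolov8n.har --target emulator --data-path eval_images
```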

You can still evaluate the optimized/quantized model without access to the Hailo-8 device, using our emulator. This capability is embedded in the hailomz command via the --target flag.
I don’t think that AVX is a must; you should only get a warning about it, not an error.
If your HEF model is one of the models supported in the examples (like detection), you had better start with those. If not, you would need to port the post-processing of your model from the respective Git repo.


Hello and thank you for the tips.

My inference on an input image now works; I used the Python infer pipeline.
In case this helps anyone else: the issue was that my model was fine-tuned to detect only one class, so the class ID was missing from the output, but adding class_id = 0 made it work.
Also, the bounding boxes were in normalized coordinates and needed to be converted to pixels.
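A minimal sketch of that fix (the detection format and field names here are illustrative, not the exact Hailo output API; it assumes the NMS outputs normalized [ymin, xmin, ymax, xmax] boxes):

```python
import numpy as np

def to_pixel_detections(boxes_norm, scores, img_w, img_h, class_id=0):
    """Convert normalized [ymin, xmin, ymax, xmax] boxes to pixel coordinates,
    attaching a fixed class_id for a single-class model."""
    detections = []
    for (ymin, xmin, ymax, xmax), score in zip(boxes_norm, scores):
        detections.append({
            "class_id": class_id,  # single-class model: always 0
            "score": float(score),
            "box": (int(xmin * img_w), int(ymin * img_h),
                    int(xmax * img_w), int(ymax * img_h)),
        })
    return detections

# Example: one box covering the central half of a 640x480 image.
dets = to_pixel_detections(np.array([[0.25, 0.25, 0.75, 0.75]]),
                           np.array([0.9]), img_w=640, img_h=480)
print(dets[0]["box"])  # -> (160, 120, 480, 360)
```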