[BUG] YOLOv8n Model Compiled Successfully but Outputs Empty Detections on Raspberry Pi 5 + Hailo AI HAT

Hi everyone,

I’m using Hailo AI Software Suite Version 2025-04 with a Raspberry Pi 5 and Raspberry Pi AI HAT. I trained a custom YOLOv8n model (only 1 class) using Ultralytics and exported it to ONNX using the following code:

from ultralytics import YOLO

model = YOLO("best.pt")  # trained single-class checkpoint
model.export(format="onnx", imgsz=640, dynamic=False, simplify=True, opset=11)

Then I followed the standard Hailo Model Zoo steps to convert the ONNX model into a HEF file:

hailomz parse --hw-arch hailo8l --ckpt /mnt/workspace/customYolov8nbest.onnx yolov8n

hailomz optimize --hw-arch hailo8l --calib-path /mnt/workspace/calibration_images yolov8n

hailomz compile \
  --hw-arch hailo8l \
  yolov8n
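For reference, this is roughly how I populate the calibration folder used by `hailomz optimize --calib-path` (a small Python helper; the 64-image count and the `.jpg`-only filter are my setup, not a Hailo requirement — any representative sample of training images should work):

```python
import random
import shutil
from pathlib import Path

def sample_calibration_set(src: str, dst: str, n: int = 64) -> int:
    """Copy up to n randomly chosen .jpg images from src into dst
    for use as a hailomz calibration set. Returns the number copied."""
    src_dir, dst_dir = Path(src), Path(dst)
    dst_dir.mkdir(parents=True, exist_ok=True)
    images = sorted(src_dir.glob("*.jpg"))
    chosen = random.sample(images, min(n, len(images)))
    for p in chosen:
        shutil.copy(p, dst_dir / p.name)
    return len(chosen)
```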

The HEF file is successfully generated.

However, when I run the model on my target device (Raspberry Pi 5 + Hailo AI HAT), it does not detect any objects at all. The debug output from my inference code looks like this:

------------------- DEBUG -------------------
Raw output tensor: []
Max coordinate value: 0
Max score (col 4): 0
Shape: (0, 5)
Frame shape: (272, 480, 3)
Preprocessed shape: (3, 640, 640)
Input name: yolov8n/input_layer1
Output name: yolov8n/yolov8_nms_postprocess
Output tensor dtype: float64
Preprocessed dtype: float32
------------------------------------------------

So the output tensor has shape (0, 5), meaning no detections. Even though the image preprocessing works and the pipeline runs without crashing, the model returns empty outputs.
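For context, my preprocessing step is essentially the following NumPy sketch. Note that I stretch-resize the frame rather than letterboxing it, which may differ from the Hailo Model Zoo reference pipeline (that mismatch could hurt accuracy, though I would not expect it alone to zero out every detection):

```python
import numpy as np

def preprocess(frame: np.ndarray, size: int = 640) -> np.ndarray:
    """Nearest-neighbour stretch-resize to (size, size), then
    HWC uint8 -> CHW float32 in [0, 1]."""
    h, w = frame.shape[:2]
    ys = (np.arange(size) * h // size).clip(0, h - 1)
    xs = (np.arange(size) * w // size).clip(0, w - 1)
    resized = frame[ys][:, xs]                    # (size, size, 3)
    return resized.transpose(2, 0, 1).astype(np.float32) / 255.0
```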

My questions:

  1. Could the issue be caused by an incorrect postprocessing or NMS layer during the export or compilation?
  2. Is there a specific configuration required for single-class YOLOv8 models?
  3. Could the output_shape or layer names be mismatched during parse/optimize/compile steps?
  4. Should I explicitly set quantization parameters or data types (e.g., float32 vs uint8)?
  5. Has anyone encountered this issue with single-class models compiled for Hailo?
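To make question 1 concrete, here is essentially how my code reads the postprocess output, shown on dummy NumPy data. The `[x0, y0, x1, y1, score]` column order is my assumption about the NMS layer's layout, not something I have confirmed:

```python
import numpy as np

# Dummy stand-in for the single class's (N, 5) NMS output;
# assumed row layout: [x0, y0, x1, y1, score], normalized coords.
dets = np.array([
    [0.10, 0.20, 0.30, 0.40, 0.85],
    [0.50, 0.55, 0.70, 0.80, 0.10],
])

SCORE_THRESHOLD = 0.25  # my threshold, not something baked into the HEF
keep = dets[dets[:, 4] >= SCORE_THRESHOLD]
print(keep.shape)  # -> (1, 5); on the device I instead get (0, 5) every time
```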

Any help would be appreciated 🙏 I can also share the ONNX model or Python file if needed.

Thanks!

Hi @Aysenur_Gulsum
Please share the PyTorch checkpoint and around 64 sample images for calibration. We will compile it on our end and check whether we can replicate the issue. If our compile succeeds, we will share the working .hef with you. Alternatively, you can try our cloud compiler: Early Access to DeGirum Cloud Compiler - General - Hailo Community

Hi @shashi,

Thanks a lot for your support!

Here is the Google Drive folder containing both the PyTorch checkpoint and 64 calibration images:

The model was trained using Ultralytics YOLOv8 (model.train) and is based on the YOLOv8n architecture. It was fine-tuned to detect only one custom class: fixed-wing UAVs.

Please note that the target device for the compiled .hef file is a Raspberry Pi 5 paired with the Raspberry Pi AI HAT+ (the one with 26 TOPS of performance). So it would be great if the compilation is done with that target configuration in mind.

If you’re able to successfully compile it, I’d really appreciate it if you could also share the detailed steps or configuration you used — such as YAML files, partitioning strategy, calibration method, or any relevant logs. That would really help me debug and improve my setup on my end.

Thanks again for your help and time!

Best regards,
Aysenur

Hi @Aysenur_Gulsum, on checking the PyTorch checkpoint you shared, it seems the checkpoint file was not saved properly: it contains only training logs, no model weights. Please see the attached Netron screenshot below.

Please verify that you are following the correct steps to train and export your model using the ultralytics repo:

  1. pip install ultralytics or clone the ultralytics repo
  2. Make sure your dataset is in the correct YOLO format:
    dataset/
    ├── images/
    │   ├── train/
    │   └── val/
    ├── labels/
    │   ├── train/
    │   └── val/
    └── data.yaml
  3. The data.yaml should contain:
train: path/to/train/images
val: path/to/val/images

nc: 1  # number of classes
names: ['class0']
  4. Run the following command:
yolo task=detect mode=train model=yolov8m.pt data=path/to/data.yaml epochs=100 imgsz=640
  • model=yolov8m.pt: starts from pretrained weights
  • You can also train from scratch using model=yolov8m.yaml
  5. After training completes, the weights will be saved in a directory like:
runs/detect/train/weights/
├── best.pt      # best performing model
└── last.pt      # model from the last epoch

You can copy best.pt for inference or deployment. Please share this best.pt file with us so we can compile it for Hailo.
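Before uploading, a quick local sanity check can catch a badly saved file: modern torch.save() writes a zip archive whose pickled payload lives at a path ending in data.pkl. The sketch below is only a heuristic (it proves the file is a real torch archive, not that the weights inside are correct):

```python
import zipfile

def looks_like_torch_checkpoint(path: str) -> bool:
    """Heuristic: torch.save() (PyTorch >= 1.6 default format) writes a
    zip archive containing a <name>/data.pkl pickle entry."""
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as zf:
        return any(name.endswith("data.pkl") for name in zf.namelist())
```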

Please let us know if you have any questions

Hello, due to some issues with my computer, it took me a bit longer to prepare the new model training. I trained the model following the folder structure you provided; the only difference is that I used “fixedWing” instead of “class0”. The file has been uploaded as “best.pt” within the shared link. If the conversion to .hef is successful, could you also share the steps you followed for the conversion? Thank you very much for your help.

Drive Link: Hailo_pt_file_and_calibration_images - Google Drive

I used “yolov8m” as the model.
My device: Raspberry Pi 5 (16 GB) with AI Hat+ (26 TOPS).