Help with compiling yolov8n-cls model for Hailo-8L (custom yaml issue)

Hello,

I am trying to compile the yolov8n-cls model for Hailo-8L, but since there is no official YAML file in the Hailo Model Zoo, I created my own. However, I am having difficulties getting it to work.

Environment:

  • Device: Raspberry Pi 5 + Hailo-8L

  • hailort / hailo PCIe driver version: 4.22.0

  • Hailo Dataflow Compiler: 3.32.0

  • Hailo Model Zoo: 2.15

First attempt (able to generate a HEF, but accuracy is very low, ~2–3%, when tested with DeGirum):

yolo export model=yolov8n-cls.pt format=onnx
hailo parser onnx /home/mjss/Downloads/yolo_new/imgsz_640/yolov8n-cls.onnx --hw-arch hailo8l
hailo optimize yolov8n-cls.har --calib-set-path /home/mjss/Downloads/yolo_new/imgsz_640/expanded_imagenet_640.npy
hailo compiler yolov8n-cls_optimized.har --hw-arch hailo8l

Second attempt (using the Hailo Model Zoo with a custom YAML):

yolo export model=yolov8n-cls.pt format=onnx
hailomz compile --ckpt yolov8n-cls.onnx --yaml yolov8n-cls.yaml --classes 1000 --hw-arch hailo8l

Error message:

[info] No GPU chosen and no suitable GPU found, falling back to CPU.
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1758860743.984283   29092 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1758860743.986946   29092 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
<Hailo Model Zoo INFO> Start run for network yolov8n-cls ...
<Hailo Model Zoo INFO> Initializing the hailo8l runner...
Traceback (most recent call last):
  File "/home/mjss/Downloads/hailo_ai_sw_suite/hailo_venv/bin/hailomz", line 33, in <module>
    sys.exit(load_entry_point('hailo-model-zoo', 'console_scripts', 'hailomz')())
  File "/home/mjss/Downloads/hailo_model_zoo_v2.15/hailo_model_zoo/main.py", line 122, in main
    run(args)
  File "/home/mjss/Downloads/hailo_model_zoo_v2.15/hailo_model_zoo/main.py", line 111, in run
    return handlers[args.command](args)
  File "/home/mjss/Downloads/hailo_model_zoo_v2.15/hailo_model_zoo/main_driver.py", line 248, in compile
    _ensure_optimized(runner, logger, args, network_info)
  File "/home/mjss/Downloads/hailo_model_zoo_v2.15/hailo_model_zoo/main_driver.py", line 73, in _ensure_optimized
    _ensure_parsed(runner, logger, network_info, args)
  File "/home/mjss/Downloads/hailo_model_zoo_v2.15/hailo_model_zoo/main_driver.py", line 108, in _ensure_parsed
    parse_model(runner, network_info, ckpt_path=args.ckpt_path, results_dir=args.results_dir, logger=logger)
  File "/home/mjss/Downloads/hailo_model_zoo_v2.15/hailo_model_zoo/core/main_utils.py", line 126, in parse_model
    raise Exception(f"Encountered error during parsing: {err}") from None
Exception: Encountered error during parsing: Expecting value: line 1 column 1 (char 0)

Custom YAML (yolov8n-cls.yaml):

parser:
  nodes:
  - images
  - output0
  start_node_shapes:
    images:
      - 1
      - 224
      - 224
      - 3
network:
  network_name: yolov8n-cls
info:
  task: classification
  framework: pytorch
  input_shape:
    - 640
    - 640
    - 3
  output_shape: 1000
evaluation:
  dataset_name: imagenet
  labels_offset: 0
  classes: 1000
  data_set: models_files/imagenet/2021-06-20/imagenet_val.tfrecord
preprocessing:
  network_type: classification
  input_conversion: RGB
  input_resize:
    resize_method: letterbox
    resize_dims:
      - 640
      - 640
      - 3
  normalization_params:
    normalize_in_net: true
    mean_list:
      - 0.0
      - 0.0
      - 0.0
    std_list:
      - 255.0
      - 255.0
      - 255.0
postprocessing:
  device_pre_post_layers:
    softmax: true
    argmax: false
    bilinear: false
    nms: false
quantization:
  calib_set:
    - models_files/imagenet/2021-06-20/imagenet_calib.tfrecord

Hey @minjoo_kim,

For models not in the model zoo, I’d recommend using the DFC directly for better control over compilation.
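
For reference, the direct flow looks roughly like this with the DFC Python API (a minimal sketch based on the DFC tutorial flow; method signatures can differ between DFC versions, and the node names and file names are assumptions you should adjust for your model):

import numpy as np
from hailo_sdk_client import ClientRunner

# Parse the ONNX into a HAR; the node names below are assumptions -- confirm
# them with the parser or by inspecting the ONNX graph (see further down).
runner = ClientRunner(hw_arch="hailo8l")
runner.translate_onnx_model(
    "yolov8n-cls.onnx",
    "yolov8n_cls",
    start_node_names=["images"],
    end_node_names=["output0"],
    net_input_shapes={"images": [1, 3, 224, 224]},
)
# In-net normalization via a model script, so the calibration set stays uint8 [0..255].
runner.load_model_script("normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])\n")
# Calibration set: uint8 images in NHWC, shape (N, 224, 224, 3); "calib_224.npy" is a placeholder.
calib = np.load("calib_224.npy")
runner.optimize(calib)
# Compile for Hailo-8L and write out the HEF.
hef = runner.compile()
with open("yolov8n_cls.hef", "wb") as f:
    f.write(hef)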

What I think is going wrong:

  1. Wrong input size: YOLOv8n-cls expects 224×224, not 640. Use straight resize, not letterbox (letterbox is for detection).

  2. Double normalization: your calibration .npy files are already [0..1] floats, but the YAML divides by 255 again in-net during quantization. This destroys the quantization scale (you can sanity-check the array with the snippet below).
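
A quick sanity check on the calibration array (a sketch; I'm assuming the .npy file from your hailo optimize command):

import numpy as np

calib = np.load("expanded_imagenet_640.npy")
print(calib.shape, calib.dtype, calib.min(), calib.max())
# With normalize_in_net: true and std_list [255, 255, 255], this should be
# uint8 values in [0, 255], not floats already scaled to [0, 1].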

Quick fixes:

  • Export your model at 224×224
  • Use a YAML along these lines:
parser:
  nodes:
    - images
    - output0
network:
  network_name: yolov8n-cls
info:
  task: classification
  framework: pytorch
  input_shape: [224, 224, 3]
  output_shape: 1000
evaluation:
  dataset_name: imagenet
  labels_offset: 0
  classes: 1000
preprocessing:
  network_type: classification
  input_conversion: RGB
  # input_resize omitted: export the model at 224x224 and feed 224x224 images;
  # if you do resize here, use a plain resize rather than letterbox for classification
  normalization_params:
    normalize_in_net: true
    mean_list: [0.0, 0.0, 0.0]
    std_list: [255.0, 255.0, 255.0]
postprocessing:
  device_pre_post_layers:
    softmax: true
quantization:
  calib_set:
    - <path_to_calib.tfrecord>
  • Make sure your calibration images are uint8 [0..255], not pre-normalized floats (one way to build such a set is sketched below)
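
For example (a sketch; the folder name, image count, and output name are placeholders):

import numpy as np
from pathlib import Path
from PIL import Image

# Collect RGB images, plain-resized to 224x224, kept as uint8 in [0, 255].
images = []
for path in sorted(Path("calib_images").glob("*.jpg"))[:1024]:
    img = Image.open(path).convert("RGB").resize((224, 224))
    images.append(np.asarray(img, dtype=np.uint8))
calib = np.stack(images)          # shape (N, 224, 224, 3), dtype uint8
np.save("calib_224.npy", calib)   # usable with hailo optimize --calib-set-path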

For node names: Run hailo parser onnx -y <your_model.onnx> to find the correct input/output node names for your YAML.
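
You can also read the names straight from the ONNX graph with the onnx Python package (a short sketch):

import onnx

model = onnx.load("yolov8n-cls.onnx")
print("inputs: ", [i.name for i in model.graph.input])
print("outputs:", [o.name for o in model.graph.output])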

Let me know if you need the full YAML example!

@minjoo_kim

Hi Minjoo,

We’ve fixed the accuracy degradation issue in our (DeGirum) cloud compiler. If you still haven’t managed to get a good model, you can simply recompile. Thanks for bringing this to our attention!