[Help] Converting Custom YOLOv8n ONNX to HEF on Windows - Multiple Methods Attempted

Hello, I’m trying to convert a custom YOLOv8n ONNX model to HEF format on Windows. I’ve tried several methods but haven’t succeeded:

  1. Using the hailo_ai_sw_suite_2024-10 Docker container:
  • Direct command-line conversion:
hailo parser onnx --input-files /input/best.onnx --output-har /output/model.har
hailo optimize --input /output/model.har --output /output/optimized.har --use-random-calib-set
hailo compiler --input /output/optimized.har --output /output/ice_pack_detector.hef
  2. Using the Model Zoo approach:
hailomz parse yolov8n
hailomz optimize yolov8n
hailomz compile yolov8n --hw-arch hailo8
  3. Using the DFC Studio web interface:
  • Ran Docker with port mapping: docker run -it --rm -p 3000:3000 -v "C:\hailo_temp:/input" -v "C:\hailo_temp:/output" hailo_ai_sw_suite_2024-10:1
  • Accessed http://localhost:3000
  • Created a new project but encountered parsing errors with the Start/End node configuration

I’ve reviewed the official documentation (which is quite brief) but still can’t successfully convert my ONNX model to HEF.

Questions:

  1. Is there a simpler way to convert ONNX to HEF on Windows?
  2. What’s the recommended approach for Windows users?
  3. Are there any specific requirements or configurations I’m missing?

Environment:

  • Windows 10
  • Docker Desktop
  • YOLOv8n custom model (ONNX format)
  • Hailo AI Suite 2024-10 docker image

Welcome to the Hailo Community!

I recommend using the Hailo AI Software Suite Docker image on Ubuntu directly; I use dual-boot machines.

If you want to stay on Windows, this guide may help you install it under WSL2. WSL2 support was developed mainly to let some partners' Windows tools use the DFC under the hood.

How to install the Hailo Dataflow Compiler DFC on WSL2

I would recommend working through the tutorials first to understand the workflow. Inside the Docker container, run:

hailo tutorial

This will open a Jupyter notebook server with notebooks for each step of the conversion process.

Usually the parser gives you guidance on which start and end nodes to use if the ones you selected are not supported. Please provide more details about the error you encounter.
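To illustrate, selecting the parser's recommended end nodes explicitly can look like the sketch below. The node names shown are the ones a default Ultralytics YOLOv8 export typically produces; substitute whatever your parser run prints, and check `hailo parser onnx --help`, since flag names can differ between DFC versions.

```shell
# Sketch: re-run the parser with explicit start/end nodes.
# Node names below are typical for a YOLOv8 ONNX export; use the ones
# recommended in your own parser output.
hailo parser onnx --input-files /input/best.onnx --output-har /output/model.har \
    --start-node-names images \
    --end-node-names /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv \
        /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv \
        /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv
```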

Dear Hailo Technical Support Team,

I am reaching out for assistance regarding an issue I am facing while trying to deploy a YOLOv8n model onto the Hailo-8L platform. Despite following the official documentation and guidelines, I have been unable to successfully convert and optimize the model.

Here are the steps I have taken:

  1. Successfully converted my YOLOv8n .pt model to the ONNX format.
  2. Attempted to use the following command for Hailo model compilation and quantization:
hailomz compile yolov8n \
    --ckpt=/local/workspace/best.onnx \
    --hw-arch=hailo8l \
    --calib-path=/local/workspace/calibration_images \
    --classes=2 \
    --performance

However, the process failed with the following error:

NMSConfigPostprocessException: The layer yolov8n/conv41 doesn't have one output layer

I have tried several solutions to resolve this issue, including:

  1. Using different export settings for the YOLOv8 model (e.g., adjusting the opset version during ONNX export).
  2. Checking the ONNX model structure to verify the output configuration of the conv41 layer.
  3. Tweaking the NMS configuration as per the error message, but without success.

From my inspection, it seems that the conv41 layer in the ONNX model has an output configuration that is incompatible with the Hailo SDK’s requirements. As this has been a persistent roadblock, I would greatly appreciate any guidance or suggestions your team could provide.
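As a quick sanity check, independent of any Hailo tooling, the six head convolutions that Hailo's NMS post-processing expects as end nodes have predictable output shapes. The sketch below assumes a default Ultralytics YOLOv8 export (NCHW layout, reg_max=16, strides 8/16/32); if the candidate end nodes in your ONNX graph don't match these shapes, the parser was likely pointed at the wrong layers.

```python
# Expected output shapes of the six YOLOv8 head convolutions consumed by
# NMS post-processing. With the YOLOv8 default reg_max=16, each cv2 (box)
# branch has 4*16=64 channels and each cv3 (class) branch has num_classes
# channels, at strides 8, 16 and 32.
def yolov8_end_node_shapes(input_size=640, num_classes=2, reg_max=16):
    shapes = []
    for stride in (8, 16, 32):
        hw = input_size // stride          # feature-map height/width
        shapes.append((1, 4 * reg_max, hw, hw))   # cv2.x: box regression
        shapes.append((1, num_classes, hw, hw))   # cv3.x: class scores
    return shapes

for shape in yolov8_end_node_shapes():
    print(shape)
# → (1, 64, 80, 80), (1, 2, 80, 80), (1, 64, 40, 40),
#   (1, 2, 40, 40), (1, 64, 20, 20), (1, 2, 20, 20)
```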

Here are the details of my development environment:

  • Operating System: Ubuntu 22.04, Docker version 24.0.7
  • Hailo SDK Version: 4.19.0
  • DFC Version: 3.29.0
  • YOLOv8n Model: Ultralytics 8.3.32

Thank you very much for your support, and I look forward to your reply.

(hailo_virtualenv) hailo@pt:/local/workspace$ hailomz compile yolov8n \
    --ckpt=/local/workspace/best.onnx \
    --hw-arch=hailo8l \
    --calib-path=/local/workspace/calibration_images \
    --classes=2 \
    --performance

Start run for network yolov8n …
Initializing the hailo8l runner…
[info] Translation started on ONNX model yolov8n
[info] Restored ONNX model yolov8n (completion time: 00:00:00.05)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.32)
[info] NMS structure of yolov8 (or equivalent architecture) was detected.
[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.0/cv2.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv3.2/cv3.2.2/Conv /model.22/cv2.2/cv2.2.2/Conv.
[info] Start nodes mapped from original model: 'images': 'yolov8n/input_layer1'.
[info] End nodes mapped from original model: '/model.22/cv2.0/cv2.0.2/Conv', '/model.22/cv3.0/cv3.0.2/Conv', '/model.22/cv2.1/cv2.1.2/Conv', '/model.22/cv3.1/cv3.1.2/Conv', '/model.22/cv2.2/cv2.2.2/Conv', '/model.22/cv3.2/cv3.2.2/Conv'.
[info] Translation completed on ONNX model yolov8n (completion time: 00:00:00.81)
[info] Saved HAR to: /local/workspace/yolov8n.har
Using generic alls script found in /local/workspace/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/yolov8n.alls because there is no specific hardware alls
Preparing calibration data…
[info] Loading model script commands to yolov8n from /local/workspace/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/yolov8n.alls
[info] Loading model script commands to yolov8n from string
Traceback (most recent call last):
  File "/local/workspace/hailo_virtualenv/bin/hailomz", line 33, in <module>
    sys.exit(load_entry_point('hailo-model-zoo', 'console_scripts', 'hailomz')())
  File "/local/workspace/hailo_model_zoo/hailo_model_zoo/main.py", line 122, in main
    run(args)
  File "/local/workspace/hailo_model_zoo/hailo_model_zoo/main.py", line 111, in run
    return handlers[args.command](args)
  File "/local/workspace/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 250, in compile
    _ensure_optimized(runner, logger, args, network_info)
  File "/local/workspace/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 91, in _ensure_optimized
    optimize_model(
  File "/local/workspace/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 324, in optimize_model
    optimize_full_precision_model(runner, calib_feed_callback, logger, model_script, resize, input_conversion, classes)
  File "/local/workspace/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 310, in optimize_full_precision_model
    runner.optimize_full_precision(calib_data=calib_feed_callback)
  File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
    return func(self, *args, **kwargs)
  File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py", line 1996, in optimize_full_precision
    self._optimize_full_precision(calib_data=calib_data, data_type=data_type)
  File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py", line 1999, in _optimize_full_precision
    self._sdk_backend.optimize_full_precision(calib_data=calib_data, data_type=data_type)
  File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1497, in optimize_full_precision
    model, params = self._apply_model_modification_commands(model, params, update_model_and_params)
  File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1388, in _apply_model_modification_commands
    model, params = command.apply(model, params, hw_consts=self.hw_arch.consts)
  File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/script_parser/nms_postprocess_command.py", line 387, in apply
    pp_creator = create_nms_postprocess(
  File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/tools/core_postprocess/nms_postprocess.py", line 1767, in create_nms_postprocess
    pp_creator.prepare_hn_and_weights(hw_consts, engine, dfl_on_nn_core=dfl_on_nn_core)
  File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/tools/core_postprocess/nms_postprocess.py", line 1125, in prepare_hn_and_weights
    super().prepare_hn_and_weights(
  File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/tools/core_postprocess/nms_postprocess.py", line 1089, in prepare_hn_and_weights
    self.add_postprocess_layer_to_hn()
  File "/local/workspace/hailo_virtualenv/lib/python3.10/site-packages/hailo_sdk_client/tools/core_postprocess/nms_postprocess.py", line 1040, in add_postprocess_layer_to_hn
    raise NMSConfigPostprocessException(f"The layer {encoded_layer.name} doesn't have one output layer")
hailo_sdk_client.tools.core_postprocess.nms_postprocess.NMSConfigPostprocessException: The layer yolov8n/conv41 doesn't have one output layer
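Note that the parser log above already prints the six end nodes HailoRT post-processing expects. One sketch of a fix is to pass those names explicitly so the compiler does not cut the graph at yolov8n/conv41; whether `--end-node-names` is available depends on your Model Zoo version, so check `hailomz compile --help` first.

```shell
# Sketch: rerun the compile with the end nodes recommended in the parser
# log. --end-node-names support varies by hailo_model_zoo version.
hailomz compile yolov8n \
    --ckpt=/local/workspace/best.onnx \
    --hw-arch=hailo8l \
    --calib-path=/local/workspace/calibration_images \
    --classes=2 \
    --end-node-names /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv \
        /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv \
        /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv
```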

Executing on device: 0000:01:00.0
Identifying board
Control Protocol Version: 2
Firmware Version: 4.19.0 (release,app,extended context switch buffer)
Logger Version: 0
Board Name: Hailo-8
Device Architecture: HAILO8
Serial Number: <N/A>
Part Number: <N/A>
Product Name: <N/A>

dmesg | grep -i hailo
[ 2.616483] hailo_pci: loading out-of-tree module taints kernel.
[ 2.617906] hailo: Init module. driver version 4.19.0
[ 2.618007] hailo 0000:01:00.0: Probing on: 1e60:2864…
[ 2.618011] hailo 0000:01:00.0: Probing: Allocate memory for device extension, 11632
[ 2.618030] hailo 0000:01:00.0: enabling device (0000 -> 0002)
[ 2.618036] hailo 0000:01:00.0: Probing: Device enabled
[ 2.618056] hailo 0000:01:00.0: Probing: mapped bar 0 - 00000000e5cc4495 16384
[ 2.618061] hailo 0000:01:00.0: Probing: mapped bar 2 - 00000000b9fce164 4096
[ 2.618066] hailo 0000:01:00.0: Probing: mapped bar 4 - 000000001fd40703 16384
[ 2.618069] hailo 0000:01:00.0: Probing: Force setting max_desc_page_size to 4096 (recommended value is 16384)
[ 2.618078] hailo 0000:01:00.0: Probing: Enabled 64 bit dma
[ 2.618081] hailo 0000:01:00.0: Probing: Using userspace allocated vdma buffers
[ 2.618085] hailo 0000:01:00.0: Disabling ASPM L0s
[ 2.618089] hailo 0000:01:00.0: Successfully disabled ASPM L0s
[ 2.620600] hailo 0000:01:00.0: Writing file hailo/hailo8_fw.bin
[ 2.713975] hailo 0000:01:00.0: File hailo/hailo8_fw.bin written successfully
[ 2.713985] hailo 0000:01:00.0: Writing file hailo/hailo8_board_cfg.bin
[ 2.714015] Failed to write file hailo/hailo8_board_cfg.bin
[ 2.714018] hailo 0000:01:00.0: File hailo/hailo8_board_cfg.bin written successfully
[ 2.714021] hailo 0000:01:00.0: Writing file hailo/hailo8_fw_cfg.bin
[ 2.714033] Failed to write file hailo/hailo8_fw_cfg.bin
[ 2.714035] hailo 0000:01:00.0: File hailo/hailo8_fw_cfg.bin written successfully
[ 2.852673] hailo 0000:01:00.0: Firmware loaded successfully
[ 2.881687] hailo 0000:01:00.0: Probing: Added board 1e60-2864, /dev/hailo0

Please refer to this post that addresses a similar question: