Issues with Converting Default YOLOv8n.pt Model to HEF Format

I’ve been trying to convert the default yolov8n.pt model to HEF format for deployment on a Hailo-8L. Below are the steps I’ve taken and the issues I’m encountering.
Steps Taken:

  • Convert YOLOv8n.pt to ONNX
from ultralytics import YOLO

# Configuration
model_name = "yolov8n.pt"
imgsize = 640

model = YOLO(model_name)
onnx_path = model.export(format="onnx", half=False, int8=False, batch=1, opset=11, imgsz=imgsize, device=0)
print(f"ONNX model exported to: {onnx_path}")
  • Check Layer Names

import onnx

# Load ONNX model
onnx_model_path = "yolov8n.onnx"
model = onnx.load(onnx_model_path)

# Print all input and output node names
print("Input Nodes:")
for input in model.graph.input:
    print(input.name)

print("\nOutput Nodes:")
for output in model.graph.output:
    print(output.name)

# Input Nodes:
# images

# Output Nodes:
# output0
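
To locate the per-branch detection-head outputs that come up later in this thread, it also helps to list the Conv nodes under /model.22/. This assumes the usual node naming of the Ultralytics ONNX export, so verify it against your own graph:

import onnx

model = onnx.load("yolov8n.onnx")

# Print the Conv nodes of the detection head; the six per-branch Conv outputs
# are the usual candidates for the end nodes passed to the Hailo parser.
for node in model.graph.node:
    if node.op_type == "Conv" and "/model.22/" in node.name:
        print(node.name)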
  • Parse the Model

!hailo parser onnx /workspace/yolov8n.onnx --net-name yolov8n --har-path /workspace/yolov8n.har --start-node-names images --end-node-names /model.22/Sigmoid /model.22/dfl/Reshape_1 --hw-arch hailo8


Output:

[info] Current Time: 12:58:49, 08/08/24
[info] CPU: Architecture: x86_64, Model: Intel(R) Core(TM) i5-10400F CPU @ 2.90GHz, Number Of Cores: 12, Utilization: 1.2%
[info] Memory: Total: 31GB, Available: 24GB
[info] System info: OS: Linux, Kernel: 5.15.0-117-generic
[info] Hailo DFC Version: 3.28.0
[info] HailoRT Version: Not Installed
[info] PCIe: No Hailo PCIe device was found
[info] Running `hailo parser onnx /workspace/yolov8n.onnx --net-name yolov8n --har-path /workspace/yolov8n.har --start-node-names images --end-node-names /model.22/Sigmoid /model.22/dfl/Reshape_1 --hw-arch hailo8`
[info] Translation started on ONNX model yolov8n
[info] Restored ONNX model yolov8n (completion time: 00:00:00.05)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.29)
[info] Start nodes mapped from original model: 'images': 'yolov8n/input_layer1'.
[info] End nodes mapped from original model: '/model.22/Sigmoid', '/model.22/dfl/Reshape_1'.
[info] Translation completed on ONNX model yolov8n (completion time: 00:00:01.04)
[info] Saved HAR to: /workspace/yolov8n.har
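
For reference, the same parsing step can also be run from Python with the DFC's ClientRunner. This is only a sketch following the flow shown in Hailo's DFC tutorial notebooks; argument names should be checked against the installed DFC version (3.28.0 here):

from hailo_sdk_client import ClientRunner

# Translate the ONNX model into a HAR, mirroring the CLI call above.
runner = ClientRunner(hw_arch="hailo8")
runner.translate_onnx_model(
    "/workspace/yolov8n.onnx",
    "yolov8n",
    start_node_names=["images"],
    end_node_names=["/model.22/Sigmoid", "/model.22/dfl/Reshape_1"],
    net_input_shapes={"images": [1, 3, 640, 640]},
)
runner.save_har("/workspace/yolov8n.har")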
  • Create Calibration Dataset Based on YOLO Data Format
import os
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import cv2

def preproc(image, output_height=640, output_width=640):
    # Resize to the network input size; no normalization is applied here,
    # so pixel values stay in the 0-255 range
    resized_image = cv2.resize(image, (output_width, output_height))
    return resized_image

images_path = "data/annotated/test/images"
images_list = [img_name for img_name in os.listdir(images_path) if os.path.splitext(img_name)[1].lower() in [".jpg", ".png", ".jpeg"]][:100]

calib_dataset = np.zeros((len(images_list), 640, 640, 3), dtype=np.float32)

for idx, img_name in enumerate(sorted(images_list)):
    img_path = os.path.join(images_path, img_name)
    img = np.array(Image.open(img_path).convert("RGB"))  # force 3-channel RGB
    img_preproc = preproc(img)
    try:
        calib_dataset[idx, :, :, :] = img_preproc
    except ValueError:
        print(f"Wrong size for file: {img_name}")

output_path = "/workspace/calib_set.npy"
np.save(output_path, calib_dataset)
print(f"Calibration set saved to: {output_path}")

def show_images(images, titles=None, cols=5):
    n_images = len(images)
    if titles is None:
        titles = [''] * n_images
    fig, axes = plt.subplots(nrows=(n_images // cols), ncols=cols, figsize=(15, 5))
    for ax, img, title in zip(axes.flat, images, titles):
        ax.imshow(img / 255)
        ax.set_title(title)
        ax.axis('off')
    plt.tight_layout()
    plt.show()

original_images = [np.array(Image.open(os.path.join(images_path, img_name))) for img_name in images_list[:5]]
show_images(original_images, titles=[f"Original {i}" for i in range(1, 6)])

processed_images = [calib_dataset[i] for i in range(5)]
show_images(processed_images, titles=[f"Processed {i}" for i in range(1, 6)])
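
Before optimizing, a quick check that the saved calibration set has the layout the DFC expects (NHWC, matching the 640x640 input of the parsed model):

import numpy as np

# Reload the calibration set and verify shape, dtype and value range.
calib = np.load("/workspace/calib_set.npy")
print("shape:", calib.shape)   # expected: (N, 640, 640, 3)
print("dtype:", calib.dtype)
print("min/max:", calib.min(), calib.max())  # raw 0-255 values, no normalization applied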
  • Optimize the Model
!hailo optimize /workspace/yolov8n.har --hw-arch hailo8 --calib-set-path /workspace/calib_set.npy --output-har-path /workspace/yolov8n_quantized_model.har
[info] Current Time: 12:59:08, 08/08/24
[info] CPU: Architecture: x86_64, Model: Intel(R) Core(TM) i5-10400F CPU @ 2.90GHz, Number Of Cores: 12, Utilization: 3.1%
[info] Memory: Total: 31GB, Available: 24GB
[info] System info: OS: Linux, Kernel: 5.15.0-117-generic
[info] Hailo DFC Version: 3.28.0
[info] HailoRT Version: Not Installed
[info] PCIe: No Hailo PCIe device was found
[info] Running `hailo optimize /workspace/yolov8n.har --hw-arch hailo8 --calib-set-path /workspace/calib_set.npy --output-har-path /workspace/yolov8n_quantized_model.har`
[info] Starting Model Optimization
[warning] Reducing optimization level to 1 (the accuracy won't be optimized and compression won't be used) because there's less data than the recommended amount (1024)
[info] Model received quantization params from the hn
[info] Starting Mixed Precision
[info] Mixed Precision is done (completion time is 00:00:00.60)
[info] Layer Norm Decomposition skipped
[info] Starting Stats Collector
[info] Using dataset with 64 entries for calibration
Calibration: 100%|█████████████████████████| 64/64 [00:25<00:00,  2.53entries/s]
[info] Stats Collector is done (completion time is 00:00:27.50)
[info] Starting Fix zp_comp Encoding
[info] Fix zp_comp Encoding is done (completion time is 00:00:00.00)
[info] matmul_equalization skipped
[info] activation fitting started for yolov8n/ew_sub_softmax1/act_op
[info] activation fitting started for yolov8n/ne_activation_ew_sub_softmax1/act_op
[info] activation fitting started for yolov8n/reduce_sum_softmax1/act_op
[info] Finetune encoding skipped
[info] Starting Bias Correction
[info] The algorithm Bias Correction will use up to 1.15 GB of storage space
[info] Using dataset with 64 entries for Bias Correction
Bias Correction: 100%|█| 73/73 [02:28<00:00,  2.03s/blocks, Layers=['yolov8n/out
[info] Bias Correction is done (completion time is 00:02:30.61)
[info] Adaround skipped
[info] Fine Tune skipped
[info] Starting Layer Noise Analysis
Full Quant Analysis: 100%|████████████████| 2/2 [01:56<00:00, 58.15s/iterations]
[info] Layer Noise Analysis is done (completion time is 00:02:00.61)
[info] Output layers signal-to-noise ratio (SNR): measures the quantization noise (higher is better)
[info] 	yolov8n/output_layer1 SNR:	7.112 dB
[info] 	yolov8n/output_layer2 SNR:	6.991 dB
[info] Model Optimization is done
[info] Saved HAR to: /workspace/yolov8n_quantized_model.har
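
The optimization step can likewise be driven from Python, which makes it easy to pass the calibration array directly. Again, this is only a sketch following the DFC tutorial flow, not a verified drop-in:

import numpy as np
from hailo_sdk_client import ClientRunner

# Load the parsed HAR, run post-training quantization with the calibration
# set, and save the quantized HAR, mirroring the CLI call above.
runner = ClientRunner(har="/workspace/yolov8n.har")
calib_data = np.load("/workspace/calib_set.npy")
runner.optimize(calib_data)
runner.save_har("/workspace/yolov8n_quantized_model.har")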
  • Compile the Model
!hailo compiler /workspace/yolov8n_quantized_model.har --hw-arch hailo8 --output-dir /workspace
[info] Current Time: 13:06:21, 08/08/24
[info] CPU: Architecture: x86_64, Model: Intel(R) Core(TM) i5-10400F CPU @ 2.90GHz, Number Of Cores: 12, Utilization: 2.8%
[info] Memory: Total: 31GB, Available: 24GB
[info] System info: OS: Linux, Kernel: 5.15.0-117-generic
[info] Hailo DFC Version: 3.28.0
[info] HailoRT Version: Not Installed
[info] PCIe: No Hailo PCIe device was found
[info] Running `hailo compiler /workspace/yolov8n_quantized_model.har --hw-arch hailo8 --output-dir /workspace`
[info] Compiling network
[info] To achieve optimal performance, set the compiler_optimization_level to "max" by adding performance_param(compiler_optimization_level=max) to the model script. Note that this may increase compilation time.
[error] Failed to produce compiled graph
[error] TypeError: expected str, bytes or os.PathLike object, not NoneType

Questions:

  1. Parsing Issue with “output0” End Node: Why does the parsing fail when I specify “output0” as the end node? Does specifying different output nodes affect the architecture saved in the HAR format?
  2. Optimization: Did the optimization step run correctly, and are the reported output SNR values (around 7 dB) acceptable?
  3. Compilation Error: What am I doing wrong that causes the compilation to fail with TypeError: expected str, bytes or os.PathLike object, not NoneType?
  4. Compatibility with YOLOv8-p6: Will this conversion process also work with the YOLOv8-p6 model?

Any help or guidance on these issues would be greatly appreciated. Thank you!

BTW, you can download a ready-made YOLOv8n.hef from the Hailo Model Zoo on GitHub.

Yes, I know, but I want to first go through the conversion of the basic YOLOv8 so that I can then compile custom YOLOv8-p6 models.

Hi @jan.filip,
From what I see, the end nodes you chose for the yolov8 model are not the ones recommended by Hailo. I understand that they are what the Parser recommended, but this is a bug in the Parser and it will be fixed.
In general, for yolov8 there should be 6 end nodes given in the Parser step: two nodes for each of the 3 branches. For example:
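
Assuming the node naming produced by the standard Ultralytics ONNX export (please verify the exact names against your own graph, e.g. with the layer-name script above), the parser call would look roughly like this:

!hailo parser onnx /workspace/yolov8n.onnx --net-name yolov8n --har-path /workspace/yolov8n.har --start-node-names images --end-node-names /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv --hw-arch hailo8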


These nodes represent the boxes and scores for each branch.

Try parsing your model with the 6 relevant end nodes. I believe that would solve your issue.

Regards,