Convert to .hef for Hailo8L

Subject: Unable to Compile YOLOv8 ONNX Model with HailoMZ – Model Script Not Found

Description:

Hello Hailo Support Team,

I am trying to compile a custom YOLOv8 ONNX model (best_bird.onnx) using Hailo Model Zoo (hailomz) with the following YAML configuration: /mnt/ramdisk/best_bird.yaml. The HAR file (yolov8n_custom.har) is successfully generated, but the compilation fails during the calibration step with the following error:

hailo_sdk_client.runner.exceptions.InvalidArgumentsException: either model script is illegal or file path doesn't exist: Model script parsing failed: 'NoneType' object has no attribute 'replace'. Model script file not found in location: None.

What I have tried:

  1. Verified that the ONNX file exists: /mnt/ramdisk/best_bird.onnx.

  2. Verified that the HAR file exists: /home/gpu/hailo_model_zoo/yolov8n_custom.har.

  3. Ensured that the YAML file points to the correct paths for onnx_model_path, hef_path, and model_script_path.

  4. Attempted running hailomz compile multiple times in a conda environment (hailo) with all required dependencies installed.

  5. Confirmed that TensorFlow is installed and the calibration dataset has been prepared (calib2017).

Additional Information:

  • Hailo Model Zoo version: [insert version]

  • Hailo SDK Client version: [insert version]

  • Operating System: [insert OS]

  • Python version: 3.10

  • GPU: Nvidia A2, CUDA drivers not used during compilation

  • All paths in YAML are absolute and files exist

Issue:

Despite the HAR being generated successfully, the compilation fails when trying to load the model script for optimization. The error seems to indicate that the script path is None or cannot be parsed, even though model_script_path in the YAML is correctly set.

Request:

Please advise how to resolve this issue so that I can successfully compile my YOLOv8 ONNX model for Hailo8L.

Guidance on Converting YOLOv8n.pt to .hef for Hailo-8L

Hello,

I have a custom-trained YOLOv8n model (best.pt) and I would like to convert it into a Hailo Executable File (.hef) for deployment on a Hailo-8L accelerator. Could you please provide guidance or point me to documentation/tutorials that explain:

  1. How to export a YOLOv8n PyTorch model to ONNX for Hailo.

  2. How to parse, optimize, and compile the model into a .hef file.

  3. How to run the resulting .hef on a Hailo-8L device.

Thank you for your support.

dataset_bird.yaml:

train: ../train/images
val: ../valid/images

nc: 1
names: ['1']

import torch
from ultralytics import YOLO
import os

def main():
    # Check available GPUs
    if torch.cuda.is_available():
        gpus = list(range(torch.cuda.device_count()))
        device = ",".join(map(str, gpus))  # e.g. "0,1,2,3,4,5,6,7"
    else:
        device = "cpu"
        gpus = []

    print(f"Using device: {device}, GPUs: {len(gpus)}")

    # Directory with previous training results
    checkpoint_dir = "runs/detect/train"

    # Find the last saved checkpoint to resume training from
    last_checkpoint = None
    if os.path.exists(checkpoint_dir):
        checkpoints = [f for f in os.listdir(checkpoint_dir) if f.endswith(".pt")]
        if checkpoints:
            last_checkpoint = os.path.join(checkpoint_dir, sorted(checkpoints)[-1])
            print(f"Resuming training from checkpoint: {last_checkpoint}")
        else:
            print("⚠️ No saved checkpoints found to resume from.")
    else:
        print("⚠️ Training results directory not found.")

    # Load the model
    model = YOLO("yolov8n.pt")  # or a path to your own model

    # If there is a saved checkpoint, continue from it
    if last_checkpoint:
        model.load(last_checkpoint)

    # Training
    model.train(
        data="dataset_bird.yaml",
        epochs=300,
        imgsz=640,
        batch=512,
        workers=8,
        device=device,
        optimizer="AdamW",
        lr0=0.0005,
        patience=50,
        cos_lr=True,
        augment=True,
        dropout=0.1,
    )

if __name__ == "__main__":
    main()

yolov8n.pt

(hailo) gpu@gpu-serv:~$ ./bash1
=== HailoMZ Diagnostics ===

[1] System information
Linux gpu-serv 6.8.0-79-generic #79~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Aug 15 16:54:53 UTC 2 x86_64 x86_64 x86_64 GNU/Linux

Python:
Python 3.10.12
/usr/bin/python3

[2] Conda environment

conda environments:

base /home/gpu/anaconda3
cronos /home/gpu/anaconda3/envs/cronos
deepseek-env /home/gpu/anaconda3/envs/deepseek-env
extract /home/gpu/anaconda3/envs/extract
fastapi /home/gpu/anaconda3/envs/fastapi
hailo * /home/gpu/anaconda3/envs/hailo
json /home/gpu/anaconda3/envs/json
lancedb-env /home/gpu/anaconda3/envs/lancedb-env
llama_env /home/gpu/anaconda3/envs/llama_env
manticore_env /home/gpu/anaconda3/envs/manticore_env
mongodb /home/gpu/anaconda3/envs/mongodb
new_env /home/gpu/anaconda3/envs/new_env
normalize /home/gpu/anaconda3/envs/normalize
sd-env /home/gpu/anaconda3/envs/sd-env
sqllite /home/gpu/anaconda3/envs/sqllite
ultralytics-env /home/gpu/anaconda3/envs/ultralytics-env
yolo /home/gpu/anaconda3/envs/yolo

packages in environment at /home/gpu/anaconda3/envs/hailo:

Name Version Build Channel

[3] HailoMZ and SDK versions
WARNING: Package(s) not found: hailo-sdk-client, hailo-sdk-common
Name: hailo-model-zoo
Version: 2.16.0
Summary: Hailo machine learning utilities and examples
Home-page: https://hailo.ai/
Author: Hailo team
Author-email: hailo_model_zoo@hailo.ai
License: MIT
Location: /home/gpu/hailo_model_zoo
Editable project location: /home/gpu/hailo_model_zoo
Requires: detection-tools, imageio, lap, matplotlib, motmetrics, numba, numpy, nuscenes-devkit, omegaconf, opencv-python, pillow, pycocotools, pyquaternion, scikit-image, scikit-learn, scipy, Shapely, termcolor, tqdm
Required-by:

[4] Model file paths
ONNX model:
-rwxr-xr-x 1 gpu gpu 12M Sep 15 06:24 /mnt/ramdisk/best_bird.onnx
HAR file:
-rw-rw-r-- 1 gpu gpu 12M Sep 15 07:39 /home/gpu/hailo_model_zoo/yolov8n_custom.har
HEF path from YAML:
hef_path: /mnt/ramdisk/best_bird.hef
hef_path: /mnt/ramdisk/best_bird.hef

[5] YAML configuration
base:
- base/yolov8.yaml

postprocessing:
  device_pre_post_layers:
    nms: true # enable NMS on the device
  hpp: true # hardware post-processing

network:
  network_name: yolov8n_custom
  onnx_model_path: /mnt/ramdisk/best_bird.onnx
  input_names: ["images"]
  output_names: ["output0"]

parser:
  nodes:
  - null # start node: Hailo picks up the input itself
  - output0 # end nodes: output0 can be left as is, Hailo will skip it

compile:
  hef_path: /mnt/ramdisk/best_bird.hef
  hw_arch: hailo8l
  model_script_path: /mnt/ramdisk/yolov8n_custom.har

dataset:
  path: /mnt/ramdisk/Dataset_bird_2/coco_calib2017.tfrecord
  calibration_entries: 64

paths:
  network_path:
  - /mnt/ramdisk/best_bird.onnx
  hef_path: /mnt/ramdisk/best_bird.hef
  dataset_path: /mnt/ramdisk/Dataset_bird_2

[6] Last 50 lines of compilation logs
Compilation logs not found

[7] File availability check
OK: /mnt/ramdisk/best_bird.onnx exists
OK: /home/gpu/hailo_model_zoo/yolov8n_custom.har exists
OK: /mnt/ramdisk/best_bird.yaml exists

[8] Compilation attempt (output redirected to a file)
hailomz compile --yaml /mnt/ramdisk/best_bird.yaml > ~/hailo_compile_output.log 2>&1
Output will be in ~/hailo_compile_output.log

(base) gpu@gpu-serv:/mnt/ramdisk$ ls
avidevelopment_ds Dataset_bird_2.zip
avidevelopment_ds.zip ds
best_bird.onnx empty.py
best_bird.pt hailo_dataflow_compiler-3.32.0-py3-none-linux_x86_64.whl
best_bird.yaml onnxruntime_1.py
best_dron.onnx __pycache__
best_dron.pt yolov8n_custom.har
Dataset_bird_2
(base) gpu@gpu-serv:/mnt/ramdisk$

(hailo) gpu@gpu-serv:~/hailo_model_zoo/hailo_model_zoo$ hailomz compile --yaml /mnt/ramdisk/best_bird.yaml
[info] No GPU chosen, Selected GPU 0
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1757912188.262525 2654954 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1757912188.271005 2654954 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
/home/gpu/.local/lib/python3.10/site-packages/matplotlib/projections/__init__.py:63: UserWarning: Unable to import Axes3D. This may be due to multiple versions of Matplotlib being installed (e.g. as a system package and as a pip package). As a result, the 3D projection is not available.
warnings.warn("Unable to import Axes3D. This may be due to multiple versions of "
/usr/lib/python3/dist-packages/pythran/tables.py:4520: FutureWarning: In the future np.bool will be defined as the corresponding NumPy scalar.
if not hasattr(numpy, method):
/usr/lib/python3/dist-packages/pythran/tables.py:4553: FutureWarning: In the future np.bytes will be defined as the corresponding NumPy scalar.
obj = getattr(themodule, elem)
Start run for network yolov8n_custom …
Initializing the hailo8 runner…
[info] Translation started on ONNX model yolov8n_custom
[info] Restored ONNX model yolov8n_custom (completion time: 00:00:00.08)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.26)
[info] Simplified ONNX model for a parsing retry attempt (completion time: 00:00:00.84)
[info] According to recommendations, retrying parsing with end node names: [‘/model.22/Concat_3’].
[info] Translation started on ONNX model yolov8n_custom
[info] Restored ONNX model yolov8n_custom (completion time: 00:00:00.05)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.22)
[info] NMS structure of yolov8 (or equivalent architecture) was detected.
[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv.
[info] Start nodes mapped from original model: ‘images’: ‘yolov8n_custom/input_layer1’.
[info] End nodes mapped from original model: ‘/model.22/Concat_3’.
[info] Translation completed on ONNX model yolov8n_custom (completion time: 00:00:00.88)
[info] Translation started on ONNX model yolov8n_custom
[info] Restored ONNX model yolov8n_custom (completion time: 00:00:00.05)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.24)
[info] NMS structure of yolov8 (or equivalent architecture) was detected.
[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv.
[info] Start nodes mapped from original model: ‘images’: ‘yolov8n_custom/input_layer1’.
[info] End nodes mapped from original model: ‘/model.22/cv2.0/cv2.0.2/Conv’, ‘/model.22/cv3.0/cv3.0.2/Conv’, ‘/model.22/cv2.1/cv2.1.2/Conv’, ‘/model.22/cv3.1/cv3.1.2/Conv’, ‘/model.22/cv2.2/cv2.2.2/Conv’, ‘/model.22/cv3.2/cv3.2.2/Conv’.
[info] Translation completed on ONNX model yolov8n_custom (completion time: 00:00:00.90)
[info] Appending model script commands to yolov8n_custom from string
[info] Added nms postprocess command to model script.
[info] Saved HAR to: /home/gpu/hailo_model_zoo/hailo_model_zoo/yolov8n_custom.har
Preparing calibration data…
Traceback (most recent call last):
File "/home/gpu/.local/bin/hailomz", line 33, in <module>
sys.exit(load_entry_point('hailo-model-zoo', 'console_scripts', 'hailomz')())
File "/home/gpu/hailo_model_zoo/hailo_model_zoo/main.py", line 122, in main
run(args)
File "/home/gpu/hailo_model_zoo/hailo_model_zoo/main.py", line 111, in run
return handlers[args.command](args)
File "/home/gpu/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 248, in compile
_ensure_optimized(runner, logger, args, network_info)
File "/home/gpu/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 91, in _ensure_optimized
optimize_model(
File "/home/gpu/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 351, in optimize_model
optimize_full_precision_model(runner, calib_feed_callback, logger, model_script, resize, input_conversion, classes)
File "/home/gpu/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 315, in optimize_full_precision_model
runner.load_model_script(model_script)
File "/home/gpu/.local/lib/python3.10/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
return func(self, *args, **kwargs)
File "/home/gpu/.local/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py", line 498, in load_model_script
raise InvalidArgumentsException(f"either model script is illegal or file path doesn't exist: {err_info}")
hailo_sdk_client.runner.exceptions.InvalidArgumentsException: either model script is illegal or file path doesn't exist: Model script parsing failed: 'NoneType' object has no attribute 'replace'. Model script file not found in location: None.
(hailo) gpu@gpu-serv:~/hailo_model_zoo/hailo_model_zoo$

best_bird.yaml

base:
- base/yolov8.yaml

postprocessing:
  device_pre_post_layers:
    nms: true # enable NMS on the device
  hpp: true # hardware post-processing

network:
  network_name: yolov8n_custom
  onnx_model_path: /mnt/ramdisk/best_bird.onnx
  input_names: ["images"]
  output_names: ["output0"]

parser:
  nodes:
  - null # start node: Hailo picks up the input itself
  - output0 # end nodes: output0 can be left as is, Hailo will skip it

compile:
  hef_path: /mnt/ramdisk/best_bird.hef
  hw_arch: hailo8l
  model_script_path: /mnt/ramdisk/yolov8n_custom.har

dataset:
  path: /mnt/ramdisk/Dataset_bird_2/coco_calib2017.tfrecord
  calibration_entries: 64

paths:
  network_path:
  - /mnt/ramdisk/best_bird.onnx
  hef_path: /mnt/ramdisk/best_bird.hef
  dataset_path: /mnt/ramdisk/Dataset_bird_2

(hailo) gpu@gpu-serv:~/hailo_model_zoo/hailo_model_zoo$ yolo export model=/mnt/ramdisk/best_bird.pt format=onnx opset=13 simplify=True dynamic=False
Ultralytics 8.3.195 Python-3.10.12 torch-2.1.0+cu118 CPU (Intel Xeon CPU E5-2698 v4 @ 2.20GHz)
Model summary (fused): 72 layers, 3,005,843 parameters, 0 gradients, 8.1 GFLOPs

PyTorch: starting from '/mnt/ramdisk/best_bird.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 5, 8400) (6.0 MB)

ONNX: starting export with onnx 1.16.0 opset 13...
ONNX: slimming with onnxslim 0.1.68...
ONNX: export success ✅ 1.5s, saved as '/mnt/ramdisk/best_bird.onnx' (11.7 MB)

Export complete (1.8s)
Results saved to /mnt/ramdisk
Predict: yolo predict task=detect model=/mnt/ramdisk/best_bird.onnx imgsz=640
Validate: yolo val task=detect model=/mnt/ramdisk/best_bird.onnx imgsz=640 data=dataset_bird.yaml
Visualize: https://netron.app
 Learn more at Model Export with Ultralytics YOLO - Ultralytics YOLO Docs
(hailo) gpu@gpu-serv:~/hailo_model_zoo/hailo_model_zoo$

Hi @Anton_Pivovarov

From a quick look, this seems to be the issue:

The HAR file is an intermediate format that contains the parsed, optimized or compiled model (it is not a script).

Instead, the model script (or alls script) is an additional file that can be used during the conversion to add pre/post-processing commands (normalization, NMS, ...) to the model.
When running the conversion from the Model Zoo with the hailomz tool, you can either pass the model script as a command-line argument or set it in the YAML. Please check the Model Zoo documentation (hailo_model_zoo/docs/YAML.rst at master · hailo-ai/hailo_model_zoo · GitHub) and the YOLOv8s example: you can see that the model script is set as below:

paths:
  network_path:
  - models_files/ObjectDetection/Detection-COCO/yolo/yolov8s/2023-02-02/yolov8s.onnx
  alls_script: yolov8s.alls
  url: https://hailo-model-zoo.s3.eu-west-2.amazonaws.com/ObjectDetection/Detection-COCO/yolo/yolov8s/2023-02-02/yolov8s.zip

Some default model scripts are available on GitHub.
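
As a quick sanity check (a minimal sketch, not an official recipe), a model script can also be test-loaded against the already-parsed HAR using the same SDK client that appears in the traceback above; the yolov8n_custom.alls name below is just a placeholder for whatever plain-text model script you create:

from hailo_sdk_client import ClientRunner

# Paths are assumptions based on the files mentioned in this thread.
har_path = "/home/gpu/hailo_model_zoo/yolov8n_custom.har"  # parsed HAR
alls_path = "/mnt/ramdisk/yolov8n_custom.alls"             # plain-text model script (placeholder name)

runner = ClientRunner(har=har_path)   # restore the parsed model
runner.load_model_script(alls_path)   # raises if the script is missing or malformed
print("Model script parsed successfully")

If that load succeeds, the same .alls file can be referenced from the YAML (paths: alls_script: ...) or passed to hailomz on the command line, instead of pointing model_script_path at the .har file.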


(yolo) gpu@gpu-serv:/mnt/ramdisk/to_hef$ hailomz compile --ckpt yolov8n.onnx --yaml yolov8n.yaml --hw-arch hailo8l
[info] No GPU chosen, Selected GPU 0
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1758260291.103973 264977 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1758260291.111630 264977 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
/home/gpu/.local/lib/python3.10/site-packages/matplotlib/projections/__init__.py:63: UserWarning: Unable to import Axes3D. This may be due to multiple versions of Matplotlib being installed (e.g. as a system package and as a pip package). As a result, the 3D projection is not available.
warnings.warn("Unable to import Axes3D. This may be due to multiple versions of "
/usr/lib/python3/dist-packages/pythran/tables.py:4520: FutureWarning: In the future np.bool will be defined as the corresponding NumPy scalar.
if not hasattr(numpy, method):
/usr/lib/python3/dist-packages/pythran/tables.py:4553: FutureWarning: In the future np.bytes will be defined as the corresponding NumPy scalar.
obj = getattr(themodule, elem)
Start run for network yolov8n …
Initializing the hailo8l runner…
[info] Translation started on ONNX model yolov8n
[info] Restored ONNX model yolov8n (completion time: 00:00:00.11)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.28)
[info] Simplified ONNX model for a parsing retry attempt (completion time: 00:00:00.79)
[info] According to recommendations, retrying parsing with end node names: [‘/model.22/Concat_3’].
[info] Translation started on ONNX model yolov8n
[info] Restored ONNX model yolov8n (completion time: 00:00:00.08)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.25)
[info] NMS structure of yolov8 (or equivalent architecture) was detected.
[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv.
[info] Start nodes mapped from original model: ‘images’: ‘yolov8n/input_layer1’.
[info] End nodes mapped from original model: ‘/model.22/Concat_3’.
[info] Translation completed on ONNX model yolov8n (completion time: 00:00:00.90)
[info] Translation started on ONNX model yolov8n
[info] Restored ONNX model yolov8n (completion time: 00:00:00.08)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.28)
[info] NMS structure of yolov8 (or equivalent architecture) was detected.
[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv.
[info] Start nodes mapped from original model: ‘images’: ‘yolov8n/input_layer1’.
[info] End nodes mapped from original model: ‘/model.22/cv2.0/cv2.0.2/Conv’, ‘/model.22/cv3.0/cv3.0.2/Conv’, ‘/model.22/cv2.1/cv2.1.2/Conv’, ‘/model.22/cv3.1/cv3.1.2/Conv’, ‘/model.22/cv2.2/cv2.2.2/Conv’, ‘/model.22/cv3.2/cv3.2.2/Conv’.
[info] Translation completed on ONNX model yolov8n (completion time: 00:00:00.91)
[info] Appending model script commands to yolov8n from string
[info] Added nms postprocess command to model script.
[info] Saved HAR to: /mnt/ramdisk/to_hef/yolov8n.har
Preparing calibration data…
[info] Loading model script commands to yolov8n from /mnt/ramdisk/to_hef/yolov8n.alls
Traceback (most recent call last):
File "/home/gpu/.local/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/script_parser/model_script_parser.py", line 381, in parse_script
script_grammar.parseString(input_script, parseAll=True)
File "/usr/lib/python3/dist-packages/pyparsing.py", line 1955, in parseString
raise exc
File "/usr/lib/python3/dist-packages/pyparsing.py", line 3814, in parseImpl
raise ParseException(instring, loc, self.errmsg, self)
pyparsing.ParseException: Expected end of text, found 'n' (at char 365), (line:9, col:1)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/gpu/.local/bin/hailomz", line 33, in <module>
sys.exit(load_entry_point('hailo-model-zoo', 'console_scripts', 'hailomz')())
File "/home/gpu/hailo_model_zoo/hailo_model_zoo/main.py", line 122, in main
run(args)
File "/home/gpu/hailo_model_zoo/hailo_model_zoo/main.py", line 111, in run
return handlers[args.command](args)
File "/home/gpu/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 248, in compile
_ensure_optimized(runner, logger, args, network_info)
File "/home/gpu/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 91, in _ensure_optimized
optimize_model(
File "/home/gpu/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 351, in optimize_model
optimize_full_precision_model(runner, calib_feed_callback, logger, model_script, resize, input_conversion, classes)
File "/home/gpu/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 315, in optimize_full_precision_model
runner.load_model_script(model_script)
File "/home/gpu/.local/lib/python3.10/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
return func(self, *args, **kwargs)
File "/home/gpu/.local/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py", line 502, in load_model_script
self._sdk_backend.load_model_script_from_file(model_script, append)
File "/home/gpu/.local/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 494, in load_model_script_from_file
self._script_parser.parse_script_from_file(model_script_path, nms_config, append)
File "/home/gpu/.local/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/script_parser/model_script_parser.py", line 312, in parse_script_from_file
return self.parse_script(f.read(), append, nms_config_file)
File "/home/gpu/.local/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/script_parser/model_script_parser.py", line 389, in parse_script
raise BackendScriptParserException(f"Parsing failed at:\n{e.markInputline()}")
hailo_sdk_client.sdk_backend.sdk_backend_exceptions.BackendScriptParserException: Parsing failed at:

!<nms_postprocess_end_nodes=[
(yolo) gpu@gpu-serv:/mnt/ramdisk/to_hef$

yolov8n.alls:

normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])
change_output_activation(conv42, sigmoid)
change_output_activation(conv53, sigmoid)
change_output_activation(conv63, sigmoid)
nms_postprocess("/mnt/ramdisk/to_hef/postprocess_config/yolov8n_nms_config.json", meta_arch=yolov8, engine=cpu)

allocator_param(width_splitter_defuse=disabled)

Correct variant:

nms_postprocess_end_nodes = [
    "/model.22/cv2.0/cv2.0.2/Conv",
    "/model.22/cv3.0/cv3.0.2/Conv",
    "/model.22/cv2.1/cv2.1.2/Conv",
    "/model.22/cv3.1/cv3.1.2/Conv",
    "/model.22/cv2.2/cv2.2.2/Conv",
    "/model.22/cv3.2/cv3.2.2/Conv"
]

yolov8n.yaml:

base:
- base/yolov8.yaml

postprocessing:
  device_pre_post_layers:
    nms: true
  hpp: true

network:
  network_name: yolov8n

paths:
  network_path:
  - /mnt/ramdisk/to_hef/yolov8n.onnx
  alls_script: /mnt/ramdisk/to_hef/yolov8n.alls
  url: https://hailo-model-zoo.s3.eu-west-2.amazonaws.com/ObjectDetection/Detection-COCO/yolo/yolov8n/2023-01-30/yolov8n.zip

info:
  task: object detection
  input_shape: 640x640x3
  output_shape: 80x5x100
  operations: 8.74G
  parameters: 3.2M
  framework: pytorch
  training_data: coco train2017
  validation_data: coco val2017
  eval_metric: mAP
  full_precision_result: 37.02
  source: GitHub - ultralytics/ultralytics: Ultralytics YOLO 🚀
  license_url: ultralytics/LICENSE at main · ultralytics/ultralytics · GitHub
  license_name: AGPL-3.0

yolov8n_nms_config.json:

{
  "nms_scores_th": 0.2,
  "nms_iou_th": 0.7,
  "image_dims": [640, 640],
  "max_proposals_per_class": 100,
  "classes": 80,
  "regression_length": 16,
  "background_removal": false,
  "bbox_decoders": [
    {
      "name": "yolov8n/bbox_decoder41",
      "stride": 8,
      "reg_layer": "yolov8n/conv41",
      "cls_layer": "yolov8n/conv42"
    },
    {
      "name": "yolov8n/bbox_decoder52",
      "stride": 16,
      "reg_layer": "yolov8n/conv52",
      "cls_layer": "yolov8n/conv53"
    },
    {
      "name": "yolov8n/bbox_decoder62",
      "stride": 32,
      "reg_layer": "yolov8n/conv62",
      "cls_layer": "yolov8n/conv63"
    }
  ]
}

Title: How to convert YOLOv8 .pt to Hailo8L format and run on Raspberry Pi 5

Body:
Hello,

I am trying to deploy a custom YOLOv8n model for bird detection on a Raspberry Pi 5 with a Hailo8L accelerator. Here is what I have tried so far:

  1. Converted .pt PyTorch model to ONNX (yolov8n.onnx) using Ultralytics export.

  2. Attempted to compile ONNX with Hailo Model Zoo (hailomz compile --ckpt yolov8n.onnx --yaml yolov8n.yaml --hw-arch hailo8l).

    • Initially received errors related to missing NMS configuration: Post-process config file isn't found …/yolov8n_nms_config.json

    • Tried adding nms_postprocess_end_nodes manually in the .alls script, but the parser failed with: BackendScriptParserException: Parsing failed at: !<nms_postprocess_end_nodes=[

Corrected the .alls syntax, yet the calibration step fails with a TensorFlow error: Tried to convert 'input' to a tensor and failed. Error: None values not supported

My questions:

  1. What is the proper workflow to convert a YOLOv8 .pt model to Hailo8L compatible format (HAR/HEF) for deployment?

  2. How should I correctly set up NMS in .alls and .json files for YOLOv8 on Hailo8L?

  3. How can I successfully run the compiled model on Raspberry Pi 5 for bird detection?

  4. Why is the workflow so complicated just to run object detection on Hailo? What is the reasoning behind all these format conversions and manual adjustments?

Any guidance, examples, or step-by-step instructions would be greatly appreciated.

This is the problem triggering the !<nms_postprocess_end_nodes=[ error.
There is no such command for the model script. Please check the Model Script section in the Hailo Dataflow Compiler User Guide.

Please clarify the content of the model script you are using right now. I see that you marked the lines above as "yolov8n.alls", but you also mentioned this:

normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])
change_output_activation(conv42, sigmoid)
change_output_activation(conv53, sigmoid)
change_output_activation(conv63, sigmoid)
nms_postprocess("/mnt/ramdisk/to_hef/postprocess_config/yolov8n_nms_config.json", meta_arch=yolov8, engine=cpu)

Which one are you using?

How to solve the issue

  • The end nodes are already extracted by the hailomz tool during parsing:

    End nodes mapped from original model: '/model.22/cv2.0/cv2.0.2/Conv',
    '/model.22/cv3.0/cv3.0.2/Conv', '/model.22/cv2.1/cv2.1.2/Conv',
    '/model.22/cv3.1/cv3.1.2/Conv', '/model.22/cv2.2/cv2.2.2/Conv',
    '/model.22/cv3.2/cv3.2.2/Conv'.
    
  • In the model script, enable the nms_postprocess command, pointing to the NMS config JSON:

    nms_postprocess("/mnt/ramdisk/to_hef/postprocess_config/yolov8n_nms_config.json", meta_arch=yolov8, engine=cpu)
    
  • Please check that the reg_layer and the cls_layer specified in the NMS config JSON match the layer names in your parsed HAR file. You can inspect the HAR with the Netron tool or with the hailo visualizer <HAR_PATH> command.

Suggestions
Since you are using a custom model, the Model Zoo flow (via the hailomz tool) may be less intuitive, since it requires the user to modify several fields (dataset, number of classes, ...).
I would recommend following the Hailo Dataflow Compiler User Guide and going through the Parsing/Optimization/Compilation steps one by one, using the DFC APIs/tools rather than hailomz. This will give you better control over what is going on during the conversion.
You can run the hailo tutorial command from within the Hailo AI SW Suite to access a Tutorial section that will guide you through the conversion process using the Python APIs.
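
For reference, a rough sketch of that DFC Python flow for a model like this one is below. The paths, end node names, model script, and the random calibration array are placeholders/assumptions taken from this thread; the exact APIs and arguments are documented in the Dataflow Compiler User Guide and the hailo tutorial notebooks:

import numpy as np
from hailo_sdk_client import ClientRunner

onnx_path = "/mnt/ramdisk/best_bird.onnx"  # placeholder: your exported ONNX
model_name = "yolov8n_custom"

# 1. Parsing: translate the ONNX model into a Hailo representation (HAR)
runner = ClientRunner(hw_arch="hailo8l")
runner.translate_onnx_model(
    onnx_path,
    model_name,
    start_node_names=["images"],
    end_node_names=[
        "/model.22/cv2.0/cv2.0.2/Conv", "/model.22/cv3.0/cv3.0.2/Conv",
        "/model.22/cv2.1/cv2.1.2/Conv", "/model.22/cv3.1/cv3.1.2/Conv",
        "/model.22/cv2.2/cv2.2.2/Conv", "/model.22/cv3.2/cv3.2.2/Conv",
    ],
    net_input_shapes={"images": [1, 3, 640, 640]},
)
runner.save_har("yolov8n_custom_parsed.har")

# 2. Optimization: apply the model script (normalization, NMS config, ...),
#    then quantize with calibration data. Replace the random array below with
#    ~64 real preprocessed images (NHWC, 640x640x3).
runner.load_model_script("/mnt/ramdisk/to_hef/yolov8n.alls")  # placeholder path
calib_data = np.random.randint(0, 256, size=(64, 640, 640, 3)).astype(np.float32)
runner.optimize(calib_data)
runner.save_har("yolov8n_custom_optimized.har")

# 3. Compilation: build the HEF for the Hailo-8L target
hef = runner.compile()
with open("/mnt/ramdisk/best_bird.hef", "wb") as f:
    f.write(hef)

On the Raspberry Pi 5, the resulting .hef is then loaded and run through HailoRT (for example via the HailoRT Python API).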