Subject: Unable to Compile YOLOv8 ONNX Model with HailoMZ – Model Script Not Found
Description:
Hello Hailo Support Team,
I am trying to compile a custom YOLOv8 ONNX model (best_bird.onnx) using Hailo Model Zoo (hailomz) with the following YAML configuration: /mnt/ramdisk/best_bird.yaml. The HAR file (yolov8n_custom.har) is successfully generated, but the compilation fails during the calibration step with the following error:
hailo_sdk_client.runner.exceptions.InvalidArgumentsException: either model script is illegal or file path doesn't exist: Model script parsing failed: 'NoneType' object has no attribute 'replace'. Model script file not found in location: None.
What I have tried:
- Verified that the ONNX file exists: /mnt/ramdisk/best_bird.onnx.
- Verified that the HAR file exists: /home/gpu/hailo_model_zoo/yolov8n_custom.har.
- Ensured that the YAML file points to the correct paths for onnx_model_path, hef_path, and model_script_path.
- Attempted running hailomz compile multiple times in a conda environment (hailo) with all required dependencies installed.
- Confirmed that TensorFlow is installed and the calibration dataset (calib2017) has been prepared; see the quick check after this list.
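For the last point, this is the quick check I use to confirm the calibration TFRecord is readable (a minimal sketch, run inside the hailo env; it only assumes TensorFlow can iterate the file):

import tensorflow as tf

# Count records in the calibration TFRecord to confirm it is readable
# and contains at least the 64 entries requested in the YAML.
path = "/mnt/ramdisk/Dataset_bird_2/coco_calib2017.tfrecord"
count = sum(1 for _ in tf.data.TFRecordDataset(path))
print(f"{path}: {count} records")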
Additional Information:
- Hailo Model Zoo version: 2.16.0
- Hailo SDK Client version: [insert version]
- Operating System: Ubuntu 22.04 (kernel 6.8.0-79-generic)
- Python version: 3.10
- GPU: Nvidia A2, CUDA drivers not used during compilation
- All paths in the YAML are absolute and all files exist
Issue:
Despite the HAR being generated successfully, the compilation fails when trying to load the model script for optimization. The error seems to indicate that the script path is None or cannot be parsed, even though model_script_path in the YAML is correctly set.
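To narrow this down, I can also try loading the HAR and a model script directly through the SDK Python API, bypassing hailomz. Below is a rough sketch based on the Dataflow Compiler tutorial examples; the .alls path is only a placeholder (I do not have a standalone model script yet), and the ClientRunner arguments may differ in my SDK version:

from hailo_sdk_client import ClientRunner

har_path = "/home/gpu/hailo_model_zoo/yolov8n_custom.har"
alls_path = "/mnt/ramdisk/yolov8n_custom.alls"  # placeholder model script path

# Load the parsed HAR and try to apply the model script directly,
# to see whether load_model_script itself fails outside of hailomz.
runner = ClientRunner(har=har_path)
runner.load_model_script(alls_path)
print("Model script loaded successfully")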
Request:
Please advise how to resolve this issue so that I can successfully compile my YOLOv8 ONNX model for Hailo8L.
Guidance on Converting YOLOv8n.pt to .hef for Hailo-8L
Hello,
I have a custom-trained YOLOv8n model (best.pt) and I would like to convert it into a Hailo Executable File (.hef) for deployment on a Hailo-8L accelerator. Could you please provide guidance or point me to documentation/tutorials that explain:
- How to export a YOLOv8n PyTorch model to ONNX for Hailo (my current export call is sketched after this list).
- How to parse, optimize, and compile the model into a .hef file.
- How to run the resulting .hef on a Hailo-8L device.
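For the first point, this is what I currently use to export the checkpoint (the ultralytics Python API; opset 13, static shapes, and simplification are my guesses at what the Hailo parser expects):

from ultralytics import YOLO

# Export the custom-trained YOLOv8n checkpoint to ONNX for Hailo parsing.
# opset=13, static 640x640 input and simplification are assumptions on my side.
model = YOLO("best.pt")
model.export(format="onnx", opset=13, simplify=True, dynamic=False, imgsz=640)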
Thank you for your support.

dataset_bird.yaml:
train: ../train/images
val: ../valid/images
nc: 1
names: ['1']
import os

import torch
from ultralytics import YOLO


def main():
    # Check available GPUs
    if torch.cuda.is_available():
        gpus = list(range(torch.cuda.device_count()))
        device = ",".join(map(str, gpus))  # e.g. "0,1,2,3,4,5,6,7"
    else:
        device = "cpu"
        gpus = []
    print(f"Using device: {device}, GPUs: {len(gpus)}")

    # Directory with the previous training results
    checkpoint_dir = "runs/detect/train"

    # Find the last saved checkpoint to resume training from
    last_checkpoint = None
    if os.path.exists(checkpoint_dir):
        checkpoints = [f for f in os.listdir(checkpoint_dir) if f.endswith(".pt")]
        if checkpoints:
            last_checkpoint = os.path.join(checkpoint_dir, sorted(checkpoints)[-1])
            print(f"Resuming training from: {last_checkpoint}")
        else:
            print("⚠️ No saved checkpoints found to resume from.")
    else:
        print("⚠️ Training results directory not found.")

    # Load the model
    model = YOLO("yolov8n.pt")  # or a path to your own model

    # If a saved checkpoint exists, continue from it
    if last_checkpoint:
        model.load(last_checkpoint)

    # Training
    model.train(
        data="dataset_bird.yaml",
        epochs=300,
        imgsz=640,
        batch=512,
        workers=8,
        device=device,
        optimizer="AdamW",
        lr0=0.0005,
        patience=50,
        cos_lr=True,
        augment=True,
        dropout=0.1,
    )


if __name__ == "__main__":
    main()
yolov8n.pt
(hailo) gpu@gpu-serv:~$ ./bash1
=== HailoMZ Diagnostics ===
[1] System information
Linux gpu-serv 6.8.0-79-generic #79~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Aug 15 16:54:53 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
Python:
Python 3.10.12
/usr/bin/python3
[2] Conda environment
conda environments:
base                     /home/gpu/anaconda3
cronos                   /home/gpu/anaconda3/envs/cronos
deepseek-env             /home/gpu/anaconda3/envs/deepseek-env
extract                  /home/gpu/anaconda3/envs/extract
fastapi                  /home/gpu/anaconda3/envs/fastapi
hailo                 *  /home/gpu/anaconda3/envs/hailo
json                     /home/gpu/anaconda3/envs/json
lancedb-env              /home/gpu/anaconda3/envs/lancedb-env
llama_env                /home/gpu/anaconda3/envs/llama_env
manticore_env            /home/gpu/anaconda3/envs/manticore_env
mongodb                  /home/gpu/anaconda3/envs/mongodb
new_env                  /home/gpu/anaconda3/envs/new_env
normalize                /home/gpu/anaconda3/envs/normalize
sd-env                   /home/gpu/anaconda3/envs/sd-env
sqllite                  /home/gpu/anaconda3/envs/sqllite
ultralytics-env          /home/gpu/anaconda3/envs/ultralytics-env
yolo                     /home/gpu/anaconda3/envs/yolo
# packages in environment at /home/gpu/anaconda3/envs/hailo:
# Name                    Version                   Build  Channel
[3] HailoMZ and SDK versions
WARNING: Package(s) not found: hailo-sdk-client, hailo-sdk-common
Name: hailo-model-zoo
Version: 2.16.0
Summary: Hailo machine learning utilities and examples
Home-page: https://hailo.ai/
Author: Hailo team
Author-email: hailo_model_zoo@hailo.ai
License: MIT
Location: /home/gpu/hailo_model_zoo
Editable project location: /home/gpu/hailo_model_zoo
Requires: detection-tools, imageio, lap, matplotlib, motmetrics, numba, numpy, nuscenes-devkit, omegaconf, opencv-python, pillow, pycocotools, pyquaternion, scikit-image, scikit-learn, scipy, Shapely, termcolor, tqdm
Required-by:
[4] Model file paths
ONNX model:
-rwxr-xr-x 1 gpu gpu 12M Sep 15 06:24 /mnt/ramdisk/best_bird.onnx
HAR file:
-rw-rw-r-- 1 gpu gpu 12M Sep 15 07:39 /home/gpu/hailo_model_zoo/yolov8n_custom.har
HEF path from the YAML:
hef_path: /mnt/ramdisk/best_bird.hef
hef_path: /mnt/ramdisk/best_bird.hef
[5] YAML configuration
base:
  - base/yolov8.yaml

postprocessing:
  device_pre_post_layers:
    nms: true        # enable NMS on the device
  hpp: true          # hardware post-processing

network:
  network_name: yolov8n_custom
  onnx_model_path: /mnt/ramdisk/best_bird.onnx
  input_names: ["images"]
  output_names: ["output0"]

parser:
  nodes:
    - null           # start node: Hailo picks the input itself
    - - output0      # end nodes: plain output0 is fine, Hailo will resolve it

compile:
  hef_path: /mnt/ramdisk/best_bird.hef
  hw_arch: hailo8l
  model_script_path: /mnt/ramdisk/yolov8n_custom.har

dataset:
  path: /mnt/ramdisk/Dataset_bird_2/coco_calib2017.tfrecord
  calibration_entries: 64

paths:
  network_path:
    - /mnt/ramdisk/best_bird.onnx
  hef_path: /mnt/ramdisk/best_bird.hef
  dataset_path: /mnt/ramdisk/Dataset_bird_2
[6] Last 50 lines of compilation logs
Compilation logs not found
[7] File availability check
OK: /mnt/ramdisk/best_bird.onnx exists
OK: /home/gpu/hailo_model_zoo/yolov8n_custom.har exists
OK: /mnt/ramdisk/best_bird.yaml exists
[8] Compilation attempt (output redirected to a file)
hailomz compile --yaml /mnt/ramdisk/best_bird.yaml > ~/hailo_compile_output.log 2>&1
Output will be written to ~/hailo_compile_output.log
(base) gpu@gpu-serv:/mnt/ramdisk$ ls
avidevelopment_ds      Dataset_bird_2.zip
avidevelopment_ds.zip  ds
best_bird.onnx         empty.py
best_bird.pt           hailo_dataflow_compiler-3.32.0-py3-none-linux_x86_64.whl
best_bird.yaml         onnxruntime_1.py
best_dron.onnx         __pycache__
best_dron.pt           yolov8n_custom.har
Dataset_bird_2
(base) gpu@gpu-serv:/mnt/ramdisk$
(hailo) gpu@gpu-serv:~/hailo_model_zoo/hailo_model_zoo$ hailomz compile --yaml /mnt/ramdisk/best_bird.yaml
[info] No GPU chosen, Selected GPU 0
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1757912188.262525 2654954 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1757912188.271005 2654954 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
/home/gpu/.local/lib/python3.10/site-packages/matplotlib/projections/__init__.py:63: UserWarning: Unable to import Axes3D. This may be due to multiple versions of Matplotlib being installed (e.g. as a system package and as a pip package). As a result, the 3D projection is not available.
warnings.warn("Unable to import Axes3D. This may be due to multiple versions of "
/usr/lib/python3/dist-packages/pythran/tables.py:4520: FutureWarning: In the future np.bool will be defined as the corresponding NumPy scalar.
if not hasattr(numpy, method):
/usr/lib/python3/dist-packages/pythran/tables.py:4553: FutureWarning: In the future np.bytes will be defined as the corresponding NumPy scalar.
obj = getattr(themodule, elem)
Start run for network yolov8n_custom …
Initializing the hailo8 runner…
[info] Translation started on ONNX model yolov8n_custom
[info] Restored ONNX model yolov8n_custom (completion time: 00:00:00.08)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.26)
[info] Simplified ONNX model for a parsing retry attempt (completion time: 00:00:00.84)
[info] According to recommendations, retrying parsing with end node names: [‘/model.22/Concat_3’].
[info] Translation started on ONNX model yolov8n_custom
[info] Restored ONNX model yolov8n_custom (completion time: 00:00:00.05)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.22)
[info] NMS structure of yolov8 (or equivalent architecture) was detected.
[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv.
[info] Start nodes mapped from original model: ‘images’: ‘yolov8n_custom/input_layer1’.
[info] End nodes mapped from original model: ‘/model.22/Concat_3’.
[info] Translation completed on ONNX model yolov8n_custom (completion time: 00:00:00.88)
[info] Translation started on ONNX model yolov8n_custom
[info] Restored ONNX model yolov8n_custom (completion time: 00:00:00.05)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.24)
[info] NMS structure of yolov8 (or equivalent architecture) was detected.
[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv.
[info] Start nodes mapped from original model: ‘images’: ‘yolov8n_custom/input_layer1’.
[info] End nodes mapped from original model: ‘/model.22/cv2.0/cv2.0.2/Conv’, ‘/model.22/cv3.0/cv3.0.2/Conv’, ‘/model.22/cv2.1/cv2.1.2/Conv’, ‘/model.22/cv3.1/cv3.1.2/Conv’, ‘/model.22/cv2.2/cv2.2.2/Conv’, ‘/model.22/cv3.2/cv3.2.2/Conv’.
[info] Translation completed on ONNX model yolov8n_custom (completion time: 00:00:00.90)
[info] Appending model script commands to yolov8n_custom from string
[info] Added nms postprocess command to model script.
[info] Saved HAR to: /home/gpu/hailo_model_zoo/hailo_model_zoo/yolov8n_custom.har
Preparing calibration data…
Traceback (most recent call last):
  File "/home/gpu/.local/bin/hailomz", line 33, in <module>
    sys.exit(load_entry_point('hailo-model-zoo', 'console_scripts', 'hailomz')())
  File "/home/gpu/hailo_model_zoo/hailo_model_zoo/main.py", line 122, in main
    run(args)
  File "/home/gpu/hailo_model_zoo/hailo_model_zoo/main.py", line 111, in run
    return handlers[args.command](args)
  File "/home/gpu/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 248, in compile
    _ensure_optimized(runner, logger, args, network_info)
  File "/home/gpu/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 91, in _ensure_optimized
    optimize_model(
  File "/home/gpu/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 351, in optimize_model
    optimize_full_precision_model(runner, calib_feed_callback, logger, model_script, resize, input_conversion, classes)
  File "/home/gpu/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 315, in optimize_full_precision_model
    runner.load_model_script(model_script)
  File "/home/gpu/.local/lib/python3.10/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
    return func(self, *args, **kwargs)
  File "/home/gpu/.local/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py", line 498, in load_model_script
    raise InvalidArgumentsException(f"either model script is illegal or file path doesn't exist: {err_info}")
hailo_sdk_client.runner.exceptions.InvalidArgumentsException: either model script is illegal or file path doesn't exist: Model script parsing failed: 'NoneType' object has no attribute 'replace'. Model script file not found in location: None.
(hailo) gpu@gpu-serv:~/hailo_model_zoo/hailo_model_zoo$
best_bird.yaml
base:
  - base/yolov8.yaml

postprocessing:
  device_pre_post_layers:
    nms: true        # enable NMS on the device
  hpp: true          # hardware post-processing

network:
  network_name: yolov8n_custom
  onnx_model_path: /mnt/ramdisk/best_bird.onnx
  input_names: ["images"]
  output_names: ["output0"]

parser:
  nodes:
    - null           # start node: Hailo picks the input itself
    - - output0      # end nodes: plain output0 is fine, Hailo will resolve it

compile:
  hef_path: /mnt/ramdisk/best_bird.hef
  hw_arch: hailo8l
  model_script_path: /mnt/ramdisk/yolov8n_custom.har

dataset:
  path: /mnt/ramdisk/Dataset_bird_2/coco_calib2017.tfrecord
  calibration_entries: 64

paths:
  network_path:
    - /mnt/ramdisk/best_bird.onnx
  hef_path: /mnt/ramdisk/best_bird.hef
  dataset_path: /mnt/ramdisk/Dataset_bird_2
(hailo) gpu@gpu-serv:~/hailo_model_zoo/hailo_model_zoo$ yolo export model=/mnt/ramdisk/best_bird.pt format=onnx opset=13 simplify=True dynamic=False
Ultralytics 8.3.195  Python-3.10.12 torch-2.1.0+cu118 CPU (Intel Xeon CPU E5-2698 v4 @ 2.20GHz)
Model summary (fused): 72 layers, 3,005,843 parameters, 0 gradients, 8.1 GFLOPs
PyTorch: starting from ‘/mnt/ramdisk/best_bird.pt’ with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 5, 8400) (6.0 MB)
ONNX: starting export with onnx 1.16.0 opset 13…
ONNX: slimming with onnxslim 0.1.68…
ONNX: export success 1.5s, saved as '/mnt/ramdisk/best_bird.onnx' (11.7 MB)
Export complete (1.8s)
Results saved to /mnt/ramdisk
Predict:         yolo predict task=detect model=/mnt/ramdisk/best_bird.onnx imgsz=640
Validate:        yolo val task=detect model=/mnt/ramdisk/best_bird.onnx imgsz=640 data=dataset_bird.yaml
Visualize:       https://netron.app
Learn more at https://docs.ultralytics.com/modes/export/
(hailo) gpu@gpu-serv:~/hailo_model_zoo/hailo_model_zoo$
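For completeness, I sanity-check the exported ONNX with onnxruntime before parsing (a minimal sketch; it assumes the input tensor is named "images" and is 1x3x640x640, matching the export output above):

import numpy as np
import onnxruntime as ort

# Run one dummy frame through the exported model and print the output
# shapes; the export log above reports (1, 5, 8400) for this model.
session = ort.InferenceSession("/mnt/ramdisk/best_bird.onnx", providers=["CPUExecutionProvider"])
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)
outputs = session.run(None, {"images": dummy})
print([o.shape for o in outputs])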