[error] Failed to produce compiled graph [error] TypeError: expected str, bytes or os.PathLike object, not NoneType

Hi,

I am working in a Google Colab notebook and running the Hailo DFC (hailo_dataflow_compiler-3.33.0-py3-none-linux_x86_64.whl) on Ubuntu 22 with Python 3.10. I get the following error when running hailomz compile.

command:
!hailomz compile --ckpt /content/best.onnx --calib-path /content/calib --yaml /usr/local/lib/python3.10/dist-packages/hailo_model_zoo/cfg/networks/yolov8n.yaml --classes 52 --hw-arch hailo8

output:
Start run for network yolov8n …
Initializing the hailo8 runner…
[info] Translation started on ONNX model yolov8n
[info] Restored ONNX model yolov8n (completion time: 00:00:00.07)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.27)
[info] Simplified ONNX model for a parsing retry attempt (completion time: 00:00:00.83)
[warning] ONNX shape inference failed: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Failed to load model with error: /onnxruntime_src/onnxruntime/core/graph/model_load_utils.h:46 void onnxruntime::model_load_utils::ValidateOpsetForDomain(const std::unordered_map<std::__cxx11::basic_string, int>&, const onnxruntime::logging::Logger&, bool, const std::string&, int) ONNX Runtime only *guarantees* support for models stamped with official released onnx opset versions. Opset 22 is under development and support for this is limited. The operator schemas and or other functionality may change before next ONNX release and in this case ONNX Runtime will not guarantee backward compatibility. Current official support for domain ai.onnx is till opset 21.

[info] According to recommendations, retrying parsing with end node names: ['/model.22/Concat_3'].
[info] Translation started on ONNX model yolov8n
[info] Restored ONNX model yolov8n (completion time: 00:00:00.04)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.23)
[info] NMS structure of yolov8 (or equivalent architecture) was detected.
[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv.
[info] Start nodes mapped from original model: 'images': 'yolov8n/input_layer1'.
[info] End nodes mapped from original model: '/model.22/Concat_3'.
[info] Translation completed on ONNX model yolov8n (completion time: 00:00:01.21)
[info] Translation started on ONNX model yolov8n
[info] Restored ONNX model yolov8n (completion time: 00:00:00.04)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.27)
[info] NMS structure of yolov8 (or equivalent architecture) was detected.
[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv.
[info] Start nodes mapped from original model: 'images': 'yolov8n/input_layer1'.
[info] End nodes mapped from original model: '/model.22/cv2.0/cv2.0.2/Conv', '/model.22/cv3.0/cv3.0.2/Conv', '/model.22/cv2.1/cv2.1.2/Conv', '/model.22/cv3.1/cv3.1.2/Conv', '/model.22/cv2.2/cv2.2.2/Conv', '/model.22/cv3.2/cv3.2.2/Conv'.
[info] Translation completed on ONNX model yolov8n (completion time: 00:00:00.97)
[info] Appending model script commands to yolov8n from string
[info] Added nms postprocess command to model script.
[info] Saved HAR to: /content/yolov8n.har
Preparing calibration data…
[info] Loading model script commands to yolov8n from /usr/local/lib/python3.10/dist-packages/hailo_model_zoo/cfg/alls/generic/yolov8n.alls
[info] Loading model script commands to yolov8n from string
[info] Found model with 3 input channels, using real RGB images for calibration instead of sampling random data.
[info] Starting Model Optimization
[warning] Reducing optimization level to 0 (the accuracy won't be optimized and compression won't be used) because there's no available GPU
[warning] Running model optimization with zero level of optimization is not recommended for production use and might lead to suboptimal accuracy results
[info] Model received quantization params from the hn
[info] MatmulDecompose skipped
[info] Starting Mixed Precision
[info] Model Optimization Algorithm Mixed Precision is done (completion time is 00:00:00.55)
[info] LayerNorm Decomposition skipped
[info] Starting Statistics Collector
[info] Using dataset with 64 entries for calibration
Calibration: 100% 64/64 [00:31<00:00, 2.01entries/s]
[info] Model Optimization Algorithm Statistics Collector is done (completion time is 00:00:33.27)
[info] Starting Fix zp_comp Encoding
[info] Model Optimization Algorithm Fix zp_comp Encoding is done (completion time is 00:00:00.00)
[info] Matmul Equalization skipped
[info] Starting MatmulDecomposeFix
[info] Model Optimization Algorithm MatmulDecomposeFix is done (completion time is 00:00:00.00)
[info] Finetune encoding skipped
[info] Bias Correction skipped
[info] Adaround skipped
[info] Quantization-Aware Fine-Tuning skipped
[info] Layer Noise Analysis skipped
[info] Model Optimization is done
[info] Saved HAR to: /content/yolov8n.har
[info] Loading model script commands to yolov8n from /usr/local/lib/python3.10/dist-packages/hailo_model_zoo/cfg/alls/generic/yolov8n.alls
[info] To achieve optimal performance, set the compiler_optimization_level to "max" by adding performance_param(compiler_optimization_level=max) to the model script. Note that this may increase compilation time.
[error] Failed to produce compiled graph
[error] TypeError: expected str, bytes or os.PathLike object, not NoneType

These errors appear, and I do not know why.

When I run the same command locally on my computer with a GTX 1050 Ti GPU, the output is the following:

(convert_to_hailo) dharam@DESKTOP-4L30DS0:~$ hailomz compile --ckpt data/hailo_docker/shared_with_docker/doc/best.onnx --calib-path data/hailo_docker/shared_with_docker/doc/calib/ --yaml anaconda3/envs/convert_to_hailo/lib/python3.10/site-packages/hailo_model_zoo/cfg/networks/yolov8n.yaml --classes 52 --hw-arch hailo8

[info] No GPU chosen, Selected GPU 0
Start run for network yolov8n …
Initializing the hailo8 runner…
[info] Translation started on ONNX model yolov8n
[info] Restored ONNX model yolov8n (completion time: 00:00:00.11)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.34)
[info] Simplified ONNX model for a parsing retry attempt (completion time: 00:00:01.01)
Traceback (most recent call last):
  File "/home/dharam/anaconda3/envs/convert_to_hailo/bin/hailomz", line 7, in <module>
    sys.exit(main())
  File "/home/dharam/anaconda3/envs/convert_to_hailo/lib/python3.10/site-packages/hailo_model_zoo/main.py", line 122, in main
    run(args)
  File "/home/dharam/anaconda3/envs/convert_to_hailo/lib/python3.10/site-packages/hailo_model_zoo/main.py", line 111, in run
    return handlers[args.command](args)
  File "/home/dharam/anaconda3/envs/convert_to_hailo/lib/python3.10/site-packages/hailo_model_zoo/main_driver.py", line 248, in compile
    _ensure_optimized(runner, logger, args, network_info)
  File "/home/dharam/anaconda3/envs/convert_to_hailo/lib/python3.10/site-packages/hailo_model_zoo/main_driver.py", line 73, in _ensure_optimized
    _ensure_parsed(runner, logger, network_info, args)
  File "/home/dharam/anaconda3/envs/convert_to_hailo/lib/python3.10/site-packages/hailo_model_zoo/main_driver.py", line 108, in _ensure_parsed
    parse_model(runner, network_info, ckpt_path=args.ckpt_path, results_dir=args.results_dir, logger=logger)
  File "/home/dharam/anaconda3/envs/convert_to_hailo/lib/python3.10/site-packages/hailo_model_zoo/core/main_utils.py", line 126, in parse_model
    raise Exception(f"Encountered error during parsing: {err}") from None
Exception: Encountered error during parsing: '/Cast_5_output_0_value'

Please help.

This does not look right. The YOLOv8n model has six end-nodes, not just one. If you are using your own YOLOv8n model, I recommend using the regular model conversion flow instead of the Model Zoo flow.

If you haven't done so already, please work through the tutorials included in the Hailo AI Software Suite. Run the following command to start a Jupyter Notebook server with a notebook for each step of the workflow:

hailo tutorial

Review your model in Netron and compare it to the YOLOv8n in the Model Zoo. Identify the equivalent end nodes and parse your model using those. This will allow you to use the HailoRT post-processing (NMS).

You can obtain a YOLOv8n HAR file from the Model Zoo by running:

hailomz parse yolov8n

You can open this file in Netron as well.

Hi @KlausK ,

There is some good news.

I re-exported the ONNX model from my .pt model in Google Colab using newer versions: Ultralytics 8.3.225, Python 3.12.12, and torch 2.8.0+cu126 (CPU).

I then downloaded it to my own PC, ran the compile command, and it works there:

(convert_to_hailo) dharam@DESKTOP-4L30DS0:~$ hailomz compile --ckpt data/hailo_docker/shared_with_docker/doc/best3.onnx --calib-path data/hailo_docker/shared_with_docker/doc/calib/ --yaml anaconda3/envs/convert_to_hailo/lib/python3.10/site-packages/hailo_model_zoo/cfg/networks/yolov8n.yaml --classes 52 --hw-arch hailo8
[info] No GPU chosen, Selected GPU 0
Start run for network yolov8n …
Initializing the hailo8 runner…
[info] Translation started on ONNX model yolov8n
[info] Restored ONNX model yolov8n (completion time: 00:00:00.10)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.34)
[info] Simplified ONNX model for a parsing retry attempt (completion time: -1:59:58.32)
[info] According to recommendations, retrying parsing with end node names: ['/model.22/Concat_3'].
[info] Translation started on ONNX model yolov8n
[info] Restored ONNX model yolov8n (completion time: 00:00:00.03)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.21)
[info] NMS structure of yolov8 (or equivalent architecture) was detected.
[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv.
[info] Start nodes mapped from original model: 'images': 'yolov8n/input_layer1'.
[info] End nodes mapped from original model: '/model.22/Concat_3'.
[info] Translation completed on ONNX model yolov8n (completion time: 00:00:00.76)
[info] Translation started on ONNX model yolov8n
[info] Restored ONNX model yolov8n (completion time: 00:00:00.04)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.22)
[info] NMS structure of yolov8 (or equivalent architecture) was detected.
[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv.
[info] Start nodes mapped from original model: 'images': 'yolov8n/input_layer1'.
[info] End nodes mapped from original model: '/model.22/cv2.0/cv2.0.2/Conv', '/model.22/cv3.0/cv3.0.2/Conv', '/model.22/cv2.1/cv2.1.2/Conv', '/model.22/cv3.1/cv3.1.2/Conv', '/model.22/cv2.2/cv2.2.2/Conv', '/model.22/cv3.2/cv3.2.2/Conv'.
[info] Translation completed on ONNX model yolov8n (completion time: 00:00:00.76)
[info] Appending model script commands to yolov8n from string
[info] Added nms postprocess command to model script.
[info] Saved HAR to: /home/dharam/yolov8n.har
Preparing calibration data…
[info] Loading model script commands to yolov8n from /home/dharam/anaconda3/envs/convert_to_hailo/lib/python3.10/site-packages/hailo_model_zoo/cfg/alls/generic/yolov8n.alls
[info] Loading model script commands to yolov8n from string
[info] Found model with 3 input channels, using real RGB images for calibration instead of sampling random data.
[info] Starting Model Optimization
[warning] Reducing optimization level to 0 (the accuracy won't be optimized and compression won't be used) because there's no available GPU
[warning] Running model optimization with zero level of optimization is not recommended for production use and might lead to suboptimal accuracy results
[info] Model received quantization params from the hn
[info] MatmulDecompose skipped
[info] Starting Mixed Precision
[info] Model Optimization Algorithm Mixed Precision is done (completion time is 00:00:00.44)
[info] LayerNorm Decomposition skipped
[info] Starting Statistics Collector
[info] Using dataset with 64 entries for calibration
Calibration: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 64/64 [00:28<00:00, 2.22entries/s]
[info] Model Optimization Algorithm Statistics Collector is done (completion time is 00:00:30.10)
[info] Starting Fix zp_comp Encoding
[info] Model Optimization Algorithm Fix zp_comp Encoding is done (completion time is 00:00:00.00)
[info] Matmul Equalization skipped
[info] Starting MatmulDecomposeFix
[info] Model Optimization Algorithm MatmulDecomposeFix is done (completion time is 00:00:00.00)
[info] Finetune encoding skipped
[info] Bias Correction skipped
[info] Adaround skipped
[info] Quantization-Aware Fine-Tuning skipped
[info] Layer Noise Analysis skipped
[info] Model Optimization is done
[info] Saved HAR to: /home/dharam/yolov8n.har
[info] Loading model script commands to yolov8n from /home/dharam/anaconda3/envs/convert_to_hailo/lib/python3.10/site-packages/hailo_model_zoo/cfg/alls/generic/yolov8n.alls
[info] To achieve optimal performance, set the compiler_optimization_level to "max" by adding performance_param(compiler_optimization_level=max) to the model script. Note that this may increase compilation time.
[info] Loading network parameters
[info] Starting Hailo allocation and compilation flow
[info] Adding an output layer after conv41
[info] Adding an output layer after conv42
[info] Adding an output layer after conv52
[info] Adding an output layer after conv53
[info] Adding an output layer after conv62
[info] Adding an output layer after conv63
[info] Building optimization options for network layers…
[info] Successfully built optimization options - 3s 547ms
[info] Trying to compile the network in a single context
[info] Using Single-context flow
[info] Resources optimization params: max_control_utilization=75%, max_compute_utilization=75%, max_compute_16bit_utilization=75%, max_memory_utilization (weights)=75%, max_input_aligner_utilization=75%, max_apu_utilization=75%
[info] Validating layers feasibility

Validating yolov8n_context_0 layer by layer (100%)

● Finished

[info] Layers feasibility validated successfully
[info] Running resources allocation (mapping) flow, time per context: 59m 59s
Context:0/0 Iteration 4: Trying parallel mapping…
cluster_0 cluster_1 cluster_2 cluster_3 cluster_4 cluster_5 cluster_6 cluster_7 prepost
worker0 * * * * * * * * V
worker1 V V V V V V V V V
worker2 V V V V V V V V V
worker3 V V V V V V V V V

00:05
Reverts on cluster mapping: 0
Reverts on inter-cluster connectivity: 0
Reverts on pre-mapping validation: 0
Reverts on split failed: 0

[info] Iterations: 4
Reverts on cluster mapping: 0
Reverts on inter-cluster connectivity: 1
Reverts on pre-mapping validation: 0
Reverts on split failed: 0
[info] +-----------+---------------------+---------------------+--------------------+
[info] | Cluster   | Control Utilization | Compute Utilization | Memory Utilization |
[info] +-----------+---------------------+---------------------+--------------------+
[info] | cluster_0 | 68.8%               | 45.3%               | 19.5%              |
[info] | cluster_1 | 100%                | 43.8%               | 35.2%              |
[info] | cluster_2 | 75%                 | 43.8%               | 37.5%              |
[info] | cluster_3 | 100%                | 79.7%               | 20.3%              |
[info] | cluster_4 | 81.3%               | 37.5%               | 21.9%              |
[info] | cluster_5 | 81.3%               | 51.6%               | 23.4%              |
[info] | cluster_6 | 43.8%               | 15.6%               | 39.8%              |
[info] | cluster_7 | 50%                 | 28.1%               | 23.4%              |
[info] +-----------+---------------------+---------------------+--------------------+
[info] | Total     | 75%                 | 43.2%               | 27.6%              |
[info] +-----------+---------------------+---------------------+--------------------+
[info] Successful Mapping (allocation time: 23s)
[info] Compiling kernels of yolov8n_context_0…
[info] Bandwidth of model inputs: 9.375 Mbps, outputs: 7.43408 Mbps (for a single frame)
[info] Bandwidth of DDR buffers: 0.0 Mbps (for a single frame)
[info] Bandwidth of inter context tensors: 0.0 Mbps (for a single frame)
[info] Building HEF…
[info] Successful Compilation (compilation time: 10s)
[info] Saved HAR to: /home/dharam/yolov8n.har
HEF file written to yolov8n.hef

**I do not know the reason!**

Hi @KlausK,

There is bad news as well.

I tried to run the same setup in Google Colab and it does not work.
I custom-trained the YOLO model and checked it with netron.app; it has one input node and one end node.

Start run for network yolov8n …
Initializing the hailo8 runner…
[info] Translation started on ONNX model yolov8n
[info] Restored ONNX model yolov8n (completion time: 00:00:00.14)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.66)
[info] Simplified ONNX model for a parsing retry attempt (completion time: 00:00:01.88)
[info] According to recommendations, retrying parsing with end node names: ['/model.22/Concat_3'].
[info] Translation started on ONNX model yolov8n
[info] Restored ONNX model yolov8n (completion time: 00:00:00.07)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.39)
[info] NMS structure of yolov8 (or equivalent architecture) was detected.
[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv.
[info] Start nodes mapped from original model: 'images': 'yolov8n/input_layer1'.
[info] End nodes mapped from original model: '/model.22/Concat_3'.
[info] Translation completed on ONNX model yolov8n (completion time: 00:00:01.56)
[info] Translation started on ONNX model yolov8n
[info] Restored ONNX model yolov8n (completion time: 00:00:00.05)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.38)
[info] NMS structure of yolov8 (or equivalent architecture) was detected.
[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv.
[info] Start nodes mapped from original model: 'images': 'yolov8n/input_layer1'.
[info] End nodes mapped from original model: '/model.22/cv2.0/cv2.0.2/Conv', '/model.22/cv3.0/cv3.0.2/Conv', '/model.22/cv2.1/cv2.1.2/Conv', '/model.22/cv3.1/cv3.1.2/Conv', '/model.22/cv2.2/cv2.2.2/Conv', '/model.22/cv3.2/cv3.2.2/Conv'.
[info] Translation completed on ONNX model yolov8n (completion time: 00:00:01.23)
[info] Appending model script commands to yolov8n from string
[info] Added nms postprocess command to model script.
[info] Saved HAR to: /content/yolov8n.har
Preparing calibration data…
[info] Loading model script commands to yolov8n from /usr/local/lib/python3.10/dist-packages/hailo_model_zoo/cfg/alls/generic/yolov8n.alls
[info] Loading model script commands to yolov8n from string
[info] Found model with 3 input channels, using real RGB images for calibration instead of sampling random data.
[info] Starting Model Optimization
[warning] Reducing optimization level to 0 (the accuracy won't be optimized and compression won't be used) because there's no available GPU
[warning] Running model optimization with zero level of optimization is not recommended for production use and might lead to suboptimal accuracy results
[info] Model received quantization params from the hn
[info] MatmulDecompose skipped
[info] Starting Mixed Precision
[info] Model Optimization Algorithm Mixed Precision is done (completion time is 00:00:00.67)
[info] LayerNorm Decomposition skipped
[info] Starting Statistics Collector
[info] Using dataset with 64 entries for calibration
Calibration: 100% 64/64 [01:11<00:00, 1.11s/entries]
[info] Model Optimization Algorithm Statistics Collector is done (completion time is 00:01:12.92)
[info] Starting Fix zp_comp Encoding
[info] Model Optimization Algorithm Fix zp_comp Encoding is done (completion time is 00:00:00.00)
[info] Matmul Equalization skipped
[info] Starting MatmulDecomposeFix
[info] Model Optimization Algorithm MatmulDecomposeFix is done (completion time is 00:00:00.00)
[info] Finetune encoding skipped
[info] Bias Correction skipped
[info] Adaround skipped
[info] Quantization-Aware Fine-Tuning skipped
[info] Layer Noise Analysis skipped
[info] Model Optimization is done
[info] Saved HAR to: /content/yolov8n.har
[info] Loading model script commands to yolov8n from /usr/local/lib/python3.10/dist-packages/hailo_model_zoo/cfg/alls/generic/yolov8n.alls
[info] To achieve optimal performance, set the compiler_optimization_level to "max" by adding performance_param(compiler_optimization_level=max) to the model script. Note that this may increase compilation time.
[error] Failed to produce compiled graph
[error] TypeError: expected str, bytes or os.PathLike object, not NoneType

Here is the link to my Google Colab notebook.

Hi @KlausK , @omria ;

I ran this command in Colab and there is still an error:

!hailomz compile --ckpt /content/best.onnx --calib-path /content/calib --yaml /usr/local/lib/python3.10/dist-packages/hailo_model_zoo/cfg/networks/yolov8n.yaml --classes 52 --hw-arch hailo8 --end-node-names /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv

In file included from /usr/local/lib/python3.10/dist-packages/numpy/core/include/numpy/ndarraytypes.h:1929,
from /usr/local/lib/python3.10/dist-packages/numpy/core/include/numpy/ndarrayobject.h:12,
from /usr/local/lib/python3.10/dist-packages/numpy/core/include/numpy/arrayobject.h:5,
from /root/.pyxbld/temp.linux-x86_64-cpython-310/usr/local/lib/python3.10/dist-packages/hailo_model_zoo/core/postprocessing/cython_utils/cython_nms.c:1144:
/usr/local/lib/python3.10/dist-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:17:2: warning: #warning "Using deprecated NumPy API, disable it with " "#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
17 | #warning "Using deprecated NumPy API, disable it with " \
| ^~~~~~~
Start run for network yolov8n …
Initializing the hailo8 runner…
[info] Translation started on ONNX model yolov8n
[info] Restored ONNX model yolov8n (completion time: 00:00:00.07)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.35)
[info] NMS structure of yolov8 (or equivalent architecture) was detected.
[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv.
[info] Start nodes mapped from original model: 'images': 'yolov8n/input_layer1'.
[info] End nodes mapped from original model: '/model.22/cv2.0/cv2.0.2/Conv', '/model.22/cv3.0/cv3.0.2/Conv', '/model.22/cv2.1/cv2.1.2/Conv', '/model.22/cv3.1/cv3.1.2/Conv', '/model.22/cv2.2/cv2.2.2/Conv', '/model.22/cv3.2/cv3.2.2/Conv'.
[info] Translation completed on ONNX model yolov8n (completion time: 00:00:01.05)
[info] Appending model script commands to yolov8n from string
[info] Added nms postprocess command to model script.
[info] Saved HAR to: /content/yolov8n.har
Preparing calibration data…
[info] Loading model script commands to yolov8n from /usr/local/lib/python3.10/dist-packages/hailo_model_zoo/cfg/alls/generic/yolov8n.alls
[info] Loading model script commands to yolov8n from string
[info] Found model with 3 input channels, using real RGB images for calibration instead of sampling random data.
[info] Starting Model Optimization
[warning] Reducing optimization level to 0 (the accuracy won't be optimized and compression won't be used) because there's no available GPU
[warning] Running model optimization with zero level of optimization is not recommended for production use and might lead to suboptimal accuracy results
[info] Model received quantization params from the hn
[info] MatmulDecompose skipped
[info] Starting Mixed Precision
[info] Model Optimization Algorithm Mixed Precision is done (completion time is 00:00:00.55)
[info] LayerNorm Decomposition skipped
[info] Starting Statistics Collector
[info] Using dataset with 64 entries for calibration
Calibration: 100% 64/64 [00:32<00:00, 1.94entries/s]
[info] Model Optimization Algorithm Statistics Collector is done (completion time is 00:00:34.34)
[info] Starting Fix zp_comp Encoding
[info] Model Optimization Algorithm Fix zp_comp Encoding is done (completion time is 00:00:00.00)
[info] Matmul Equalization skipped
[info] Starting MatmulDecomposeFix
[info] Model Optimization Algorithm MatmulDecomposeFix is done (completion time is 00:00:00.00)
[info] Finetune encoding skipped
[info] Bias Correction skipped
[info] Adaround skipped
[info] Quantization-Aware Fine-Tuning skipped
[info] Layer Noise Analysis skipped
[info] Model Optimization is done
[info] Saved HAR to: /content/yolov8n.har
[info] Loading model script commands to yolov8n from /usr/local/lib/python3.10/dist-packages/hailo_model_zoo/cfg/alls/generic/yolov8n.alls
[info] To achieve optimal performance, set the compiler_optimization_level to "max" by adding performance_param(compiler_optimization_level=max) to the model script. Note that this may increase compilation time.
[error] Failed to produce compiled graph
[error] TypeError: expected str, bytes or os.PathLike object, not NoneType

Please help. This is the link to a reproducible notebook: Google Colab

Thank you.

Hey @Dharmendra_Sharma,

Quick tip first: Since you’re working with a custom ONNX model, I’d recommend using the Dataflow Compiler (DFC) directly rather than the Model Zoo. The Model Zoo works great for our ready-made ONNX models, but the DFC gives you more flexibility when working with custom models that differ from our standard implementations.

About your error:

Your conversion is progressing nicely through parsing and optimization, but it’s hitting a snag during compilation with this message:

[error] Failed to produce compiled graph
[error] TypeError: expected str, bytes or os.PathLike object, not NoneType

This typically points to an environment configuration issue or a missing/incorrect path. Here are some troubleshooting steps that have helped others with similar issues:

1. Set up a Python virtual environment
Running the DFC outside a virtual environment can cause path and dependency conflicts. Try creating a fresh environment:

python -m venv .venv
source .venv/bin/activate
# Reinstall DFC and HailoRT wheels in this environment

Then run your conversion again from within this environment.

2. Verify all your file paths
Double-check that your paths for --ckpt, --calib-path, and --yaml are correct and the files are accessible.

3. Check your calibration images
Make sure your calibration directory (/content/calib) contains valid images in the correct format and size. Empty directories or corrupted files can cause the process to fail.

4. Review your YAML configuration
If you’re using a custom class count, ensure your YAML reflects this (e.g., output_shape: 52x5x100 for 52 classes). Also verify that network_path points to your ONNX file, and leave url empty if you’re not using it.

5. Start fresh
Sometimes leftover files from previous runs can interfere. Try removing any .har or temporary files and running the complete pipeline again from scratch.

Additional context:
This error can also occur if the model parsing step fails silently, which might be due to an unsupported ONNX model structure, missing layers, or incorrect file paths.
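
For what it's worth, this exact TypeError is what CPython raises whenever `None` reaches a filesystem call that expects a path, which is why an unset output or config path somewhere in the flow is the prime suspect. A minimal stdlib reproduction (not the compiler's actual code):

```python
import os

try:
    # open(), os.stat(), and any other os.PathLike consumer fail the same way
    os.fspath(None)
except TypeError as err:
    print(err)  # expected str, bytes or os.PathLike object, not NoneType
```
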

Give these a try and let me know if you’re still running into issues!


Hi @omria ,

I did all the steps as you wrote (I even modified the number of classes in the .yaml).

I still have the same error.

I have a reproducible Colab link here: Google Colab

Can you please have a look at this?

Best regards,

Hey @Dharmendra_Sharma,

I took a look at your Google Colab notebook, and I think I found the main issue: you're mixing the Model Zoo (MZ) and the DFC in your workflow. The correct approach is to run the parse and optimization steps using the MZ first, then move on to the compilation using the DFC.

I’ve updated the notebook to run MZ properly, so it should work now!

Here’s the compilation command you’ll need:

hailomz compile yolov8n \
  --har /content/yolov8n.har \
  --hw-arch hailo8

Hi @omria,

I still face the same issue: Google Colab

Best regards,

Dharmendra