Convert Ultralytics yolov8n.pt to yolov8n.hef fails

Greetings!

I’m new to the Hailo-8L processor for the Pi 5. The issue I’m having is converting a yolov8n.pt [Ultralytics] YOLO model to yolov8n.hef format. I have my own custom Yolo11n model I eventually want to convert - I tried and failed miserably - so I figured I’d simply take the standard Ultralytics Yolov8n model and try converting it to .hef as a test [should be easy, right?].

I’m running an x86_64 Docker environment with Ubuntu 22.04 and no GPU support. I’ve tested this on an Intel Mac and an M4 Mac [using Rosetta] and obtained the exact same results.

Using ultralytics v8.3.228, I converted the Ultralytics yolov8n.pt to yolov8n.onnx as follows…

**yolo export model=yolov8n.pt imgsz=640 opset=11 format=onnx**

Note: I have tried several opset settings, including omitting it altogether - no difference.
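To rule out a bad export before blaming the compiler, I run a quick sanity check on the ONNX file. This is just a sketch, not part of the official flow - it assumes the `onnx` Python package is installed alongside ultralytics:

```shell
# Hypothetical sanity check on the exported model (assumes the `onnx`
# Python package is installed); a graph that fails here will never compile.
python3 - <<'EOF'
import onnx
m = onnx.load("yolov8n.onnx")
onnx.checker.check_model(m)          # raises if the graph is malformed
for i in m.graph.input:              # expect a single 1x3x640x640 input
    dims = [d.dim_value for d in i.type.tensor_type.shape.dim]
    print(i.name, dims)
EOF
```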

Now, on the Docker image…

Installed: Python 3.10

		**hailo_model_zoo-2.17.1-py3-none-any.whl**

		**hailo_dataflow_compiler-3.33.0-py3-none-linux_x86_64.whl**

		NO ERRORS DURING INSTALLATION

hailomz --version

**Hailo Model Zoo v2.17.1**

I place an “images” folder in my home [working] directory; this folder contains 800 .jpg training images at 1280x720. For calibration, I’m assuming it won’t matter that it’s not the COCO set, since eventually I want to use this with my own model conversion.

The resulting yolov8n.onnx is placed in my home folder, let’s compile…

**hailomz compile yolov8n --hw-arch hailo8l --ckpt ./yolov8n.onnx --calib-path ./images --classes 80**

It fails with this:

[info] No GPU chosen and no suitable GPU found, falling back to CPU.

[info] Saved HAR to: /root/home/yolov8n.har

[info] Loading model script commands to yolov8n from /usr/local/lib/python3.10/dist-packages/hailo_model_zoo/cfg/alls/generic/yolov8n.alls

[info] To achieve optimal performance, set the compiler_optimization_level to “max” by adding performance_param(compiler_optimization_level=max) to the model script. Note that this may increase compilation time.

##################

[error] Failed to produce compiled graph

[error] TypeError: expected str, bytes or os.PathLike object, not NoneType

##################

I have tried at least a dozen variations and they all end with this error message. Any idea what I’m doing wrong in converting a standard off-the-shelf Ultralytics model? I figure once this works I may have better luck converting my own model.
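For what it’s worth, that final TypeError is just Python’s generic complaint when a function expecting a file path receives None - i.e. somewhere inside the compiler, a path (presumably a config or intermediate file it couldn’t resolve) was never set. You can reproduce the bare message like this:

```shell
# Reproduce the bare error message: passing None where a path is expected.
python3 -c "import os; os.fspath(None)"
# → TypeError: expected str, bytes or os.PathLike object, not NoneType
```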

Any words of wisdom would be appreciated.

Welcome to the Hailo Community!

I recommend you first walk through the tutorials built into the Hailo AI Software Suite. You can start the Jupyter Notebook server, with notebooks for each step of the conversion, using the following command.

hailo tutorial

We do not validate this hardware for our software. You need to run the Hailo AI Software Suite on an x86 machine running Ubuntu 22.04 or 24.04.

I appreciate the response.

Let me shed some light on my approaches thus far…

  1. I tried the examples with no luck - including the suggested Docker configuration.
  2. As I mentioned, I used an x86_64 Mac (Intel) as well as an M4, and both gave me the same results using the Docker configuration found on the example website. Both are running the x86_64 Docker container with Ubuntu 22.04 (the Intel natively, the ARM via Rosetta 2).

The only difference I could see is that I’m using CPU only, not a GPU - the code seems to recognize this and continues, which gives me the impression the GPU is not required. I understand the computational time will be greatly extended on the CPU, but that isn’t a big deal - the models are not that large.

The error mentioned has been seen by many on here; I’m hoping one of the folks who hit this error could describe the solution [if any].

As a side note: I have converted models for the Intel Movidius and Coral TPUs using these systems with no issue.

Hi. I succeeded on an x86 CPU Ubuntu computer (old and slow) after following Computer Vision Engineer’s YouTube video, https://youtu.be/pYkSG6BmyjU?si=P1HRJ--bj2jRrnlB

Thanks Jorgen. I was also successful using my Mac M4 machine with OrbStack and with UTM [UTM takes 3x longer than OrbStack but does work].

I prefer OrbStack which is much faster however you MUST turn off Rosetta 2 emulation or it will fail with AVX errors [Settings → System → Compatibility → Use Rosetta to run Intel code = OFF].

Using OrbStack running Ubuntu 22.04 x86_64, I converted the 640x640 Ultralytics Yolov8n model in 03:08:26, a Yolov11n in 03:12:39, and a 320x320 Yolov11n in 00:45:31. For me, since I don’t convert models often - only for deployment - these times are reasonable. YMMV.

Just an FYI for the Mac Mx folks out there - it is possible!

Quick follow-up on my M4 converting ONNX models to HEF via OrbStack: the timings above are outdated - I may have had a setting incorrect. I’ve been converting custom models based on Yolov11 Nano and Small. I’m running on a 64 GB RAM M4 Mac Mini, allocating 32 GB to the OrbStack Ubuntu Plucky AMD64 OS.

yolov11n 640x640 = 1 hour
yolov11s 640x640 = < 2 hours
yolov11n 320x320 = 45 mins


You can’t go from ONNX to HEF directly. You have to parse and quantize the ONNX file, which yields a HAR file. Then you can compile the HAR file to produce the desired HEF file.
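In case it helps others isolate where theirs fails, the Model Zoo exposes these stages as separate subcommands, so you can run them one at a time instead of a single `compile`. This is a sketch from memory - verify the exact flag names against `hailomz <cmd> --help` on your version:

```shell
# Run the three stages separately to see which one produces the error.
# Flag names are assumptions from hailomz 2.x; check --help before use.
hailomz parse yolov8n --hw-arch hailo8l --ckpt ./yolov8n.onnx

hailomz optimize yolov8n --hw-arch hailo8l --har ./yolov8n.har \
    --calib-path ./images --classes 80

hailomz compile yolov8n --hw-arch hailo8l --har ./yolov8n.har
```

Each stage reads the HAR produced by the previous one, so a failure pinpoints whether parsing, quantization, or compilation is the culprit.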

I got help from Microsoft Copilot, which helped me crack the required sequence.