Unable to compile Quantized HAR file to HEF for YOLOv8n model

Hi

I trained a YOLOv8 nano model in the Ultralytics Hub and exported it as ONNX. I have followed the Hailo DFC User Guide using a Jupyter Notebook and have managed to generate an optimized/quantized .HAR file. Unfortunately, at the final step of compiling the HEF file, I'm coming unstuck. I've tried both the hailomz command in the Docker terminal and the script below from the DFC Guide (in the Notebook), but no luck.

runner.load_model_script(alls)

hef = runner.compile()
file_name = f"{model_name}.hef"
with open(file_name, "wb") as f:
    f.write(hef)

My only deviation from the tutorial throughout the process was amending the output_height and output_width from 224 to 640. Here are portions of the error message I’m getting:

[error] Mapping Failed (allocation time: 57s)

No successful assignment for: format_conversion1, concat17, feature_splitter9, shortcut_softmax1, reduce_max_softmax1, ew_sub_softmax1, reduce_sum_softmax1, ew_mult_softmax1, conv64

[error] Failed to produce compiled graph

BackendAllocatorException: Compilation failed: No successful assignment for: format_conversion1, concat17, feature_splitter9, shortcut_softmax1, reduce_max_softmax1, ew_sub_softmax1, reduce_sum_softmax1, ew_mult_softmax1, conv64

I’m doing this as a hobby project and am not a technical person; I’ve had to make extensive use of ChatGPT to get this far. I would appreciate any guidance. Please let me know if you require me to share more detail around the error.


Hi @AlanAtkinson,
You would also need to set different end-node names. Specifically, the NMS part of the network is applied to the model's outputs afterwards. You can take a look at the way the model zoo prepares a YOLOv8 network.

In general, you would get a smoother ride if you use Hailo’s retrain dockers as a starting point for models that are supported there.
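To make the end-node advice concrete: a YOLOv8 detection head ends in two convolution branches (cv2 for box regression, cv3 for class scores) at each of three detection scales, so six end nodes are expected. A small sketch that generates the names as they typically appear in an Ultralytics YOLOv8 export — the "model.22" prefix assumes the detection head is module 22, so verify against your own ONNX (e.g. in Netron):

```python
# Sketch: generate the six YOLOv8 end-node names. Assumes a standard
# Ultralytics export where the detection head is module 22 -- verify
# the actual node names in your own ONNX graph (e.g. with Netron).
end_node_names = [
    f"/model.22/{branch}.{scale}/{branch}.{scale}.2/Conv"
    for scale in range(3)          # three detection scales
    for branch in ("cv2", "cv3")   # box branch, class-score branch
]
print(end_node_names)
```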

Hi Nadav

Thanks for your response.

I have trained the YOLOv8n model on the 'african-wildlife' dataset provided by Ultralytics. Can you confirm whether it's actually possible to parse the ONNX to HAR and then compile to HEF using the Docker version of the Hailo software suite? Or is this not the case?

And if it is possible, how do I source or create a configuration YAML file to be used in the command below?

hailomz compile --ckpt yolov8s.onnx --calib-path /path/to/calibration/imgs/dir/ --yaml path/to/yolov8s.yaml --start-node-names name1 name2 --end-node-names name1 --classes 80

Definitely

You will need to identify the last conv layers (there should be 6) and put their names in the YAML. I suggest using the YAML instead of concatenating the node names as arguments to the command.

The african-wildlife dataset has only 4 classes, so you need to adapt this as well.
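For illustration, a model-zoo style YAML would list those six Conv layers under the parser section and the class count under evaluation. A rough sketch — the exact schema and field names depend on your Model Zoo version, so compare against the yolov8 YAMLs shipped with it rather than copying this verbatim:

```yaml
# Sketch only -- field names and nesting may differ between
# Hailo Model Zoo versions; compare with the shipped yolov8 YAMLs.
network:
  network_name: yolov8n
parser:
  nodes:
  - null                            # start node (network input)
  - - /model.22/cv2.0/cv2.0.2/Conv  # box branch, scale 0
    - /model.22/cv3.0/cv3.0.2/Conv  # class branch, scale 0
    - /model.22/cv2.1/cv2.1.2/Conv
    - /model.22/cv3.1/cv3.1.2/Conv
    - /model.22/cv2.2/cv2.2.2/Conv
    - /model.22/cv3.2/cv3.2.2/Conv
evaluation:
  classes: 4                        # african-wildlife has 4 classes
```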

Hi again Nadav

When I try to extract an .alls file from my HAR file, I don't get an error message, but the alls file never generates. Any idea what's happening?

$ [info] Current Time: 23:18:17, 08/21/24

[info] CPU: Architecture: x86_64, Model: Intel(R) Core™ i5-5250U CPU @ 1.60GHz, Number Of Cores: 4, Utilization: 1.5%

[info] Memory: Total: 3GB, Available: 2GB

[info] System info: OS: Linux, Kernel: 6.4.16-linuxkit

[info] Hailo DFC Version: 3.28.0

[info] HailoRT Version: 4.18.0

[info] PCIe: No Hailo PCIe device was found

[info] Running hailo har extract /local/workspace/hailo_model_zoo/ss_nano_11_hailo_model.har --auto-model-script-path /local/workspace/hailo_model_zoo/hailo_model_zoo/cfg/alls/africa.alls
$

I replaced the two end nodes that were presented to me in the error message at the parsing step, viz. /model.22/Sigmoid and /model.22/dfl/Reshape, with the six end nodes from the YAML file, and that seems to have done the job. The HEF file has been generated successfully. Now to deploy to the Pi unit for testing.

runner = ClientRunner(hw_arch=chosen_hw_arch)
hn, npz = runner.translate_onnx_model(
    onnx_path,
    onnx_model_name,
    start_node_names=["images"],
    end_node_names=[
        "/model.22/cv2.2/cv2.2.2/Conv",
        "/model.22/cv3.2/cv3.2.2/Conv",
        "/model.22/cv2.1/cv2.1.2/Conv",
        "/model.22/cv3.1/cv3.1.2/Conv",
        "/model.22/cv2.0/cv2.0.2/Conv",
        "/model.22/cv3.0/cv3.0.2/Conv",
    ],
    net_input_shapes={"images": [1, 3, 640, 640]},
)

If that HAR is not the compiled model HAR, the alls file has not been created yet, so there is nothing to extract.

Hi Nadav

I have deployed my YOLOv8n animal detection HEF file to my Pi unit and I have successfully run the Detection example from this link:

How do I now run my own HEF model? What documentation should I follow?

I think the next step is to use the detection example from the basic pipelines that you've shared.