Running hailomz optimize reports layer yolov8n/conv41 doesn't have one output layer

Hi all, I'm a newbie to AI and the Hailo development environment, so I have been following the steps from: Tutorial of AI Kit with Raspberry Pi 5 about YOLOv8n object detection | Seeed Studio Wiki
referencing: https://www.youtube.com/watch?v=CEfUmMBuQw8 with step-by-step at GitHub - BetaUtopia/Hailo8l: How to Setup Raspberry Pi 5 with Hailo8l AI Kit using yolov8n on Windows (WSL2 Ubuntu)
The problem I have is at the step to optimize (just before compiling):
hailomz optimize --hw-arch hailo8l --har ./yolov8n.har
--calib-path /home/mytest/.hailomz/data/models_files/coco/2023-08-03/coco_calib2017.tfrecord
--model-script /home/mytest/Hailo8l/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/yolov8n.alls yolov8n
I get the following error:
line 1040, in add_postprocess_layer_to_hn
raise NMSConfigPostprocessException(f"The layer {encoded_layer.name} doesn't have one output layer")
hailo_sdk_client.tools.core_postprocess.nms_postprocess.NMSConfigPostprocessException: The layer yolov8n/conv41 doesn't have one output layer

Note that, as a deviation from the steps provided for this tutorial, I used the following (to ensure the latest versions):
pip install hailo_dataflow_compiler-3.29.0-py3-none-linux_x86_64.whl
pip install hailo_model_zoo-2.13.0-py3-none-any.whl
Both seemed to install correctly.

Any guidance would be appreciated.


A short update to this. I got everything working and the compiled HEF file runs on my RPi 5. I discovered that adding a step after hailomz parse --hw-arch hailo8l --ckpt ./best.onnx yolov8n, namely running hailomz optimize yolov8n, and then running hailomz optimize --hw-arch hailo8l --har ./yolov8n.har… resolved the "conv41 not having one output" problem. However, I don't know why this resolved the issue. If anyone can comment I would like to know.


Hello,
I am having a similar issue using a custom trained yolov8s model trained on an image set from Roboflow. I too get the error "conv41 doesn't have one output layer". When I try running the hailomz optimize line I get a FileNotFoundError: Couldn't find dataset in /home/user/.hailomz/data/models_files/coco/2021-06-18/coco_calib2017.tfrecord. Please refer to docs/DATA.rst.

Hello,
I have followed the same tutorials you mentioned. I've been trying to convert my onnx model to hef for days, but I always get the conv41 layer error. I also applied your suggestions but they didn't solve it. Can you explain in detail how to convert the model?
Thank you so much for your help.


I think there may be a bug in the code, around line 1040:

File: /home/user/hailodfc/lib/python3.10/site-packages/hailo_sdk_client/tools/core_postprocess/nms_postprocess.py

Line: 1040, within the method add_postprocess_layer_to_hn().

I haven't had time to debug this, but perhaps it's a starting point?

I don't have much to add as I am still learning this Hailo environment. I am hoping someone from the company will read this and jump in with a suggestion. Basically, as I noted in my follow-up post, I got it working by adding hailomz optimize yolov8n between the two steps provided in the tutorial I linked to. No idea at all why that worked, but it did. Can anyone on the Hailo team jump in here to comment?

Before the problem I posted about, I too had the "file not found" issue: hailomz could not find the dataset, so I had to provide it as additional parameters as noted in my initial post. Once I did that I then got the "conv41 does not have one output" error. As I noted in my follow-up post, I got it working by adding the step hailomz optimize yolov8n. I am still very much on a steep learning curve with this Hailo environment, so I can't explain why this worked for me, but it did.

Hi @gbelair,
I am facing the same issue as you. I tried executing your suggestion; however, I believe what it is doing is merely overwriting your original .har file (created by parsing your custom trained model) with one based on the default yolov8n model.
For example,
When you run hailomz parse --hw-arch hailo8l --ckpt ./customv8model.onnx yolov8n, it creates a yolov8n.har file.
Supposedly, you are to optimize this yolov8n file which contains your custom model, but instead, by doing hailomz optimize yolov8n, you create another yolov8n.har file based on the default model which overrides the custom one, and then quantize this new default model, which is why it runs smoothly.

Maybe one way to double-check is to run the output .hef file and see whether it behaves the way your custom model was trained to perform.

Do update me if possible! Thank you

Hi,
The issue stems from the fact that your custom yolov8n model must have different end nodes (right before the NMS) than the original yolov8n model in the Hailo model zoo. You may have passed one of these to the parser whereas there should be 6 of them.
These end nodes are supplied to the parser so it can cut the model right before the NMS processing. Later on, the optimizer will "attach" the NMS post-processing to the parsed model. The NMS post-processing doesn't necessarily run on the Hailo-8 and might run on the host processor, depending on the model. For yolov8, it runs on the host processor.
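For context, the exception in the original traceback comes from a check that each end node's single successor is an output layer. A simplified, hypothetical sketch of that condition (the class and function names here are illustrative, not the actual SDK code):

```python
# Hypothetical sketch of the check behind the "doesn't have one output
# layer" exception -- NOT the real Hailo SDK code, just the idea.

class Layer:
    def __init__(self, name, op):
        self.name = name
        self.op = op          # e.g. "conv" or "output_layer"
        self.successors = []  # layers fed by this one

class NMSConfigPostprocessException(Exception):
    pass

def check_end_node(layer):
    # NMS post-processing can only be attached to a layer whose single
    # successor is an output layer; a conv feeding another conv fails.
    if len(layer.successors) != 1 or layer.successors[0].op != "output_layer":
        raise NMSConfigPostprocessException(
            f"The layer {layer.name} doesn't have one output layer")

# Correctly cut graph: conv41 -> output layer
good = Layer("yolov8n/conv41", "conv")
good.successors = [Layer("yolov8n/output_layer1", "output_layer")]
check_end_node(good)  # passes silently

# Mis-cut graph (wrong end nodes at parse time): conv41 -> another conv
bad = Layer("yolov8n/conv41", "conv")
bad.successors = [Layer("yolov8n/conv42", "conv")]
try:
    check_end_node(bad)
except NMSConfigPostprocessException as e:
    print(e)  # The layer yolov8n/conv41 doesn't have one output layer
```

This illustrates why passing the wrong end nodes at parse time only surfaces later, during optimize.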

How do you find the names of the end nodes of any model?

You can use the CLI command

hailo parser onnx -y

TIP: if you need help with what arguments the command accepts, try to run both “hailo parser --help” and “hailo parser onnx --help”

For example, if I apply this command to the yolov8n.onnx downloaded from the model zoo: https://hailo-model-zoo.s3.eu-west-2.amazonaws.com/ModelZoo/Compiled/v2.13.0/hailo8/yolov8n.hef
You’ll find these lines amongst the output text displayed:

[info] NMS structure of yolov8 (or equivalent architecture) was detected.
[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv.
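If you're scripting this flow, the node names can be pulled out of that log line with a few lines of stdlib Python. This is a hypothetical helper (not part of any Hailo tool), shown here with a shortened two-node log line:

```python
import re

# Shortened example of the parser's "[info]" line (real output lists 6 nodes).
log = ("[info] In order to use HailoRT post-processing capabilities, "
       "these end node names should be used: "
       "/model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv.")

def extract_end_nodes(text):
    # Grab everything after "should be used:" and drop the trailing period.
    m = re.search(r"should be used: (.+?)\.?$", text)
    return m.group(1).split() if m else []

nodes = extract_end_nodes(log)
# Space-separated list, ready to paste after --end-node-names
print(" ".join(nodes))
```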

Now that you have the names of the end nodes, how do you pass them to the "hailomz parse" command?

There are two options:

  1. Directly as arguments to the “hailomz parse” command. If you run “hailomz parse --help”, you’ll see that it accepts --start-node-names and --end-node-names as arguments.
    So, if I apply the end node names found earlier to this command, it would look like this:

hailomz parse yolov8n --end-node-names /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv

You may be wondering: if you don't pass any end-node-names argument to "hailomz parse", what node names would it use? In this case, it uses the end node names written in the default yaml file corresponding to yolov8n. This leads to the second option for passing the end node names to the parser.

  2. When you invoke "hailomz parse yolov8n", it will use meta-information stored in the yolov8n.yaml file: hailo_model_zoo/hailo_model_zoo/cfg/networks/yolov8n.yaml at master · hailo-ai/hailo_model_zoo · GitHub .
    You will notice both the start and the end nodes listed there:

parser:
  nodes:
  - null
  - - /model.22/cv2.0/cv2.0.2/Conv
    - /model.22/cv3.0/cv3.0.2/Conv
    - /model.22/cv2.1/cv2.1.2/Conv
    - /model.22/cv3.1/cv3.1.2/Conv
    - /model.22/cv2.2/cv2.2.2/Conv
    - /model.22/cv3.2/cv3.2.2/Conv

Therefore, one way to specify new end nodes is to make a copy of this yaml file, modify only the end nodes field, and pass the new yaml file as an argument to "hailomz parse yolov8n" as follows:

hailomz parse yolov8n --yaml <PATH_TO_NEW_YAML_FILE>
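To illustrate the shape of that edited section, here is a stdlib-only sketch that emits the parser.nodes block with new end nodes, mirroring the layout of the stock yolov8n.yaml. The file name and node values are assumptions; the real yaml has more fields (network name, paths, etc.), so in practice you would copy the stock file and edit only this section:

```python
from pathlib import Path

# Hypothetical end nodes as reported by "hailo parser onnx -y" for your model
# (shortened to two entries here; a real yolov8 model has six).
end_nodes = [
    "/model.22/cv2.0/cv2.0.2/Conv",
    "/model.22/cv3.0/cv3.0.2/Conv",
]

# Build the parser.nodes section: first entry is the start node (null),
# second is the nested list of end nodes.
lines = ["parser:", "  nodes:", "  - null"]
for i, node in enumerate(end_nodes):
    prefix = "  - - " if i == 0 else "    - "
    lines.append(prefix + node)

# Hypothetical file name -- pass this via "hailomz parse yolov8n --yaml ..."
Path("yolov8n_custom.yaml").write_text("\n".join(lines) + "\n")
print(Path("yolov8n_custom.yaml").read_text())
```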

Hopefully this resolves your issue. You should delete all the yolov8n HAR files that were created and run the correct sequence.
Note that running "hailomz optimize yolov8n" didn't cause any errors because it used the default yolov8n model, not your custom trained model.

One last note: for convenience, Hailo also provides a yolov8 retraining docker: hailo_model_zoo/training/yolov8 at master · hailo-ai/hailo_model_zoo · GitHub
You don't have to use it, but the README.rst can provide useful information as well.

Updates:

  • Removing the ‘opset’ arg from the yolo export command matches the onnx versions between the two models, but doesn’t correct the error.
  • Also tried other flags in yolo export (dynamic / simplify / batch); those either broke the hailo parser stage or made no change to the output layer error.
  • Exporting the base yolov8n model from the yolo tool chain (“yolo export model=yolov8n imgsz=640 format=onnx”) into hailomz parse / optimize works. Exactly the same export arguments with a custom trained model does not.

Hi all,

I encountered the same error. I’m running the Hailo suite inside a Docker container and used opset=9 to export the ONNX model. I then re-ran the script to generate the TFRecord files for both the validation and calibration datasets with:

python hailo_model_zoo/hailo_model_zoo/datasets/create_coco_tfrecord.py val2017

and

python hailo_model_zoo/hailo_model_zoo/datasets/create_coco_tfrecord.py calib2017

This placed the COCO TFRecord files in PATH-TO/.hailomz/coco/val2017/. After that, I was able to optimize and compile the model with:

hailomz compile yolov8n --ckpt=best.onnx --hw-arch hailo8l --calib-path PATH-TO/.hailomz/coco/val2017/ --classes 3

(Adjust PATH-TO, your .hailomz, number of classes, and other arguments as needed).
I hope it helps.

I’m another newbie trying to get this workflow running and having the same issues as previous posters. I debugged both the working flow (default model) and the broken flow with the customized model.

Line 1039 in nms_postprocess.py:
In the working flow the successor.op test passes (= <LayerType.output_layer: ‘output_layer’>).

In the failing flow the successor.op test fails (= <LayerType.conv: ‘conv’>).

In netron.app, looking at the onnx files for each model, I see the ‘imports’ value is different (ai.onnx v19 vs. ai.onnx v11) and the model itself has a different flow.

Therefore the onnx files do not have the same structure. I don’t think the optimize phase is where the root issue lies; it’s likely some subtle difference in onnx file formats that’s mishandled by the parser. I’m no expert in this but will keep poking around to see if I can find the cause.
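As a quick sanity check on the version mismatch theory, you can peek at an ONNX file's ir_version without installing the onnx package. ModelProto stores ir_version as protobuf field 1 (a varint), and most writers emit fields in order, so the file usually starts with the tag byte 0x08 followed by the version varint. This is a heuristic based on typical serializer behavior, not a spec guarantee; for real work use the onnx package:

```python
# Heuristic sketch: read ir_version from the first bytes of an ONNX file.
# Assumes field 1 (ir_version) is serialized first, which is typical but
# not guaranteed by protobuf.
def peek_ir_version(data: bytes):
    if not data or data[0] != 0x08:  # 0x08 = tag for field 1, varint
        return None  # field 1 not first; fall back to the onnx package
    value, shift = 0, 0
    for b in data[1:]:
        value |= (b & 0x7F) << shift
        if not (b & 0x80):  # high bit clear = last varint byte
            return value
        shift += 7
    return None

# Example: a model serialized with ir_version 8 typically starts b"\x08\x08"
print(peek_ir_version(b"\x08\x08\x12\x07example"))  # 8
```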

I’m glad to see it’s not just me having issues converting YOLO models from ONNX to .hef. I’d be grateful if someone could share step-by-step instructions if you’ve figured out a workaround. Thanks!

Couldn’t the error be caused by training with the newest version of Ultralytics?

Hi,
This might help someone.
Problem: when I train a model on Colab and export from there, or from my Windows PC, I cannot use the resulting onnx file (the error is a mismatch in opset and IR version).

Temp solution: I always convert YOLO models to onnx on the same PC (Ubuntu) where I compile.

I use single command for parsing, optimize, compile like this…

hailomz compile --ckpt mymodel.onnx --yaml /path/to/yaml --calib-path /path/to/images/ --hw-arch hailo8l

Thank you victorc for the detailed workflow description. From victorc’s post, I believe the issue is that passing a model name (e.g. yolov8n) to hailomz actually tells hailomz to use the default yaml files in the model zoo. As per Netron inspection, customized YOLO models do not have the same graph as the default model, so the tool tries to parse the customized model using end nodes defined in yolov8n.yaml, which is not correct.

So to use a custom model (again following victorc’s post), you need to run the parse to expose the end nodes, then copy those into a customized yaml file that you use with both the parse and optimize commands. It should look something like this:

hailomz parse --yaml myyam.yaml --ckpt mymodel.onnx --hw-arch hailo8l

Note ‘yolov8n’ is not passed. Use the same yaml file (with end nodes specified) for both the parse and optimize commands.

Unfortunately this workflow still doesn’t work, and generates a new error that I have not yet sorted out:

hailo_sdk_common.hailo_nn.exceptions.HailoNNException: The layer named yolov8n/conv41 doesn't exist in the HN

Will keep poking at it.

@saurabh I had the same issue and found it can be solved in two ways: downgrade the onnx version in your yolo environment to 1.13.1 (which changes the onnx model version from v10 to v5 as per Netron inspection); or upgrade the onnx version in your hailo environment to the latest. Pip will complain that the version is incompatible with hailo, but it still seems to run fine.


@steve1, I believe you still need to add “yolov8n” to the command line. Since you are passing your own yaml file as an argument, it will look for that one, not the default one.

Thanks for this. I gave it a try.
As a starting point I re-exported the onnx file (ultralytics env):
yolo export model=./best.pt imgsz=640 format=onnx opset=11

which created:
85597 -rw-r--r-- 1 mytest mytest 11M Oct 23 13:03 best.onnx

Then I switched to hailo env and ran:
hailo parser onnx -y best.onnx

which created:
1671 -rw-r--r-- 1 mytest mytest 11M Oct 23 13:25 best.har
with:
In order to use HailoRT post-processing capabilities, these end node names should be used:
/model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.0/cv2.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv3.2/cv3.2.2/Conv /model.22/cv2.2/cv2.2.2/Conv

So then I ran:
hailomz parse --hw-arch hailo8l --ckpt best.onnx yolov8n --end-node-names /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.0/cv2.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv3.2/cv3.2.2/Conv /model.22/cv2.2/cv2.2.2/Conv

which created:
135655 -rw-r--r-- 1 mytest mytest 11M Oct 23 13:46 yolov8n.har

Finally I ran:
hailomz optimize --hw-arch hailo8l --har ./yolov8n.har --calib-path /home/mytest/.hailomz/data/models_files/coco/2023-08-03/coco_calib2017.tfrecord --model-script /home/mytest/Hailo8l/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/yolov8n.alls yolov8n

But still got the same error:
NMSConfigPostprocessException: The layer yolov8n/conv41 doesn't have one output layer

Have I understood your steps correctly?
cheers

Hi,
This might help someone.
Replace model=yolov8s.pt with model=yolov8s.yaml

example
yolo detect train data=coco128.yaml model=yolov8s.yaml epochs=100 imgsz=640 batch=16 device=0 name=retrain_yolov8s_new

Update on this: it seems there is another thread exploring this same problem: Optimization conv41 layer error - General - Hailo Community. However, following the recommendations there has not resolved the problem for me. Just letting others know so they can review it as well.