Optimization conv41 layer error

Hello Hailo community.
I want to deploy a custom YOLOv8 model to Hailo. I trained the model using Ultralytics and then converted it to ONNX format. To convert the model to HEF format, I followed these steps.
First, I parsed the model using:
hailomz parse yolov8n --ckpt /home/usr/hailo-models/hailo-n178.onnx --hw-arch hailo8l --start-node-names 'images' --end-node-names '/model.22/cv2.0/cv2.0.2/Conv' '/model.22/cv3.0/cv3.0.2/Conv' '/model.22/cv2.1/cv2.1.2/Conv' '/model.22/cv3.1/cv3.1.2/Conv' '/model.22/cv2.2/cv2.2.2/Conv' '/model.22/cv3.2/cv3.2.2/Conv'

and this successfully saved a HAR file. After that, I proceeded with the optimization step. I used:
hailomz optimize yolov8n --har yolov8n.har --calib-path /home/usr/hailodfc/data.tfrecord --classes 1

But I’m getting the following error:
hailo_sdk_client.tools.core_postprocess.nms_postprocess.NMSConfigPostprocessException: The layer yolov8n/conv41 doesn't have one output layer

I have looked at every related forum question, but I couldn’t find a solution. Can you please help me?

Hey @yurdakul21,

Welcome to the Hailo Community!

The error you’re encountering:

hailo_sdk_client.tools.core_postprocess.nms_postprocess.NMSConfigPostprocessException: The layer yolov8n/conv41 doesn't have one output layer

suggests that during the parsing or optimization process, the Hailo SDK is expecting the conv41 layer to have a single output, but it seems to be producing multiple outputs. Here’s how you can fix this issue:

1. Check the ONNX Model Output Layers

First, you need to confirm that the output layers of your YOLOv8 model are correct and match Hailo’s expectations.

  • Visualize the ONNX Model: Use Netron (https://netron.app/) to load and inspect the hailo-n178.onnx model. Focus on the layers around conv41 to see how many outputs it produces.

  • The YOLOv8 model might have multiple output heads (for bounding boxes, objectness scores, and class probabilities). You need to identify the correct output layers that produce the final bounding box predictions and class probabilities; the short script below can also list them programmatically.
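
If you prefer to check this from a script instead of Netron, here is a minimal sketch using the onnx Python package (not a Hailo tool). The file path is taken from your parse command, and the '/model.22/' filter simply matches the detection-head convolutions you listed as end nodes:

import onnx

model = onnx.load("/home/usr/hailo-models/hailo-n178.onnx")

# Outputs the exporter declared at the graph level
print("Graph outputs:", [o.name for o in model.graph.output])

# Conv nodes in the detection head (candidate --end-node-names)
for node in model.graph.node:
    if node.op_type == "Conv" and "/model.22/" in node.name:
        print(node.name, "->", list(node.output))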

2. Modify --end-node-names in hailomz parse

When running the hailomz parse command, you are currently specifying multiple end-node-names. These need to correspond to the final layers of your model that produce the predictions, such as bounding boxes and classification scores. If conv41 is not one of these, you may need to adjust the layers you’re specifying.

Here’s the original command:

hailomz parse yolov8n --ckpt /home/usr/hailo-models/hailo-n178.onnx --hw-arch hailo8l --start-node-names 'images' --end-node-names '/model.22/cv2.0/cv2.0.2/Conv' '/model.22/cv3.0/cv3.0.2/Conv' '/model.22/cv2.1/cv2.1.2/Conv' '/model.22/cv3.1/cv3.1.2/Conv' '/model.22/cv2.2/cv2.2.2/Conv' '/model.22/cv3.2/cv3.2.2/Conv'

You may need to simplify and ensure that only the final output layers for bounding box regression and classification scores are specified. For example:

hailomz parse yolov8n --ckpt /home/usr/hailo-models/hailo-n178.onnx --hw-arch hailo8l --start-node-names 'images' --end-node-names 'output_boxes_layer' 'output_scores_layer'

Make sure to replace 'output_boxes_layer' and 'output_scores_layer' with the actual layer names from your ONNX model that correspond to the bounding box and class outputs.

3. Verify the Layers You Chose

To ensure the correct layers are used:

  • Bounding Box Layer: Should output bounding box coordinates (e.g., [x, y, w, h]).
  • Class Scores Layer: Should output objectness scores or class probabilities.

If you’re not sure which layers these are, refer to the ONNX model visualization in Netron.

4. Simplify the Outputs if Necessary

If you’re dealing with too many output layers in the parsing step, try focusing on just the necessary outputs (bounding boxes and class scores). Specifying too many outputs can confuse the Hailo SDK and cause errors during the NMS step.

Regards,
Omri

When I inspect the model, I cannot see a layer named conv41.

I have reviewed many solutions on this forum. Some suggest the problem is caused by having only one class. Could that be the problem?

Quick update on this problem. I checked the default pretrained yolov8n model and compared it with my model, and I noticed that my model’s image size is 800. The error was caused by this small mistake. Thank you for your suggestions @omria. Now I’m getting:
NegativeSlopeExponentNonFixable: Quantization failed in layer yolov8n/conv53 due to unsupported required slope.
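
(For anyone hitting the same image-size mismatch: a minimal sketch, assuming the Ultralytics Python API and a hypothetical checkpoint path, of re-exporting at the 640x640 input size that the Model Zoo’s yolov8n configuration expects.)

from ultralytics import YOLO

model = YOLO("best.pt")                 # hypothetical path to your trained checkpoint
model.export(format="onnx", imgsz=640)  # re-export at the expected 640x640 input size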

Hey @yurdakul21,

Thanks for the update! It’s great to hear you identified the issue with the image size. Now, regarding the new error:

Error: NegativeSlopeExponentNonFixable: Quantization failed in layer yolov8n/conv53 due to unsupported required slope

This issue is often related to the use of certain activation functions like Leaky ReLU, which has a slope parameter that the Hailo hardware or SDK doesn’t support during quantization. Here’s how you can address this:

Solutions:

  1. Check the Activation Function in conv53:

    • If Leaky ReLU is being used in the conv53 layer, consider switching to ReLU, which doesn’t have the slope parameter and is fully supported by Hailo’s quantization process.
  2. Modify the Model:

    • If possible, retrain or fine-tune the model and replace Leaky ReLU with ReLU in the layer configuration. In Ultralytics YOLOv8, this might require adjusting the model architecture during training; see the sketch after this list.
  3. Custom Quantization Settings:

    • If changing the activation function is not an option, you may need to experiment with the quantization settings in the Hailo SDK to handle this specific case, although it’s recommended to simplify the activation function for better compatibility.
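
If you want to experiment with option 2 without editing the architecture definition, here is a minimal, hedged sketch in PyTorch: it loads a trained checkpoint (the path is hypothetical), swaps any LeakyReLU modules for ReLU, and re-exports to ONNX. You would normally fine-tune after such a swap, since the weights were trained with the original activation.

import torch.nn as nn
from ultralytics import YOLO

yolo = YOLO("best.pt")   # hypothetical path to your trained checkpoint
net = yolo.model         # the underlying torch.nn.Module

# Replace every LeakyReLU child module with a plain ReLU
for module in net.modules():
    for child_name, child in module.named_children():
        if isinstance(child, nn.LeakyReLU):
            setattr(module, child_name, nn.ReLU(inplace=True))

yolo.export(format="onnx", imgsz=640)  # re-export the modified model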

Conclusion:

The quantization failure is due to an unsupported slope parameter in the activation function. Switching to ReLU in the conv53 layer should resolve the issue. If you retrain or modify the model to use ReLU, you should be able to complete the quantization process successfully.

Let me know if you need further assistance or if you’d like more details on modifying the model!

Best regards,
Omri

Thank you for your support @omria
I have followed this topic and it solved my problem.

I previously encountered the same issue with the Conv 41 layer. The error can be resolved by updating both ONNX and ONNX Runtime to higher versions and setting the IR version to 9.
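
A minimal sketch of the IR-version part of that fix, assuming the model is already exported to ONNX (file names here are placeholders): the ir_version field of the ModelProto can be set directly and the model saved back out.

import onnx

model = onnx.load("yolov8n.onnx")     # placeholder: your exported model
model.ir_version = 9                  # pin the IR version the parser accepts
onnx.save(model, "yolov8n_ir9.onnx")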

Hello, this error is caused by the IR version that you’re exporting. I think you used Ultralytics to export your YOLO model to ONNX format. This method exports ir_version=10. You have to use the onnx library to downgrade your IR version.

from onnx import load, version_converter

model = load("yolov8n.onnx")  # placeholder: your exported ONNX file
# Convert the model to IR version 8
converted_model = version_converter.convert_version(model, 8)

IR version 8 worked for me.