Adding NMS Layer to custom model

Hello. I have my own ONNX model and tried to quantize/compile it to a .hef file. However, when I run the basic example with this model

python basic_pipelines/detection.py --input resources/detection0.mp4 --hef test/fish.hef

I get this error:

NMS score threshold is set, but there is no NMS output in this model.
CHECK_SUCCESS failed with status=6

What is the problem here?

I looked a lot in the documentation and tried the steps in the tutorial “Adding Non-Maximum Suppression (NMS) Layer Through Model Script Commands”, hoping this would solve the above error. However, my runner also throws errors:

AllocatorScriptParserException: Cannot infer bbox conv layers automatically. Please specify the bbox layer in the json configuration file

Thanks in advance for any help.

Hi @kamper,
It’s difficult to say without seeing your model, but in principle the detection.py example works only for models that are compiled with Hailo-NMS. There are a number of supported detection models that have this feature enabled, and they can be found here:

If your ONNX has the same architecture and structure as one of the models in the link I provided, then during the compilation process to a HEF file you’ll have the option to add the Hailo-NMS and have your model work successfully with the detection.py example. If not, then the example won’t work out of the box for your model.
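
One quick way to check whether a given HEF was compiled with the Hailo-NMS is to list its output streams, for example with the HailoRT Python API (a small sketch; the path is a placeholder and the exact method names may differ slightly between HailoRT versions):

from hailo_platform import HEF

# Load the compiled model and print its output streams.
# A model compiled with Hailo-NMS exposes an NMS-type output
# (its name typically contains "nms").
hef = HEF("test/fish.hef")
for info in hef.get_output_vstream_infos():
    print(info.name)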

Can you specify exactly which git repo you have taken this example from?

Regards,

Actually, I used YOLOv8n and PyTorch to retrain a default yolov8n.pt model. I then exported the trained model to ONNX format, and now I want to convert it so I can run it fast on the Raspberry Pi AI Kit.

from ultralytics import YOLO

model = YOLO("yolov8n.pt")   # start from the pretrained checkpoint
model.train(...)             # fine-tune with my annotated data
model.export(format="onnx")  # export the trained model to ONNX

With this ONNX model I tried to convert to a .hef. Is this not a valid workflow? I know you can use models from the model zoo as a basis and fine-tune them somehow, but since I already have a working model on the PC (and on the Pi, but 130 ms per frame is slow), I thought it would be easiest to convert this one.
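
For reference, the usual ONNX-to-HEF flow with the Hailo Dataflow Compiler Python API looks roughly like this (a sketch only: the end-node names are the typical ones from an Ultralytics YOLOv8 export and must be verified against your own ONNX, the calibration data is a placeholder, and argument details may differ between DFC versions):

import numpy as np
from hailo_sdk_client import ClientRunner

# Parse the ONNX into Hailo's internal representation.
# The end nodes are the last conv layers before YOLOv8's own postprocessing.
runner = ClientRunner(hw_arch="hailo8l")  # Hailo-8L on the Raspberry Pi AI Kit
runner.translate_onnx_model(
    "fish.onnx",
    "fish",
    end_node_names=[
        "/model.22/cv2.0/cv2.0.2/Conv", "/model.22/cv3.0/cv3.0.2/Conv",
        "/model.22/cv2.1/cv2.1.2/Conv", "/model.22/cv3.1/cv3.1.2/Conv",
        "/model.22/cv2.2/cv2.2.2/Conv", "/model.22/cv3.2/cv3.2.2/Conv",
    ],
)

# Attach the Hailo-NMS through a model script command. The JSON file here
# is the configuration file the AllocatorScriptParserException asks for.
runner.load_model_script('nms_postprocess("fish_nms_config.json", meta_arch=yolov8, engine=cpu)\n')

# Quantize with a calibration set (real images should be used here;
# zeros only keep the sketch self-contained).
calib_data = np.zeros((64, 640, 640, 3), dtype=np.float32)
runner.optimize(calib_data)

# Compile and save the HEF.
hef = runner.compile()
with open("fish.hef", "wb") as f:
    f.write(hef)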

This flow is also okay.
The easiest way is to use the model-zoo retraining dockers.
I believe what happens here is that some layer names are changed, and because of this the default NMS configuration is not able to find the right layers. So you should identify the end nodes of the model and place them in the JSON mentioned by @Omer :slight_smile:
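
To make that concrete, here is a minimal sketch of such a JSON config for a single-class YOLOv8 model, written out from Python. The field names follow the YOLOv8 NMS configs shipped with the model zoo as far as I recall them, and the reg_layer / cls_layer values are placeholders: they must be the Hailo layer names from your parsed model (you can read them out of the .har/.hn the parser produces), not the ONNX node names.

import json

# Illustrative Hailo-NMS config for a retrained single-class YOLOv8n.
# The thresholds and especially the layer names below are assumptions;
# replace reg_layer / cls_layer with the bbox-regression and class-score
# conv layers of your own parsed model, one pair per output stride.
nms_config = {
    "nms_scores_th": 0.2,
    "nms_iou_th": 0.7,
    "image_dims": [640, 640],
    "max_proposals_per_class": 100,
    "classes": 1,
    "regression_length": 16,
    "bbox_decoders": [
        {"name": "bbox_decoder_8",  "stride": 8,  "reg_layer": "fish/conv41", "cls_layer": "fish/conv42"},
        {"name": "bbox_decoder_16", "stride": 16, "reg_layer": "fish/conv52", "cls_layer": "fish/conv53"},
        {"name": "bbox_decoder_32", "stride": 32, "reg_layer": "fish/conv62", "cls_layer": "fish/conv63"},
    ],
}

with open("fish_nms_config.json", "w") as f:
    json.dump(nms_config, f, indent=4)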