Hailo is not inferring bounding box convolution layers automatically

[info] Translation started on ONNX model best
[info] Restored ONNX model best (completion time: 00:00:00.05)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.24)
[info] NMS structure of yolov8 (or equivalent architecture) was detected.
[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv.
[info] Start nodes mapped from original model: 'images': 'best/input_layer1'.
[info] End nodes mapped from original model: '/model.22/cv2.0/cv2.0.2/Conv', '/model.22/cv3.0/cv3.0.2/Conv', '/model.22/cv2.1/cv2.1.2/Conv', '/model.22/cv3.1/cv3.1.2/Conv', '/model.22/cv2.2/cv2.2.2/Conv', '/model.22/cv3.2/cv3.2.2/Conv'.
[info] Translation completed on ONNX model best (completion time: 00:00:01.00)
[info] Saved HAR to: /home/wdeww/itona-exporter-executor/src/best_hailo_model.har
[info] Loading model script commands to best from string
[info] The layer best/conv41 was detected as reg_layer.
2025-02-10 20:01:04,190 - data_augmentation - ERROR - failed exporting: Cannot infer bbox conv layers automatically. Please specify the bbox layer in the json configuration file.

Although the Hailo compiler detects the YOLOv8 architecture, it fails to generate the corresponding NMS JSON automatically.
How can I fix this, or how can I create an appropriate JSON file myself?

Hey @wdeww_wdew ,

Welcome to the Hailo Community!

You need the following layers to properly set up Hailo post-processing:

/model.22/cv2.0/cv2.0.2/Conv
/model.22/cv3.0/cv3.0.2/Conv
/model.22/cv2.1/cv2.1.2/Conv
/model.22/cv3.1/cv3.1.2/Conv
/model.22/cv2.2/cv2.2.2/Conv
/model.22/cv3.2/cv3.2.2/Conv

These layers correspond to the bounding box outputs in the YOLOv8 model. To implement them:

JSON configuration:

{
    "bbox_layer": [
        "/model.22/cv2.0/cv2.0.2/Conv",
        "/model.22/cv3.0/cv3.0.2/Conv",
        "/model.22/cv2.1/cv2.1.2/Conv",
        "/model.22/cv3.1/cv3.1.2/Conv",
        "/model.22/cv2.2/cv2.2.2/Conv",
        "/model.22/cv3.2/cv3.2.2/Conv"
    ],
    "input_layer": "best/input_layer1",
    "postprocessing": {
        "nms": {
            "iou_threshold": 0.45,
            "score_threshold": 0.3
        }
    }
}
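
To apply it, save your NMS config JSON to disk and reference it from the model script via the nms_postprocess command, rather than relying on auto-detection. A minimal sketch, assuming a hailo8 target; the file names ("best.onnx", "best_nms_config.json") are placeholders, not taken from your project:

from hailo_sdk_client import ClientRunner

runner = ClientRunner(hw_arch="hailo8")  # adjust to your target device
runner.translate_onnx_model(
    "best.onnx",                         # assumed path to your exported model
    "best",
    end_node_names=[
        "/model.22/cv2.0/cv2.0.2/Conv", "/model.22/cv3.0/cv3.0.2/Conv",
        "/model.22/cv2.1/cv2.1.2/Conv", "/model.22/cv3.1/cv3.1.2/Conv",
        "/model.22/cv2.2/cv2.2.2/Conv", "/model.22/cv3.2/cv3.2.2/Conv",
    ],
)

# Point post-processing at the explicit NMS config instead of letting the
# compiler try to infer the bbox conv layers on its own.
runner.load_model_script(
    'nms_postprocess("best_nms_config.json", meta_arch=yolov8, engine=cpu)\n'
)

(engine=cpu keeps the NMS on the host; use whichever engine your pipeline expects.)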

Thanks for replying.
I already specified the proper layers, as you can see from the logs, with this code:

hn, npz = self.runner.translate_onnx_model(
    onnx_model_path,
    self.model_name,
    end_node_names=[
        '/model.22/cv2.0/cv2.0.2/Conv', '/model.22/cv3.0/cv3.0.2/Conv',
        '/model.22/cv2.1/cv2.1.2/Conv', '/model.22/cv3.1/cv3.1.2/Conv',
        '/model.22/cv2.2/cv2.2.2/Conv', '/model.22/cv3.2/cv3.2.2/Conv',
    ],
)

As for the configuration, I inspected the JSON and set it up as follows.
Please tell me whether this is correct or not:

{
    "nms_scores_th": 0.001,
    "nms_iou_th": 0.7,
    "image_dims": [
        640,
        640
    ],
    "max_proposals_per_class": 100,
    "classes": 1,
    "regression_length": 16,
    "background_removal": false,
    "background_removal_index": 0,
    "bbox_decoders": [
        {
            "name": "bbox_decoder_8",
            "stride": 8,
            "reg_layer":"conv41", 
            "cls_layer":"conv42"
        },
        {
            "name": "bbox_decoder_16",
            "stride": 16,
            "reg_layer":"conv52", 
            "cls_layer":"conv53"
        },
        {
            "name": "bbox_decoder_32",
            "stride": 32,
            "reg_layer":"conv62", 
            "cls_layer":"conv63"
        }
    ]
}
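
For reference, one way to sanity-check the reg/cls assignment is to look at the output channels of each end-node Conv in the ONNX: the cv2.* heads should output 4 * regression_length = 64 channels (reg), while the cv3.* heads should output num_classes channels (cls). A rough sketch, assuming a standard Ultralytics export and the onnx package; the path is a placeholder:

import onnx

model = onnx.load("best.onnx")  # assumed path to the exported model
weight_dims = {init.name: list(init.dims) for init in model.graph.initializer}

for node in model.graph.node:
    if node.op_type == "Conv" and node.name.startswith("/model.22/"):
        dims = weight_dims.get(node.input[1])  # Conv weights: [out_ch, in_ch, kH, kW]
        if dims:
            # 64 output channels -> reg head (cv2.*), num_classes channels -> cls head (cv3.*)
            print(node.name, "out_channels:", dims[0])

With a single class, the cls layers should show 1 output channel, which makes them easy to tell apart from the 64-channel reg layers.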

I'm asking about an automatic solution that doesn't require specifying the JSON manually.