YoloV11 On Raspberry Pi

Hi! I’m trying to optimize my YoloV11 model using the DFC. I have already converted my .pt file into ONNX; the parsing command ran fine, but I’m struggling to optimize my .har file.

This is the command I’m running in my Google Colab notebook:

!hailo optimize \
  --hw-arch hailo8 \
  --calib-set-path /content/calibration_data.npy \
  --output-har-path /content/optimized_model.har \
  --model-script /content/yolo11_finetune.alls \
  /content/model.har

I’ve taken out some layers, so my Detect head now takes its inputs from layers 10 and 13; here is the relevant part of my yaml file:
- [[10, 13], 1, Detect, [nc]] # 14: Detect(P4, P5)

For my .alls file, I’ve copied the general format of the .alls here: hailo_model_zoo/hailo_model_zoo/cfg/alls/hailo8/base/yolov6n.alls at master · hailo-ai/hailo_model_zoo · GitHub

Mine is as follows:

yolo11_finetune.alls

nms_postprocess("/content/nms_layer_config.json", yolov11s, engine=cpu)

model_optimization_config(calibration, batch_size=2, calibset_size=64)

post_quantization_optimization(
    finetune,
    policy=enabled,
    loss_factors=[0.5, 1.0],
    dataset_size=4000,
    epochs=10,
    learning_rate=1e-4,
    loss_layer_names=["/model.10/cv2/act/Mul_output_0", "/model.13/cv2/act/Mul_output_0"],
    loss_types=[l2rel, l2rel]
)

context_switch_param(mode=disabled)

performance_param(compiler_optimization_level=max)

Here is the nms_layer_config.json referenced in my .alls file:

{
    "nms_scores_th": 0.3,
    "nms_iou_th": 0.7,
    "image_dims": [640, 640],
    "max_proposals_per_class": 100,
    "classes": 1,
    "regression_length": 4,
    "background_removal": false,
    "background_removal_index": 0,
    "bbox_decoders": [
        {
            "name": "bbox_decoder_P4",
            "stride": 16,
            "reg_layer": "/model.10/cv2/act/Mul_output_0",
            "cls_layer": "/model.10/cv2/act/Sigmoid_output_0"
        },
        {
            "name": "bbox_decoder_P5",
            "stride": 32,
            "reg_layer": "/model.13/cv2/act/Mul_output_0",
            "cls_layer": "/model.13/cv2/act/Sigmoid_output_0"
        }
    ]
}
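(Note that since the file is plain JSON, Python-style `False` would be rejected by a JSON parser; it has to be lowercase `false`. As a sanity check that the file itself parses and to list every layer name it references, I use this small helper — the path is just where I keep the file:)

```python
import json

def check_nms_config(path):
    """Load the NMS config and return every decoder layer name it references.

    json.load raises ValueError if the file isn't valid JSON
    (e.g. Python-style False instead of lowercase false).
    """
    with open(path) as f:
        cfg = json.load(f)
    layers = []
    for dec in cfg.get("bbox_decoders", []):
        layers.append(dec["reg_layer"])
        layers.append(dec["cls_layer"])
    return layers

# Example usage:
# print(check_nms_config("/content/nms_layer_config.json"))
```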

Basically, my issue is that it fails with:
raise AllocatorScriptParserException(f"Invalid scope name {layer_parts[0]} exists")
which I take to mean it can’t find the layers referenced by both my nms layer config and my .alls file. I used netron.app on my ONNX file to find those layer names, but I wonder if the naming conventions for the layer outputs change when the model is translated into a .har file?
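To try to dig into this myself, I put together a rough sketch that dumps the layer names stored inside the .har. This assumes the .har is a tar archive containing an `.hn` member, which is a JSON graph with a top-level `layers` dict whose entries may carry an `original_names` list mapping back to the ONNX node names — that matches my understanding of the DFC output, but it may differ between SDK versions, so treat it as a guess:

```python
import json
import tarfile

def har_layer_names(har_path):
    """Map each Hailo-side layer name to the ONNX node names it came from.

    Assumes the .har file is a tar archive with an ``*.hn`` JSON member
    whose top-level ``layers`` dict entries may carry ``original_names``
    (an assumption about the HAR format; verify against your DFC version).
    """
    with tarfile.open(har_path) as tar:
        hn_member = next(m for m in tar.getnames() if m.endswith(".hn"))
        hn = json.load(tar.extractfile(hn_member))
    return {
        name: props.get("original_names", [])
        for name, props in hn.get("layers", {}).items()
    }

# Example usage:
# for hailo_name, onnx_names in har_layer_names("/content/model.har").items():
#     print(hailo_name, "<-", onnx_names)
```

If this is right, the names on the left-hand side (something like `yolov11s/conv42`) would be what the .alls and NMS config should use, not the raw ONNX output names from Netron.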

For reference, my parsing command is:

!hailo parser onnx /content/model.onnx \
  --har-path /content/model.har \
  --start-node-names input \
  --end-node-names /model.23/Concat_3 \
  --hw-arch hailo8 \
  -y

If anyone has any ideas on how I could find the layer names the DFC expects, that would be great! I tried looking around but couldn’t find anything helpful for displaying the layers it should be referring to.