Custom Yolov8n_seg: No valid partition found

Hi
I am trying to generate a .HEF file of a yolov8n_seg model trained on a custom dataset with 4 classes.

I modified the .yaml and .alls files as follows:

yaml file

base:
- base/wro_yolov8_seg.yaml
network:
  network_name: wro_yolov8n_seg
paths:
  alls_script: wro_yolov8n_seg.alls
  network_path:
  - models_files/InstanceSegmentation/coco/yolov8/yolov8n/pretrained/2023-03-06/yolov8n-seg.onnx
  url: https://hailo-model-zoo.s3.eu-west-2.amazonaws.com/InstanceSegmentation/coco/yolov8/yolov8n/pretrained/2023-03-06/yolov8n-seg.zip
parser:
  nodes:
  - null
  - - /model.22/Concat_6
    - /model.22/proto/cv3/act/Mul
info:
  task: instance segmentation
  input_shape: 640x640x3
  output_shape: 8400x40x1 160x160x32x1
  operations: 12.04G
  parameters: 3.4M
  framework: pytorch
  training_data: coco instances train2017
  validation_data: coco instances val2017
  eval_metric: mAP
  full_precision_result: 30.32
  source: https://github.com/ultralytics/ultralytics
  license_url: https://github.com/ultralytics/ultralytics/blob/main/LICENSE
  license_name: GPL-3.0

The .alls file is

normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])

I removed the

change_output_activation(conv45, sigmoid)
change_output_activation(conv61, sigmoid)
change_output_activation(conv74, sigmoid)

lines, as I didn’t know whether they are needed for my model.
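For context on the line I kept: as I understand it, the normalization(mean, std) model-script command folds an (x - mean) / std step into the network input, so with mean 0 and std 255 the raw 0–255 pixel values are scaled into [0, 1] on-chip, matching Ultralytics' preprocessing. A minimal sketch of that per-channel computation (the example pixel values are my own):

```python
# Sketch of what normalization([0, 0, 0], [255, 255, 255]) computes per channel:
# out = (pixel - mean) / std, i.e. raw 0-255 inputs are scaled into [0, 1].
mean = [0.0, 0.0, 0.0]
std = [255.0, 255.0, 255.0]

pixel = [0.0, 127.5, 255.0]  # example R, G, B values
normalized = [(p - m) / s for p, m, s in zip(pixel, mean, std)]
print(normalized)  # [0.0, 0.5, 1.0]
```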

Then I ran the parse and optimize commands inside the Docker AI Suite image.

parse

hailomz parse --yaml /local/workspace/hailo_model_zoo/hailo_model_zoo/cfg/networks/wro_yolov8n_seg.yaml --ckpt best.onnx --start-node-names /model.0/conv/Conv --end-node-names /model.22/Concat /model.22/proto/cv3/act/Mul /model.22/Concat_4 --hw-arch hailo8

Initially, I set the end node names to output0 and output as I saw them in Netron, but the parser gave an error and suggested the names used in the command above. With those, parsing worked and produced the .har file.

Then I executed the optimize step

hailomz optimize --har wro_yolov8n_seg.har --hw-arch hailo8 --yaml /local/workspace/hailo_model_zoo/hailo_model_zoo/cfg/networks/wro_yolov8n_seg.yaml --ckpt best.onnx --calib-path images/ --end-node-names /model.22/Concat /model.22/proto/cv3/act/Mul /model.22/Concat_4 --classes 4 --performance

Initially it gave an error, but when I executed it again it produced the .har file.

Then I executed the compile command

hailomz compile  --yaml /local/workspace/hailo_model_zoo/hailo_model_zoo/cfg/networks/wro_yolov8n_seg.yaml --ckpt best.onnx --hw-arch hailo8 --end-node-names /model.22/Concat /model.22/proto/cv3/act/Mul /model.22/Concat_4 --har wro_yolov8n_seg.har --calib-path images/  --performance --classes 4

But this time it consistently gave the following error.

[info] Loading model script commands to wro_yolov8n_seg from /local/workspace/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/wro_yolov8n_seg.alls
[info] ParsedPerformanceParam command, setting optimization_level(max=2)
[info] Appending model script commands to wro_yolov8n_seg from string
[info] ParsedPerformanceParam command, setting optimization_level(max=2)
[info] Loading network parameters
[info] Starting Hailo allocation and compilation flow
[error] Mapping Failed (allocation time: 3s)
No valid partition found

[error] Failed to produce compiled graph
[error] BackendAllocatorException: Compilation failed: No valid partition found

I have no clue why it fails.

Any help is highly appreciated.

Hey @mzahana

This error typically shows up when the Hailo hardware is having trouble fitting your network’s architecture into its memory and processing constraints. Let me suggest a few things you can try:

First, double-check those end-node-names in your YAML file and commands. Since you mentioned you had to adjust these earlier for your segmentation outputs, make sure they’re still pointing to the right output layers. Even a small mismatch here can cause partition issues.

I notice you’re working with a normalization1 function in your .alls file that’s set for [0, 255]. If your model is expecting inputs normalized to [0, 1], you might want to adjust that to:

normalization([0.0, 0.0, 0.0], [1.0, 1.0, 1.0])

Also, about those output activations - you removed settings for conv45, conv61, and conv74. Are any of these connected to your segmentation outputs? If they are, you might need to put those activation settings back in.
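If they are, restoring them in the .alls script would look like the lines below. One caveat: the conv45/conv61/conv74 names are the ones the Hailo parser assigned to the stock yolov8n-seg translation, so the numbering could differ for your retrained model; it's worth confirming the layer names in your parsed HAR graph first.

```
change_output_activation(conv45, sigmoid)
change_output_activation(conv61, sigmoid)
change_output_activation(conv74, sigmoid)
```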

If you’re still having trouble, you could try:

  • Reducing the input resolution (maybe try something smaller than 640x640)
  • Lowering the optimization_level in your .alls script
  • Running the compile command in verbose mode to see exactly where it’s failing
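On the second point, a sketch of what lowering the optimization level in the .alls script could look like, assuming the model-script performance_param command accepts a numeric level (drop the --performance flag from the command line so it doesn't override the script setting):

```
# Hypothetical .alls line: request a lower compiler effort instead of the
# optimization_level(max=2) that --performance sets.
performance_param(compiler_optimization_level=1)
```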

Hi @omria

Thanks for your reply.

My model is simply yolov8n-seg, trained on a custom dataset with 4 classes. I didn’t change the model architecture; I used Ultralytics for training. I am not even sure whether I need to customize the model .yaml and .alls files in the Hailo Model Zoo package, or whether I can just use the ones you provide for the yolov8n-seg model.

I am using the AI Suite Docker image inside WSL2 with an Nvidia RTX 4070 GPU.