Compilation of a YOLOv8 network

Hello,

I am trying to compile a HAR network exported from YOLOv8 into a usable HEF model, but I am running into some errors.

I have already tried two different compiler versions, .27 and .28, which does not change anything. Furthermore, I have tried two different models: one trained inside the provided Docker and the other via the usual route. This makes no difference in being able to compile it.

These are my errors for the network trained in the .27 Docker:

[info] Loading network parameters
[info] Starting Hailo allocation and compilation flow
[error] Mapping Failed (allocation time: 10s)
No successful assignment for: format_conversion1_defuse_reshape_hxf_to_w_transposed, format_conversion1_defuse_width_feature_reshape, concat17

and with .28, for the Docker-trained network:

[info] To achieve optimal performance, set the compiler_optimization_level to "max" by adding performance_param(compiler_optimization_level=max) to the model script. Note that this may increase compilation time.
[info] Loading network parameters
[info] Starting Hailo allocation and compilation flow
[error] Mapping Failed (allocation time: 10s)
No successful assignment for: format_conversion1, concat17

When trying to compile the YOLO network that was not trained inside the Docker, I get this error with the .28 compiler:

[info] To achieve optimal performance, set the compiler_optimization_level to "max" by adding performance_param(compiler_optimization_level=max) to the model script. Note that this may increase compilation time.
[info] Loading network parameters
[info] Starting Hailo allocation and compilation flow
[error] Mapping Failed (allocation time: 3m 9s)
No successful assignment for: ew_sub_softmax1

[error] Failed to produce compiled graph
[error] BackendAllocatorException: Compilation failed: No successful assignment for: ew_sub_softmax1

I hope someone can help me solve this problem!

Hi, can you share the command that you’ve executed, and more details on the model that you’re using?

The code I have used:

from hailo_sdk_client import ClientRunner
import config

model_name = config.ONNX_MODEL_NAME
quantized_model_har_path = f"{model_name}_quantized_model.har"

# Load the quantized HAR and compile it to a HEF
runner = ClientRunner(har=quantized_model_har_path)

hef = runner.compile()

file_name = f"{model_name}.hef"
with open(file_name, "wb") as f:
    f.write(hef)

# Also save the compiled HAR for later inspection
har_path = f"{model_name}_compiled_model.har"
runner.save_har(har_path)

For my model I am using yolov8m, retrained on a dataset with 4 classes at imgsz 480.

If you need more information, feel free to ask.

This seems to be related to the nodes that were included in the graph; check the end node names you provided.

These are my end node names; they were given by the parsing step:

START_NODE_NAMES = ["images"]
END_NODE_NAMES = [
    "/model.22/Concat_3"
]

Okay, thank you, this was indeed the problem. When redoing the parsing of the file, it printed a message which I had not noticed:

[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv.

After changing this, it works as expected.
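For reference, those six recommended end nodes follow a regular pattern: for each of the three detection scales there is one box-regression head (cv2) and one classification head (cv3), so the list can be generated instead of typed out (a small sketch, assuming the standard Ultralytics YOLOv8 graph layout):

```python
# The six recommended YOLOv8 end node names: for each of the three detection
# scales (strides 8/16/32), the box-regression head (cv2) and the
# classification head (cv3).
END_NODE_NAMES = [
    f"/model.22/cv{head}.{scale}/cv{head}.{scale}.2/Conv"
    for scale in range(3)
    for head in (2, 3)
]
print(END_NODE_NAMES)
```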


Okay, coming back to this: I cannot run the model on the Hailo chip. I can load it, but I get this error:

NMS score threshold is set, but there is no NMS output in this model.
CHECK_SUCCESS failed with status=6

I have seen a forum post about changing the model's YAML file, but I cannot figure out where to find it, or how to change it, because I installed via pip.

The yaml files are all in the hailo_model_zoo directory. When you use the Hailo AI Software Suite Docker, go to:

/local/workspace/hailo_model_zoo/hailo_model_zoo/cfg/networks/

Here are the files on GitHub

GitHub - Hailo Model Zoo - networks

The yaml file points to a model script (alls file).

You can find them here:

/local/workspace/hailo_model_zoo/hailo_model_zoo/cfg/alls

GitHub - Hailo Model Zoo - alls

This file in turn uses a file called model_name_nms_config.json.

/local/workspace/hailo_model_zoo/hailo_model_zoo/cfg/postprocess_config

GitHub - Hailo Model Zoo - postprocess_config

Make local copies of the alls and json files, update the link in the alls and the values in the json, and use them with your conversion script.

When you use hailomz, you can make a local copy of the yaml, modify it with the link to your alls, and provide that when you call hailomz compile.
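As a minimal sketch of that copy-and-patch step (all paths and file contents below are placeholders, not the real model-zoo files):

```python
import tempfile
from pathlib import Path

# Hypothetical sketch of the "local copy" workflow: put copies of the model
# zoo's .alls and NMS json next to your project and rewrite the json path
# inside the .alls so it points at your edited copy.
workdir = Path(tempfile.mkdtemp())

# Stand-in for the json copied from cfg/postprocess_config/.
local_json = workdir / "yolov8s_nms_config.json"
local_json.write_text('{"classes": 4}')

# Stand-in for the relevant line of cfg/alls/generic/yolov8s.alls.
zoo_alls = ('nms_postprocess("../../postprocess_config/yolov8s_nms_config.json", '
            'meta_arch=yolov8, engine=cpu)\n')

# Local copy of the .alls, pointed at the local json copy.
local_alls_text = zoo_alls.replace(
    "../../postprocess_config/yolov8s_nms_config.json",
    str(local_json),
)
(workdir / "yolov8s.alls").write_text(local_alls_text)
print("../../postprocess_config" in local_alls_text)  # → False
```

The same kind of path rewrite applies to the yaml's alls_script entry when calling hailomz compile.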


Okay, that seems to work, but compiling the network from the retraining Docker throws errors:

hailo_model_optimization.acceleras.utils.acceleras_exceptions.NegativeSlopeExponentNonFixable: Quantization failed in layer yolov8s/conv63 due to unsupported required slope. Desired shift is 14.0, but op has only 8 data bits. This error raises when the data or weight range are not balanced. Mostly happens when using random calibration-set/weights, the calibration-set is not normalized properly or batch-normalization was not used during training.

You can find the solution in reply #25 of the following thread.

Hailo Community - Problem with model optimization - 1648 reply #25

That is what I am doing. Let me specify my commands:

hailomz compile --ckpt=best.onnx --hw-arch hailo8l --calib-path data/ --classes 4 --performance --yaml hailo_model_zoo/cfg/networks/yolov8s.yaml 

with this json:

{
	"nms_scores_th": 0.2,
	"nms_iou_th": 0.7,
	"image_dims": [
		480,
		480
	],
	"max_proposals_per_class": 100,
	"classes": 4,
	"regression_length": 16,
	"background_removal": false,
	"background_removal_index": 0,
	"bbox_decoders": [
		{
			"name": "bbox_decoder41",
			"stride": 8,
			"reg_layer": "conv41",
			"cls_layer": "conv42"
		},
		{
			"name": "bbox_decoder52",
			"stride": 16,
			"reg_layer": "conv52",
			"cls_layer": "conv53"
		},
		{
			"name": "bbox_decoder62",
			"stride": 32,
			"reg_layer": "conv62",
			"cls_layer": "conv63"
		}
	]
}

and this .alls

quantization_param([conv63], force_range_out=[0.0, 1.0]) 
normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])
change_output_activation(conv42, sigmoid)
change_output_activation(conv53, sigmoid)
change_output_activation(conv63, sigmoid)
nms_postprocess("../../postprocess_config/yolov8s_nms_config.json", meta_arch=yolov8, engine=cpu)

and my yaml:

base:
- base/yolov8.yaml
postprocessing:
  device_pre_post_layers:
    nms: true
  hpp: true
network:
  network_name: yolov8s
paths:
  network_path:
  - best.onnx
  alls_script: hailo_model_zoo/cfg/alls/generic/yolov8s.alls
parser:
  nodes:
  - null
  - - /model.22/cv2.0/cv2.0.2/Conv
    - /model.22/cv3.0/cv3.0.2/Conv
    - /model.22/cv2.1/cv2.1.2/Conv
    - /model.22/cv3.1/cv3.1.2/Conv
    - /model.22/cv2.2/cv2.2.2/Conv
    - /model.22/cv3.2/cv3.2.2/Conv
info:
  task: object detection
  input_shape: 480x480x3
  output_shape: 80x5x100
  operations: 28.6G
  parameters: 11.2M
  framework: pytorch
  training_data: coco train2017
  validation_data: coco val2017
  eval_metric: mAP
  full_precision_result: 44.75
  source: https://github.com/ultralytics/ultralytics
  license_url: https://github.com/ultralytics/ultralytics/blob/main/LICENSE
  license_name: GPL-3.0
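A quick way to sanity-check the JSON and .alls above against each other (a minimal sketch; the config is simply pasted inline):

```python
import json

# Sanity-check the NMS config above: there is one bbox decoder per detection
# scale, strides double per scale, and the cls layers listed here should be
# exactly the ones given a sigmoid activation in the .alls.
nms_config = json.loads("""
{
    "nms_scores_th": 0.2,
    "nms_iou_th": 0.7,
    "image_dims": [480, 480],
    "max_proposals_per_class": 100,
    "classes": 4,
    "regression_length": 16,
    "background_removal": false,
    "background_removal_index": 0,
    "bbox_decoders": [
        {"name": "bbox_decoder41", "stride": 8,  "reg_layer": "conv41", "cls_layer": "conv42"},
        {"name": "bbox_decoder52", "stride": 16, "reg_layer": "conv52", "cls_layer": "conv53"},
        {"name": "bbox_decoder62", "stride": 32, "reg_layer": "conv62", "cls_layer": "conv63"}
    ]
}
""")

assert nms_config["classes"] == 4
assert [d["stride"] for d in nms_config["bbox_decoders"]] == [8, 16, 32]

cls_layers = [d["cls_layer"] for d in nms_config["bbox_decoders"]]
print(cls_layers)  # → ['conv42', 'conv53', 'conv63']
```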

Can you try first without the --performance flag?


@koenvanwijlick
Have you been able to resolve the issues?
I am completely new to this. I understand some of the steps, but not all of them, and there are multiple moving parts. Everything is documented by Hailo, but I am still getting errors. I am just trying to convert a YOLOv8 model. I am not able to compile to HAR with the Python code from the Hailo docs, but I can convert to HAR using DFC Studio; however, I cannot proceed further due to errors in quantization.
It would be great if you could share a step-by-step guide. I have been struggling to get a compiled model for the last 2 weeks.
I am also new to NNs, so can you share what resources I should study? I am exploring ML on a Raspberry Pi.

Thank you.