Convert YOLO PyTorch model to HEF for Raspberry Pi 5 with Hailo-8L

Hello, (sorry for my English)

I’m trying to build a custom YOLOv8n model from my own data and use it on my Raspberry Pi 5 with a Hailo-8L chip.

I found this great tutorial: Tutorial of AI Kit with Raspberry Pi 5 about YOLOv8n object detection | Seeed Studio Wiki
but it doesn’t work for me, and I’ve really tried every approach I could find.

I’ve been through a lot of tutorials and topics, and I really don’t know what to do anymore.

I train my model with YOLO (which works well for detection):

import os
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pretrained v8n checkpoint (name assumed)
model.train(
    data=os.getenv("TRAIN_DATASET_PATH"),
    epochs=50,
    imgsz=1024,
    name=os.getenv("TRAIN_DATASET_NAME"),
    verbose=True,
    save=False,
    project=runs_dir,  # defined elsewhere in my script
    exist_ok=True,
    device=os.getenv("TRAIN_DEVICE"),  # GPU
    workers=8,
)

I’ve managed to convert my PyTorch model (.pt) to ONNX:

model.export(format="onnx", opset=11, imgsz=1024)
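As a quick sanity check before handing the ONNX to the Hailo tools, something like this (assuming the onnx Python package is installed) confirms that the export is well-formed:

import onnx

model = onnx.load("guinea-pig-chons-v12.onnx")
onnx.checker.check_model(model)  # raises if the graph is malformed
print(model.opset_import)        # should report opset 11, as exported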

I even tried to quantize it:

quant_pre_process(input_model_path=onnx_model_path, output_model_path=onnx_preprocessed_model_path)
quantize_dynamic(model_input=onnx_preprocessed_model_path, model_output=onnx_quantized_model_path, per_channel=False, weight_type=QuantType.QUInt8)
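For reference, the same calls in runnable form with their imports (the module paths are my assumption from onnxruntime's quantization API, and the output file names are hypothetical). Note that this ONNX Runtime quantization is separate from the Hailo flow, where the optimization step performs its own 8-bit quantization:

from onnxruntime.quantization import QuantType, quantize_dynamic
from onnxruntime.quantization.shape_inference import quant_pre_process

# Shape-inference/preprocessing pass recommended before quantization
quant_pre_process(
    input_model_path="guinea-pig-chons-v12.onnx",
    output_model_path="guinea-pig-chons-v12-pre.onnx",
)

# Dynamic quantization of the weights to 8-bit
quantize_dynamic(
    model_input="guinea-pig-chons-v12-pre.onnx",
    model_output="guinea-pig-chons-v12-quant.onnx",
    per_channel=False,
    weight_type=QuantType.QUInt8,
)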

I can even convert the original ONNX to HAR:

hailo parser onnx --hw-arch hailo8l --har-path guinea-pig-chons-v12.har -y guinea-pig-chons-v12.onnx

On the other hand, if I try with the quantized model, I get this warning and then it crashes:

WARNING: failed to run "Reshape" op (name is "/model.9/cv2/conv/Conv_output_0_bias_reshape_output"), skip...
File "/local/workspace/hailo_virtualenv/lib/python3.8/site-packages/hailo_sdk_client/model_translator/onnx_translator/onnx_graph.py", line 2908, in _is_spatial_flatten_with_features_to_heads_reshape
pred = next(iter(self.graph.predecessors(self)))
StopIteration

And if I try to convert the HAR to HEF, it doesn’t work either:

hailo compiler --hw-arch hailo8l guinea-pig-chons-v12.har

I get this: "Model requires quantized weights in order to run on HW, but none were given. Did you forget to quantize?"

I tried the "optimize" command:
hailomz optimize yolov8n --hw-arch hailo8l --har guinea-pig-chons-v12.har

which returns this:
hailo_sdk_common.hailo_nn.exceptions.HailoNNException: The layer named yolov8n/conv41 doesn't exist in the HN

Or this: hailomz compile yolov8n --hw-arch hailo8l --har ./yolov8n.har

I tried the "compile" command:

hailomz compile --ckpt guinea-pig-chons-v12.onnx --calib-path data --yaml hailo_model_zoo/hailo_model_zoo/cfg/networks/yolov8n.yaml

or this:

hailomz compile yolov8n --hw-arch hailo8l --har guinea-pig-chons-v12_2.har

which returns this:
ValueError: Tried to convert 'input' to a tensor and failed. Error: None values not supported.

Hi @alkemist,
The Dataflow Compiler pipeline goes in a single direction:
Parser: input is an ONNX/TFLite file, output is a parsed HAR file.
Optimization: input is a parsed HAR file, output is an optimized HAR file (weights are quantized to 8-bit).
Compilation: input is an optimized HAR file, output is a HEF file (binary).
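A minimal sketch of those three steps with the DFC Python API (ClientRunner), assuming a 1024x1024 input and a placeholder calibration set; check the exact signatures against your DFC version:

import numpy as np
from hailo_sdk_client import ClientRunner

# 1. Parse: ONNX -> parsed HAR
runner = ClientRunner(hw_arch="hailo8l")
runner.translate_onnx_model("guinea-pig-chons-v12.onnx", "guinea_pig_chons_v12")
runner.save_har("guinea-pig-chons-v12_parsed.har")

# 2. Optimize: quantize the weights to 8-bit using a calibration set
#    (random placeholder here; use real preprocessed images in practice)
calib_data = np.random.rand(8, 1024, 1024, 3).astype(np.float32)
runner.optimize(calib_data)
runner.save_har("guinea-pig-chons-v12_optimized.har")

# 3. Compile: optimized HAR -> HEF binary
hef = runner.compile()
with open("guinea-pig-chons-v12.hef", "wb") as f:
    f.write(hef)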

From what I can see, you used the wrong inputs for some of the steps, and you also hit a known parsing error.
For yolov8, you need to manually define the end nodes. You can find the correct end nodes in the yolov8n .yaml file in the Hailo Model Zoo:
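For reference, passing them at parse time looks roughly like this. The six Conv names below are the yolov8n head outputs from the Model Zoo yaml (the same ones used later in this thread); confirm that they match your own ONNX, e.g. in Netron:

from hailo_sdk_client import ClientRunner

runner = ClientRunner(hw_arch="hailo8l")
runner.translate_onnx_model(
    "guinea-pig-chons-v12.onnx",
    "guinea_pig_chons_v12",
    end_node_names=[
        "/model.22/cv2.0/cv2.0.2/Conv",
        "/model.22/cv3.0/cv3.0.2/Conv",
        "/model.22/cv2.1/cv2.1.2/Conv",
        "/model.22/cv3.1/cv3.1.2/Conv",
        "/model.22/cv2.2/cv2.2.2/Conv",
        "/model.22/cv3.2/cv3.2.2/Conv",
    ],
)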

Regards,

Thanks, I understand the process and the steps better.
I’ve been exploring this field for a while; it’s still very vague and mysterious, but exciting.

At the moment I’m trying to go step by step.

I parse:
hailo parser onnx --hw-arch hailo8l --har-path guinea-pig-chons-v12.har -y guinea-pig-chons-v12.onnx

I notice the start and end nodes:
[info] Start nodes mapped from original model: 'images': 'guinea-pig-chons-v12/input_layer1'.
[info] End nodes mapped from original model: '/model.22/dfl/Reshape_1', '/model.22/Sigmoid'.

And I optimize:
hailomz optimize --hw-arch hailo8l --har ./guinea-pig-chons-v12.har yolov8n

And I get this error:
hailo_sdk_common.hailo_nn.exceptions.HailoNNException: The layer named yolov8n/conv41 doesn't exist in the HN

I visualize the ONNX and I can see where conv41 sits in the graph (screenshots: postimg.cc/wRz5992n and postimg.cc/yJJyPQWp; apparently I can post neither links nor images directly).

But I don’t understand why it accepts conv39 and not conv41.

I notice that in the yolov8n model yaml, the list of nodes contains Convs, for example "/model.22/cv3.1/cv3.1.2/Conv".
What do the numbers correspond to, and is that related to the fact that it accepts conv39 but not conv41?

In the Jupyter tutorial, for the parser, it indicates start and end nodes, and a dictionary mapping the start node to its input shape:

runner.translate_onnx_model(
    onnx_path,
    onnx_model_name,
    start_node_names=["input.1"],
    end_node_names=["191"],
    net_input_shapes={"input.1": [1, 3, 224, 224]},
)
But how do I know what to put in these parameters?

Hi @alkemist,
These numbers are just the names of the layers in the ONNX from the Hailo Model Zoo. You can take them from there and compare them to your model.
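A small sketch, assuming the onnx package, that prints the graph inputs with their shapes (for start_node_names / net_input_shapes) and the Conv node names (candidates for end_node_names):

import onnx

model = onnx.load("guinea-pig-chons-v12.onnx")

# Graph inputs and their shapes -> start_node_names / net_input_shapes
for inp in model.graph.input:
    dims = [d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)

# Conv nodes -> candidate end_node_names
for node in model.graph.node:
    if node.op_type == "Conv":
        print(node.name)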

You can look at this article to better understand the yolov8 parsing:

Regards,

Well, I managed to convert my model using this command:

hailomz compile --ckpt guinea-pig-chons-v12.onnx --hw-arch hailo8l --calib-path data --yaml hef_config.yaml --classes 4 --performance

and this hef_config.yaml file:

base:
- base/yolov8.yaml
postprocessing:
  device_pre_post_layers:
    nms: true
  hpp: true
network:
  network_name: yolov8n
paths:
  network_path:
  - models_files/ObjectDetection/Detection-COCO/yolo/yolov8n/2023-01-30/yolov8n.onnx
  alls_script: yolov8n.alls
  url: https://hailo-model-zoo.s3.eu-west-2.amazonaws.com/ObjectDetection/Detection-COCO/yolo/yolov8n/2023-01-30/yolov8n.zip
parser:
  nodes:
  # start node (null = keep the parser's default start node)
  - null
  # end nodes: the six detection-head output Convs, before post-processing
  - - /model.22/cv2.0/cv2.0.2/Conv
    - /model.22/cv3.0/cv3.0.2/Conv
    - /model.22/cv2.1/cv2.1.2/Conv
    - /model.22/cv3.1/cv3.1.2/Conv
    - /model.22/cv2.2/cv2.2.2/Conv
    - /model.22/cv3.2/cv3.2.2/Conv
info:
  task: object detection
  input_shape: 640x640x3
  output_shape: 80x5x100
  operations: 8.74G
  parameters: 3.2M
  framework: pytorch
  training_data: coco train2017
  validation_data: coco val2017
  eval_metric: mAP
  full_precision_result: 37.23
  source: https://github.com/ultralytics/ultralytics
  license_url: https://github.com/ultralytics/ultralytics/blob/main/LICENSE
  license_name: GPL-3.0

Deduced from this other topic

Well, now I have other worries about using this model, but that’s another topic.