Custom model for Raspberry Pi 5 Fails on Hailo Dataflow Compiler

I am trying to create a custom model and run it on a Raspberry Pi 5, but I am getting an error when running the Hailo Dataflow Compiler. I am using YOLOv8n. Not sure what I’m doing wrong.

These are the steps I’ve taken; the error is shown below.

Docker


docker run --name "yolov8" -it --gpus all --ipc=host -v /home/sam/ai_kit:/workspace/ai_kit yolov8:v0

cd /workspace/ai_kit

yolo detect train data=config.yaml model=yolov8n.pt name=retrain_yolov8n epochs=1 batch=16

cp /ultralytics/runs/detect/train/weights/best.onnx /workspace/ai_kit/
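Between training and the cp above, the weights also have to be exported to ONNX so that best.onnx exists; a minimal sketch of that step using the Ultralytics Python API (the weights path is an assumption, taken from the cp command above):

# Export the retrained weights to ONNX; best.onnx is written next to best.pt
from ultralytics import YOLO

model = YOLO("/ultralytics/runs/detect/train/weights/best.pt")
model.export(format="onnx", imgsz=640)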

WSL


sudo apt update

sudo apt install build-essential

sudo apt install python3.10 python3.10-venv python3.10-dev

Setup Environment

python3.10 -m venv venv && source venv/bin/activate

cd ai_kit

sudo apt-get install graphviz graphviz-dev pythonpy python3-tk

sudo apt install build-essential graphviz libgraphviz-dev

pip install pygraphviz

pip install compiler/hailo_dataflow_compiler-3.27.0-py3-none-linux_x86_64.whl
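A quick sanity check that the Dataflow Compiler wheel is visible inside the venv (a minimal sketch; hailo_sdk_client is the SDK package installed by the wheel, as seen in the traceback further down):

# Verify the Dataflow Compiler's Python SDK imports inside the venv
import hailo_sdk_client

print("hailo_sdk_client found at:", hailo_sdk_client.__file__)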

Install hailo_model_zoo

git clone https://github.com/hailo-ai/hailo_model_zoo.git

cd hailo_model_zoo && pip install -e .

sudo apt-get update && sudo apt-get install libgl1-mesa-glx
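A similar check that the editable Model Zoo install and its hailomz entry point resolve (again just a sketch):

# Verify the model zoo package imports and the hailomz console script is on PATH
import shutil

import hailo_model_zoo

print("hailo_model_zoo found at:", hailo_model_zoo.__file__)
print("hailomz executable:", shutil.which("hailomz"))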

Compile (Hailo Dataflow Compiler v3.27.0)

(venv) sam@Core:~/ai_kit$ hailomz compile --ckpt best.onnx --calib-path /home/sam/ai_kit/datasets/images/train --yaml custom_yolov8n.yaml --classes 2 --hw-arch hailo8l
<Hailo Model Zoo INFO> Start run for network yolov8n ...
<Hailo Model Zoo INFO> Initializing the hailo8l runner...
[info] Translation started on ONNX model yolov8n
[info] Restored ONNX model yolov8n (completion time: 00:00:00.05)
[info] Extracted ONNXRuntime meta-data for Hailo model (completion time: 00:00:00.28)
[info] NMS structure of yolov8 (or equivalent architecture) was detected.
[info] In order to use HailoRT post-processing capabilities, these end node names should be used: /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv.
[info] Start nodes mapped from original model: 'images': 'yolov8n/input_layer1'.
[info] End nodes mapped from original model: '/model.22/cv2.0/cv2.0.2/Conv', '/model.22/cv3.0/cv3.0.2/Conv', '/model.22/cv2.1/cv2.1.2/Conv', '/model.22/cv3.1/cv3.1.2/Conv', '/model.22/cv2.2/cv2.2.2/Conv', '/model.22/cv3.2/cv3.2.2/Conv'.
[info] Translation completed on ONNX model yolov8n (completion time: 00:00:00.96)
[info] Saved HAR to: /home/sam/ai_kit/yolov8n.har
<Hailo Model Zoo INFO> Preparing calibration data...
[info] Loading model script commands to yolov8n from /home/sam/ai_kit/hailo_model_zoo/hailo_model_zoo/cfg/alls/generic/yolov8n.alls
[info] Loading model script commands to yolov8n from string
[info] The layer yolov8n/conv41 was detected as reg_layer.
[info] The layer yolov8n/conv52 was detected as reg_layer.
[info] The layer yolov8n/conv62 was detected as reg_layer.
[info] Starting Model Optimization
[warning] Reducing optimization level to 0 (the accuracy won't be optimized and compression won't be used) because there's less data than the recommended amount (1024), and there's no available GPU
[info] Model received quantization params from the hn
Traceback (most recent call last):
  File "/home/sam/ai_kit/venv/bin/hailomz", line 33, in <module>
    sys.exit(load_entry_point('hailo-model-zoo', 'console_scripts', 'hailomz')())
  File "/home/sam/ai_kit/hailo_model_zoo/hailo_model_zoo/main.py", line 122, in main
    run(args)
  File "/home/sam/ai_kit/hailo_model_zoo/hailo_model_zoo/main.py", line 111, in run
    return handlers[args.command](args)
  File "/home/sam/ai_kit/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 250, in compile
    _ensure_optimized(runner, logger, args, network_info)
  File "/home/sam/ai_kit/hailo_model_zoo/hailo_model_zoo/main_driver.py", line 91, in _ensure_optimized
    optimize_model(
  File "/home/sam/ai_kit/hailo_model_zoo/hailo_model_zoo/core/main_utils.py", line 321, in optimize_model
    runner.optimize(calib_feed_callback)
  File "/home/sam/ai_kit/venv/lib/python3.10/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
    return func(self, *args, **kwargs)
  File "/home/sam/ai_kit/venv/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py", line 1674, in optimize
    self._optimize(calib_data, data_type=data_type, work_dir=work_dir)
  File "/home/sam/ai_kit/venv/lib/python3.10/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
    return func(self, *args, **kwargs)
  File "/home/sam/ai_kit/venv/lib/python3.10/site-packages/hailo_sdk_client/runner/client_runner.py", line 1542, in _optimize
    self._sdk_backend.full_quantization(calib_data, data_type=data_type, work_dir=work_dir)
  File "/home/sam/ai_kit/venv/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 910, in full_quantization
    self._full_acceleras_run(self.calibration_data, data_type)
  File "/home/sam/ai_kit/venv/lib/python3.10/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1034, in _full_acceleras_run
    optimization_flow.run()
  File "/home/sam/ai_kit/venv/lib/python3.10/site-packages/hailo_model_optimization/flows/optimization_flow.py", line 246, in run
    self.step1()
  File "/home/sam/ai_kit/venv/lib/python3.10/site-packages/hailo_model_optimization/tools/subprocess_wrapper.py", line 63, in parent_wrapper
    func(self, *args, **kwargs)
  File "/home/sam/ai_kit/venv/lib/python3.10/site-packages/hailo_model_optimization/flows/optimization_flow.py", line 253, in step1
    self.setup_optimization()
  File "/home/sam/ai_kit/venv/lib/python3.10/site-packages/hailo_model_optimization/flows/optimization_flow.py", line 269, in setup_optimization
    self._create_mix_precision()
  File "/home/sam/ai_kit/venv/lib/python3.10/site-packages/hailo_model_optimization/flows/optimization_flow.py", line 542, in _create_mix_precision
    algo = CreateMixedPrecision(model=self.model,
  File "/home/sam/ai_kit/venv/lib/python3.10/site-packages/hailo_model_optimization/algorithms/mixed_precision/create_mixed_precision.py", line 42, in __init__
    self._model.compute_output_shape(shapes)
  File "/home/sam/ai_kit/venv/lib/python3.10/site-packages/hailo_model_optimization/acceleras/model/hailo_model/hailo_model.py", line 895, in compute_output_shape
    return self.compute_and_verify_output_shape(input_shape, verify_layer_inputs_shape=False)
  File "/home/sam/ai_kit/venv/lib/python3.10/site-packages/hailo_model_optimization/acceleras/model/hailo_model/hailo_model.py", line 929, in compute_and_verify_output_shape
    layer_output_shape = layer.compute_output_shape(layer_input_shapes)
  File "/home/sam/ai_kit/venv/lib/python3.10/site-packages/keras/engine/base_layer.py", line 917, in compute_output_shape
    outputs = self(inputs, training=False)
  File "/home/sam/ai_kit/venv/lib/python3.10/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/tmp/__autograph_generated_filefyb061mg.py", line 41, in tf__call
    outputs = ag__.converted_call(ag__.ld(self).call_core, (ag__.ld(inputs), ag__.ld(training)), dict(**ag__.ld(kwargs)), fscope)
  File "/tmp/__autograph_generated_file2sxzlcah.py", line 90, in tf__call_core
    ag__.if_stmt(ag__.ld(self).postprocess_type in [ag__.ld(PostprocessType).NMS, ag__.ld(PostprocessType).BBOX_DECODER], if_body_3, else_body_3, get_state_3, set_state_3, ('do_return', 'retval_'), 2)
  File "/tmp/__autograph_generated_file2sxzlcah.py", line 22, in if_body_3
    retval_ = ag__.converted_call(ag__.ld(self).bbox_decoding_and_nms_call, (ag__.ld(inputs),), dict(is_bbox_decoding_only=ag__.ld(self).postprocess_type == ag__.ld(PostprocessType).BBOX_DECODER), fscope)
  File "/tmp/__autograph_generated_file_z3yy88c.py", line 99, in tf__bbox_decoding_and_nms_call
    ag__.if_stmt(ag__.ld(self).meta_arch in [ag__.ld(NMSOnCpuMetaArchitectures).YOLOV5, ag__.ld(NMSOnCpuMetaArchitectures).YOLOX], if_body_4, else_body_4, get_state_4, set_state_4, ('decoded_bboxes', 'detection_score', 'do_return', 'retval_', 'inputs'), 4)
  File "/tmp/__autograph_generated_file_z3yy88c.py", line 96, in else_body_4
    ag__.if_stmt(ag__.ld(self).meta_arch == ag__.ld(NMSOnCpuMetaArchitectures).YOLOV5_SEG, if_body_3, else_body_3, get_state_3, set_state_3, ('decoded_bboxes', 'detection_score', 'do_return', 'retval_'), 4)
  File "/tmp/__autograph_generated_file_z3yy88c.py", line 93, in else_body_3
    ag__.if_stmt(ag__.ld(self).meta_arch == ag__.ld(NMSOnCpuMetaArchitectures).YOLOV8, if_body_2, else_body_2, get_state_2, set_state_2, ('decoded_bboxes', 'detection_score'), 2)
  File "/tmp/__autograph_generated_file_z3yy88c.py", line 69, in if_body_2
    (decoded_bboxes, detection_score) = ag__.converted_call(ag__.ld(self).yolov8_decoding_call, (ag__.ld(inputs),), None, fscope)
  File "/tmp/__autograph_generated_fileptahs8gt.py", line 66, in tf__yolov8_decoding_call
    decoded_bboxes = ag__.converted_call(ag__.ld(tf).expand_dims, (ag__.ld(decoded_bboxes),), dict(axis=2), fscope)
ValueError: Exception encountered when calling layer "yolov8n/yolov8_nms_postprocess" (type HailoPostprocess).

in user code:

    File "/home/sam/ai_kit/venv/lib/python3.10/site-packages/hailo_model_optimization/acceleras/hailo_layers/base_hailo_none_nn_core_layer.py", line 34, in call  *
        outputs = self.call_core(inputs, training, **kwargs)
    File "/home/sam/ai_kit/venv/lib/python3.10/site-packages/hailo_model_optimization/acceleras/hailo_layers/hailo_postprocess.py", line 72, in call_core  *
        is_bbox_decoding_only=self.postprocess_type == PostprocessType.BBOX_DECODER)
    File "/home/sam/ai_kit/venv/lib/python3.10/site-packages/hailo_model_optimization/acceleras/hailo_layers/hailo_postprocess.py", line 100, in bbox_decoding_and_nms_call  *
        decoded_bboxes, detection_score = self.yolov8_decoding_call(inputs)
    File "/home/sam/ai_kit/venv/lib/python3.10/site-packages/hailo_model_optimization/acceleras/hailo_layers/hailo_postprocess.py", line 265, in yolov8_decoding_call  *
        decoded_bboxes = tf.expand_dims(decoded_bboxes, axis=2)

    ValueError: Tried to convert 'input' to a tensor and failed. Error: None values not supported.


Call arguments received by layer "yolov8n/yolov8_nms_postprocess" (type HailoPostprocess):
  • inputs=['tf.Tensor(shape=(None, 80, 80, 64), dtype=float32)', 'tf.Tensor(shape=(None, 80, 80, 1), dtype=float32)', 'tf.Tensor(shape=(None, 40, 40, 64), dtype=float32)', 'tf.Tensor(shape=(None, 40, 40, 1), dtype=float32)', 'tf.Tensor(shape=(None, 20, 20, 64), dtype=float32)', 'tf.Tensor(shape=(None, 20, 20, 1), dtype=float32)']
  • training=False
  • kwargs=<class 'inspect._empty'>

My custom_yolov8n.yaml

base:
- base/yolov8.yaml
postprocessing:
  device_pre_post_layers:
    nms: true
  hpp: true
network:
  network_name: yolov8n
paths:
  network_path:
  - /home/sam/ai_kit/best.onnx
  alls_script: yolov8n.alls
  url: https://hailo-model-zoo.s3.eu-west-2.amazonaws.com/ObjectDetection/Detection-COCO/yolo/yolov8n/2023-01-30/yolov8n.zip
parser:
  nodes:
  - null
  - - /model.22/cv2.0/cv2.0.2/Conv
    - /model.22/cv3.0/cv3.0.2/Conv
    - /model.22/cv2.1/cv2.1.2/Conv
    - /model.22/cv3.1/cv3.1.2/Conv
    - /model.22/cv2.2/cv2.2.2/Conv
    - /model.22/cv3.2/cv3.2.2/Conv
info:
  task: object detection
  input_shape: 640x640x3
  output_shape: 80x5x100
  operations: 8.74G
  parameters: 3.2M
  framework: pytorch
  training_data: coco train2017
  validation_data: coco val2017
  eval_metric: mAP
  full_precision_result: 37.23
  source: https://github.com/ultralytics/ultralytics
  license_url: https://github.com/ultralytics/ultralytics/blob/main/LICENSE
  license_name: GPL-3.0

The model input is named images and the output is output0.
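Those names can be confirmed directly from the exported file; a minimal sketch using the onnx package (assuming it is installed in the venv):

# Print the graph-level input and output names of the exported model
import onnx

m = onnx.load("/home/sam/ai_kit/best.onnx")
print("inputs: ", [i.name for i in m.graph.input])   # expecting ['images']
print("outputs:", [o.name for o in m.graph.output])  # expecting ['output0']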

Hi Sam,
Thanks for the detailed question. First, one disclaimer: we haven’t tested running the DFC or the Model Zoo on WSL2. We know it can run, but it’s not part of the product definition. That said, I don’t think this is the root issue here.

I think you need to check whether these output node names changed after you retrained your model.
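For example, you can list the node names in the retrained ONNX and check that the six .../Conv end nodes from your YAML are still present under the same names; a short sketch using the onnx package:

# Check that the expected end nodes still exist in the retrained ONNX graph
import onnx

expected = [
    "/model.22/cv2.0/cv2.0.2/Conv", "/model.22/cv3.0/cv3.0.2/Conv",
    "/model.22/cv2.1/cv2.1.2/Conv", "/model.22/cv3.1/cv3.1.2/Conv",
    "/model.22/cv2.2/cv2.2.2/Conv", "/model.22/cv3.2/cv3.2.2/Conv",
]
names = {node.name for node in onnx.load("best.onnx").graph.node}
for end_node in expected:
    print(end_node, "-> found" if end_node in names else "-> MISSING")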

Also, you can check out this very nice tutorial: