I created a HEF after training on my own images, but it doesn't work.

HEF file generated on a PC environment: Ubuntu 22.04, DFC version 3.27, yolov5_seg.

Problem: When I try to run the HEF file I created on the Raspberry Pi 5 AI Kit (Hailo-8L chip), an error like the one in the image occurs.

Methods attempted to solve:

  1. When converting from ONNX to HEF, change the input_shape and output_shape in the hailo_model_zoo/network/yolov5n_seg.yaml file to 640x640x3. (I also tried making it the same as the image size in the Hailo Raspberry Pi instance_segmentation.py.)
  2. To determine whether it is a camera problem, run detection.py with the existing HEF file and check that it works normally.
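For reference, the edit in step 1 might look roughly like the fragment below. This is a hedged sketch, not the actual file: the key names (`preprocessing`, `input_shape`, `classes`) are my guess at the Model Zoo yaml layout, so check them against your real yolov5n_seg.yaml before editing anything.

```yaml
# Hypothetical excerpt -- verify key names against your own
# hailo_model_zoo yolov5n_seg.yaml before changing values.
preprocessing:
  input_shape: [640, 640, 3]   # height, width, channels
evaluation:
  classes: 1                   # single-label model instead of the default 80
```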

The ONNX-to-HEF command I used: hailomz compile --ckpt ~/yolov5/runs/train/exp3/weights/best.onnx --calib-path ~/yolov5/train/yoju_0805.v2i.yolov8/train/images/ --yaml ~/hailo_ai/hailo_model_zoo/hailo_model_zoo/cfg/networks/yolov5n_seg.yaml --start-node-names /model.0/conv/Conv --end-node-names /model.24/m.0/Conv /model.24/m.1/Conv /model.24/m.2/Conv --classes 1 --hw-arch hailo8l

It's hard to tell directly; I would start by eliminating things one at a time:

  1. Run the HEF outside a pipeline with hailortcli run. If this works, the HEF itself is good.
  2. Remove parts from the pipeline one at a time to find the offending piece.
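Step 2 can be scripted: since the examples build the pipeline as a gst-launch-style string, you can bisect it by dropping one element at a time and relaunching. Below is a minimal sketch of such a helper; it is plain string manipulation (no GStreamer dependency), and the element names in the example string are shortened stand-ins for the real pipeline, not the exact properties from the logs.

```python
def drop_element(pipeline: str, element: str) -> str:
    """Remove one element (and its properties) from a gst-launch-style
    pipeline string, rejoining the remaining stages with '!'.

    Naive sketch: assumes the element name appears at the start of one
    or more '!'-separated stages and has no pad links to fix up.
    """
    stages = [s.strip() for s in pipeline.split("!")]
    kept = [s for s in stages if not s.startswith(element)]
    return " ! ".join(kept)

# Example: take out hailofilter to see whether the rest of the pipeline runs.
pipe = ("filesrc location=in.mp4 ! qtdemux ! h264parse ! avdec_h264 ! "
        "hailonet hef-path=model.hef ! hailofilter so-path=post.so ! "
        "hailooverlay ! fpsdisplaysink")
print(drop_element(pipe, "hailofilter"))
```

The resulting string can be handed straight to gst-launch-1.0 (or Gst.parse_launch) to test each reduced pipeline.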

(venv_hailo_rpi5_examples) ntrexlab@raspberrypi5:~/hailo-rpi5-examples $ hailortcli run ~/hailo-rpi5-examples/resources/0819_8l_exp3_yolov5n_seg.hef
Running streaming inference (/home/ntrexlab/hailo-rpi5-examples/resources/0819_8l_exp3_yolov5n_seg.hef):
Transform data: true
Type: auto
Quantized: true
Network yolov5n_seg/yolov5n_seg: 100% | 1315 | FPS: 262.69 | ETA: 00:00:00

Inference result:
Network group: yolov5n_seg
Frames count: 1315
FPS: 262.70
Send Rate: 2582.42 Mbit/s
Recv Rate: 847.36 Mbit/s

The results show that my HEF file's performance is good, but I still get an error when running the pipeline. If I delete hailofilter from the pipeline, the video appears, but YOLO does not run. Which part of the pipeline should I modify?

This is my execution command and error message.

(venv_hailo_rpi5_examples) ntrexlab@raspberrypi5:~/hailo-rpi5-examples $ python basic_pipelines/instance_segmentation.py --input resources/Produce.mp4
filesrc location=resources/Produce.mp4 name=src_0 ! queue name=queue_dec264 max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! qtdemux ! h264parse ! avdec_h264 max-threads=2 ! video/x-raw,format=I420 ! videoscale n-threads=2 ! videoconvert n-threads=3 name=src_convert qos=false ! video/x-raw, format=RGB, width=640, height=640, pixel-aspect-ratio=1/1 ! hailomuxer name=hmux hmux.src ! videoconvert n-threads=3 ! hailonet hef-path=/home/ntrexlab/hailo-rpi5-examples/basic_pipelines/…/resources/0819_8l_exp3_yolov5n_seg.hef batch-size=2 force-writable=true ! hailofilter function-name=yolov5seg so-path=/usr/lib/aarch64-linux-gnu/hailo/tappas//post-process/libyolov5seg_post.so qos=false ! identity name=identity_callback ! hailooverlay ! videoconvert n-threads=3 qos=false ! fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=true text-overlay=False signal-fps-measurements=true
Config file doesn’t exist, using default parameters
0:00:00.139537395 4908 0xaa2ca40 WARN qtdemux qtdemux.c:3238:qtdemux_parse_trex: failed to find fragment defaults for stream 1
0:00:00.139746599 4908 0xaa2ca40 WARN qtdemux qtdemux.c:3238:qtdemux_parse_trex: failed to find fragment defaults for stream 2

Do I need to modify hailofilter's libyolov5seg_post.so to run a HEF newly trained on my own images?

hailofilter is in the pipeline of the GStreamerInstanceSegmentationApp class in instance_segmentation.py.

Hi @hyeonju,
I don't think so; if you're using the same network as in our Model Zoo, the standard filter should support it.

So why does that error occur?
If I remove hailofilter from the pipeline, it runs, but YOLO does not. I would appreciate any advice on how to resolve this issue.

When hailofilter is not removed, the same command produces the same error message as shown above.

Are you using the same resolution? 640x640?

yes

This is the yaml file used when converting to the HEF file; I set the image size here.

base:

Maybe a problem with the input Produce.mp4 file?
Are you able to simply play it? Is it indeed an mp4?

Yes, I can play the MP4 video on the Raspberry Pi.
Additionally, this video works normally with the existing Raspberry Pi examples (detection.py, instance_segmentation.py).

This is my error message.

I made a lane detection model with yolov5n.
Unlike the existing model, it has a single label. In this case, should I modify the libyolov5seg_post.so file?
I understand that a segmentation fault is caused by accessing invalid memory.
If I have to create a new .so file, which cpp file in hailo-rpi5-examples should I use?
Can I convert hailo-rpi5-examples/cpp/yolo_hailortcpp.cpp into a .so file and apply it?

this link is yolo_hailortcpp.cpp.

The existing model used 80 labels, but the new model uses 1 label, so is it right to change the code? Is this mismatch likely to cause a segmentation fault?
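Changing the class count does change the shape of the tensors the post-process reads, which is exactly the kind of mismatch that can make a .so indexing with a hardcoded 80 read out of bounds. As a rough illustration (assuming the usual YOLOv5-seg head layout of 3 anchors per output and 32 mask coefficients; verify these numbers against your own model):

```python
def yolov5_seg_channels(num_classes: int,
                        num_anchors: int = 3,
                        num_mask_coeffs: int = 32) -> int:
    """Channels per detection output of a YOLOv5-seg head: each anchor
    predicts a box (4) + objectness (1) + class scores + mask coefficients."""
    per_anchor = 4 + 1 + num_classes + num_mask_coeffs
    return num_anchors * per_anchor

print(yolov5_seg_channels(80))  # stock 80-class COCO model -> 351
print(yolov5_seg_channels(1))   # single-label retrained model -> 114
```

A post-process compiled against 351-channel outputs that is handed 114-channel tensors will iterate past the end of the buffer, which matches the segmentation-fault symptom described above.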

Check out this doc for using a retrained detector (with fewer classes) on top of our examples:
hailo-rpi5-examples/doc/retraining-example.md at main · hailo-ai/hailo-rpi5-examples (github.com)

I followed the retraining manual for yolov5.
The link is below.

I didn’t use Docker when generating the training data.
Is this likely to cause the problem?

No, it will work as well, just a bit more steps on your side.

Please answer my questions one by one.

  1. I know that the YOLO models provided by hailo-rpi5-examples have 80 classes. However, I used only 1 class when creating my new model. Could this cause a problem?

I think I need to modify the libyolov5seg_post.so loaded by hailofilter, because the number of classes has changed.

Am I wrong?

I run the example in the link below.

  1. I put the model I created into the instance_segmentation.py example and ran it. However, it did not run. You advised me to modify the pipeline.

So I removed hailofilter from the pipeline. Then the video ran, but YOLO did not.

When this happens, please let me know which part of hailofilter I need to modify so that YOLO can run.