I’ve been developing a product that uses the yolov8n-pose model.
This model is not in the Hailo Model Zoo, so I parsed, optimized, and compiled it myself using a .yaml file.
I successfully generated the .hef file. However, when I run inference with this model from Python via the GStreamer library, I get the error message shown below.
Running...
terminate called after throwing an instance of 'std::invalid_argument'
what(): Number should be between 0.0 to 1.0.
Aborted
In more detail: when I first launch my GStreamer code it runs fine (the camera window appears), but as soon as a person enters the camera’s view, the program immediately crashes with the error above.
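My working hypothesis (not confirmed) is that the confidence outputs of my compiled model are raw logits rather than sigmoid-activated scores, so the first real detection feeds a value outside [0, 1] into the postprocess and trips the "Number should be between 0.0 to 1.0" check. A minimal sketch of what I mean, assuming plain NumPy (the sample values are made up):

```python
import numpy as np

def sigmoid(x):
    # Map raw logits to the open interval (0, 1); score-range checks
    # like the one in the error message expect values in [0, 1].
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical raw values from a confidence branch (e.g. the 20x20x1 output).
raw_conf = np.array([-3.2, 0.0, 5.7], dtype=np.float32)

# Raw logits can fall outside [0, 1]; their sigmoid never does.
logits_in_range = bool(raw_conf.min() >= 0.0 and raw_conf.max() <= 1.0)
scores_in_range = bool(np.all((sigmoid(raw_conf) > 0.0) & (sigmoid(raw_conf) < 1.0)))
```

If this is the cause, it would point at a mismatch between my parsed end nodes and what the postprocess .so expects, rather than at the GStreamer code itself.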
Runtime Environment:
- Raspberry Pi CM5 (with CM5 IO board)
- Hailo-8 M.2 AI acceleration module
- Debian Linux
- the official postprocess .so file from Hailo’s GitHub
Additionally, when I manually parsed, optimized, and compiled the yolov8s-pose model, which is in the Hailo Model Zoo, it fails in the same way. However, the precompiled yolov8s-pose.hef provided by the Hailo Model Zoo works correctly.
I parsed, optimized, and compiled with the commands below.
Parsing
hailomz parse --yaml ./yolov8n_pose.yaml --ckpt ./yolov8n-pose.onnx --hw-arch hailo8
Optimizing
hailo optimize --hw-arch hailo8 --use-random-calib-set ./yolov8n_pose.har
Compiling
hailomz compile --yaml ./yolov8n_pose.yaml --hw-arch hailo8 --har ./yolov8n_pose_optimized.har
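As an aside on the optimize step: I understand `--use-random-calib-set` will hurt quantized accuracy, though I wouldn’t expect it alone to cause the range error. When switching to real calibration data, I build a small set like this (a sketch with random frames as a stand-in for real camera captures; I believe the DFC’s `hailo optimize` takes a calibration file via `--calib-set-path`, but please check `hailo optimize --help`):

```python
import numpy as np
from pathlib import Path

# Build a small calibration set as an .npy file of NHWC uint8 frames.
# 640x640x3 matches the model's input shape; replace the random frames
# with real captures for meaningful quantization statistics.
calib = np.random.randint(0, 256, size=(64, 640, 640, 3), dtype=np.uint8)
out_path = Path("calib_set.npy")
np.save(out_path, calib)
```

The file could then be passed to the optimize step instead of `--use-random-calib-set`.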
yolov8n_pose.yaml
base:
- base/yolov8_pose.yaml
network:
  network_name: yolov8n_pose
paths:
  alls_script: /local/shared_with_docker/yolov8n_pose.alls
  network_path:
  - ./yolov8n-pose.onnx
  url: https://hailo-model-zoo.s3.eu-west-2.amazonaws.com/PoseEstimation/yolov8/yolov8m/pretrained/2023-06-11/yolov8m_pose.zip
parser:
  nodes:
  - null
  - - /model.22/cv2.2/cv2.2.2/Conv
    - /model.22/cv3.2/cv3.2.2/Conv
    - /model.22/cv4.2/cv4.2.2/Conv
    - /model.22/cv2.1/cv2.1.2/Conv
    - /model.22/cv3.1/cv3.1.2/Conv
    - /model.22/cv4.1/cv4.1.2/Conv
    - /model.22/cv2.0/cv2.0.2/Conv
    - /model.22/cv3.0/cv3.0.2/Conv
    - /model.22/cv4.0/cv4.0.2/Conv
info:
  task: pose estimation
  input_shape: 640x640x3
  output_shape: 20x20x64, 20x20x1, 20x20x51, 40x40x64, 40x40x1, 40x40x51, 80x80x64, 80x80x1, 80x80x51
  operations: 9.2G
  parameters: 3.6M
  framework: pytorch
  training_data: coco keypoints train2017
  validation_data: coco keypoints val2017
  eval_metric: mAP
  full_precision_result: 50.4
  source: https://github.com/ultralytics/ultralytics
  license_url: https://github.com/ultralytics/ultralytics/blob/main/LICENSE
  license_name: AGPL-3.0
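For anyone comparing, the per-branch channel counts in the yaml are consistent with the standard YOLOv8-pose head: the 64-channel branches are the DFL box regression (4 box sides × 16 bins), the 1-channel branches are the single-class (person) score, and the 51-channel branches are the 17 COCO keypoints × (x, y, confidence). A quick arithmetic check:

```python
# Sanity-check the yolov8n-pose output channel counts listed in the yaml.
reg_max = 16                   # DFL bins per box side in YOLOv8
box_channels = 4 * reg_max     # cv2 branches: 20x20x64, 40x40x64, 80x80x64
cls_channels = 1               # cv3 branches: single "person" class
keypoints = 17                 # COCO keypoint layout
kpt_channels = keypoints * 3   # cv4 branches: (x, y, confidence) per keypoint
```

So the parsed end nodes themselves look right; the question is why the scores coming out of them crash the postprocess.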
I also have the yolov8_pose.yaml and yolov8n_pose.alls files.