Hi,
Thanks for all the previous guidance! I’ve been working on deploying a custom YOLOv8-seg model for tire detection on Raspberry Pi 5 with Hailo-8. After overcoming several challenges during the compilation phase, I successfully generated an HEF file using hailomz. However, I’m now stuck at the deployment stage with the instance_segmentation pipeline.
I wanted to provide a comprehensive overview of the entire process, including the issues I faced and how I resolved them, to help identify what might be going wrong with the Raspberry Pi deployment.
Background & Journey
**Goal:** Deploy a custom YOLOv8s-seg model trained on 4 tire classes (front_left_tire, front_right_tire, rear_left_tire, rear_right_tire) on Raspberry Pi 5 with Hailo-8 for real-time instance segmentation.
**Initial Attempt:** Following the standard DFC workflow with ClientRunner API as suggested in earlier discussions. This led to multiple challenges that I’ll detail below.
**Current Status:** HEF file successfully compiled with hailomz, but encountering “HEF version not supported” error when trying to run it with GStreamerInstanceSegmentationApp on Raspberry Pi 5.
Compilation Journey (Ubuntu Docker Environment)
Initial Setup
**HEF File Information:**
Successfully compiled using hailomz:
```
hailomz compile yolov8s_seg \
    --ckpt best.onnx \
    --hw-arch hailo8 \
    --calib-path calib_images/parking_robot_tire_detection \
    --classes 4
```
**Compilation Attempts:**
**Attempt 1: Direct DFC compilation (FAILED)**
Following your initial guidance, I tried compiling using ClientRunner:
```python
from hailo_sdk_client import ClientRunner, CalibrationDataType

runner = ClientRunner(har='best.har')
runner.optimize(
    calib_data='/local/shared_with_docker/yolo_conversion/calib_images/parking_robot_tire_detection',
    data_type=CalibrationDataType.image_dir
)
hef = runner.compile()
```
**First issue - CalibrationDataType error:**
```
AttributeError: image_dir
```
Tried with automatic detection:
```python
runner.optimize('/local/shared_with_docker/yolo_conversion/calib_images/parking_robot_tire_detection')
```
**Second issue - Data type detection failed:**
```
[info] Found model with 3 input channels, using real RGB images for calibration
[info] Starting Model Optimization
…
ValueError: Couldn't detect CalibrationDataType
```
Despite having 445 JPG images in the directory, the optimizer couldn’t detect the data type.
**Solution: Converted images to NPY format**
```python
import cv2
import numpy as np
from pathlib import Path

image_dir = Path('calib_images/parking_robot_tire_detection')
output_dir = Path('calib_npy')
output_dir.mkdir(exist_ok=True)

image_files = sorted(image_dir.glob('*.jpg'))
for i, img_path in enumerate(image_files):
    img = cv2.imread(str(img_path))
    if img is not None:
        img = cv2.resize(img, (640, 640))
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        img = img.astype(np.float32) / 255.0  # Normalize to [0, 1]
        np.save(output_dir / f'calib_{i:04d}.npy', img)
```
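As a sanity check before re-running `optimize()`, the saved tensors can be reloaded and verified. A small sketch (whether the DFC actually expects normalized floats here or raw 0-255 values depends on the model's on-chip normalization setup, so this only checks consistency with the conversion script above):

```python
import numpy as np
from pathlib import Path

# Verify the converted calibration tensors: float32, (640, 640, 3), values in [0, 1].
# NOTE: whether optimize() wants normalized floats or raw uint8 ranges depends on
# the model's normalization configuration; this only checks internal consistency.
for npy_path in sorted(Path('calib_npy').glob('*.npy'))[:5]:
    arr = np.load(npy_path)
    assert arr.shape == (640, 640, 3), f"unexpected shape in {npy_path}"
    assert arr.dtype == np.float32, f"unexpected dtype in {npy_path}"
    assert 0.0 <= arr.min() and arr.max() <= 1.0, f"values out of range in {npy_path}"
```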
Then used NPY directory:
```python
runner.optimize(
    calib_data='/local/shared_with_docker/yolo_conversion/calib_npy',
    data_type=CalibrationDataType.npy_dir
)
```
**Optimization succeeded** with warnings:
```
[warning] Reducing optimization level to 0 (the accuracy won't be optimized and compression won't be used) because there's less data than the recommended amount (1024), and there's no available GPU
[info] Using dataset with 64 entries for calibration
Calibration: 100%|█████████████| 64/64 [00:18<00:00, 3.38entries/s]
[info] Model Optimization is done
```
**Compilation FAILED** with concat18 error:
```
[error] Mapping Failed (allocation time: 6s)
No successful assignments: concat18 errors:
Agent infeasible
[error] Failed to produce compiled graph
[error] BackendAllocatorException: Compilation failed
```
This is the same concat18 error we encountered initially when trying to parse the ONNX without proper end nodes.
**Attempt 2: hailomz compilation (SUCCESS)**
Switched to hailomz which succeeded without the concat18 error and generated an 18MB HEF file. This suggests hailomz handles YOLOv8-seg’s layer structure better than direct ClientRunner compilation.
**Key difference:** hailomz appears to have YOLOv8-seg specific optimizations that avoid the concat18 memory mapping issue.
Deployment Attempt (Raspberry Pi 5 Environment)
After successfully compiling the HEF with hailomz, I transferred it to Raspberry Pi 5 and attempted to run it with the existing instance_segmentation pipeline. This is where I’m currently stuck.
**HEF Parse Output:**
```
hailortcli parse-hef best.hef
Architecture HEF was compiled for: HAILO8
Network group name: yolov8s_seg, Multi Context - Number of contexts: 2
Network name: yolov8s_seg/yolov8s_seg
VStream infos:
Input yolov8s_seg/input_layer1 UINT8, NHWC(640x640x3)
Output yolov8s_seg/conv73 UINT8, NHWC(20x20x64)
Output yolov8s_seg/conv74 UINT8, NHWC(20x20x4)
Output yolov8s_seg/conv75 UINT8, NHWC(20x20x32)
Output yolov8s_seg/conv60 UINT8, FCR(40x40x64)
Output yolov8s_seg/conv61 UINT8, NHWC(40x40x4)
Output yolov8s_seg/conv62 UINT8, FCR(40x40x32)
Output yolov8s_seg/conv44 UINT8, FCR(80x80x64)
Output yolov8s_seg/conv45 UINT8, NHWC(80x80x4)
Output yolov8s_seg/conv46 UINT8, FCR(80x80x32)
Output yolov8s_seg/conv48 UINT8, FCR(160x160x32)
```
Configuration Files
**1. .env file:**
```bash
host_arch=arm
hailo_arch=hailo8
resources_path=resources
tappas_postproc_path=/usr/lib/aarch64-linux-gnu/hailo/tappas/post_processes
model_zoo_version=v2.14.0
hailort_version=4.20.0-1
tappas_version=3.31.0
virtual_env_name=venv_hailo_rpi_examples
server_url=http://dev-public.hailo.ai/2025_01
tappas_variant=hailo-tappas-core
# ===== Custom HEF Configuration =====
HEF_PATH=/home/argoon/hailo-rpi5-examples/custom_model/best.hef
LABELS_JSON=/home/argoon/hailo-rpi5-examples/local_resources/yolov8s_seg_custom.json
NETWORK_WIDTH=640
NETWORK_HEIGHT=640
# ====================================
```
**2. yolov8s_seg_custom.json (created based on parse-hef output):**
```json
{
    "iou_threshold": 0.45,
    "score_threshold": 0.3,
    "max_detections": 300,
    "image_dims": [640, 640],
    "regression_length": 64,
    "classes": 4,
    "labels": [
        "front_left_tire",
        "front_right_tire",
        "rear_left_tire",
        "rear_right_tire"
    ],
    "anchors": null,
    "meta_arch": "yolov8_seg",
    "output_format_type": "HAILO_FORMAT_TYPE_UINT8",
    "outputs_name": [
        "yolov8s_seg/conv73",
        "yolov8s_seg/conv74",
        "yolov8s_seg/conv75",
        "yolov8s_seg/conv60",
        "yolov8s_seg/conv61",
        "yolov8s_seg/conv62",
        "yolov8s_seg/conv44",
        "yolov8s_seg/conv45",
        "yolov8s_seg/conv46",
        "yolov8s_seg/conv48"
    ],
    "outputs_size": [20, 40, 80],
    "strides": [32, 16, 8],
    "mask_threshold": 0.5
}
```
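Before fighting the schema validator, the config can at least be checked for internal consistency. A small sketch; the field semantics (labels matching `classes`, each grid size times its stride reproducing `image_dims`) are my assumptions about how a post-process would interpret them:

```python
import json

# Hypothetical self-consistency check for the custom config above
# (field semantics are my assumptions, not a validated Hailo schema).
config_text = """
{
    "image_dims": [640, 640],
    "classes": 4,
    "labels": ["front_left_tire", "front_right_tire", "rear_left_tire", "rear_right_tire"],
    "outputs_size": [20, 40, 80],
    "strides": [32, 16, 8]
}
"""
cfg = json.loads(config_text)

# Number of labels must match the declared class count
assert len(cfg["labels"]) == cfg["classes"]

# Each output grid size times its stride must reproduce the input dimension
for size, stride in zip(cfg["outputs_size"], cfg["strides"]):
    assert size * stride == cfg["image_dims"][0]

print("config is internally consistent")
```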
**3. Modified instance_segmentation_tire.py:**
```python
# Key modifications from the original instance_segmentation.py:

# 1. Changed the detection filter from "person" to tire classes
TIRE_COLORS = {
    "front_left_tire": (255, 0, 0),    # Red
    "front_right_tire": (0, 255, 0),   # Green
    "rear_left_tire": (0, 0, 255),     # Blue
    "rear_right_tire": (255, 255, 0),  # Cyan
}

# 2. In the app_callback function:
#     if label in TIRE_COLORS:  # Changed from: if label == "person"
#         ... process tire detections

# 3. At the end of the file:
if __name__ == "__main__":
    import sys

    project_root = Path(__file__).resolve().parent.parent
    env_file = project_root / ".env"
    os.environ["HAILO_ENV_FILE"] = str(env_file)

    # Attempted to force config file loading
    config_file = project_root / "local_resources" / "yolov8s_seg_custom.json"
    if config_file.exists():
        os.environ["LABELS_JSON"] = str(config_file)
        print(f"Using config file: {config_file}")
    else:
        print(f"WARNING: Config file not found: {config_file}")

    user_data = user_app_callback_class()
    app = GStreamerInstanceSegmentationApp(app_callback, user_data)
    app.run()
```
Error
When running:
```bash
cd ~/hailo-rpi5-examples
source setup_env.sh
python basic_pipelines/instance_segmentation_tire.py \
    --hef-path custom_model/best.hef \
    --input /dev/video0
```
**Output:**
```
Using config file: /home/argoon/hailo-rpi5-examples/local_resources/yolov8s_seg_custom.json
Loading environment variables from /home/argoon/hailo-rpi5-examples/.env…
All required environment variables loaded.
Auto-detected Hailo architecture: hailo8
Traceback (most recent call last):
  File "/home/argoon/hailo-rpi5-examples/basic_pipelines/instance_segmentation_tire.py", line 124, in <module>
    app = GStreamerInstanceSegmentationApp(app_callback, user_data)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/argoon/hailo-rpi5-examples/venv_hailo_rpi_examples/lib/python3.11/site-packages/hailo_apps/hailo_app_python/apps/instance_segmentation/instance_segmentation_pipeline.py", line 60, in __init__
    raise ValueError("HEF version not supported; please provide a compatible segmentation HEF or config file.")
ValueError: HEF version not supported; please provide a compatible segmentation HEF or config file.
```
Critical Discovery: Post-Processing Library Missing
After deploying to Raspberry Pi 5, I discovered the root cause:
**YOLOv8-seg post-processing library does not exist!**
```bash
ls /usr/local/hailo/resources/so/*seg*
/usr/local/hailo/resources/so/libyolov5seg_postprocess.so # Only YOLOv5!
```
The pipeline automatically uses:
```
so-path=/usr/local/hailo/resources/so/libyolov5seg_postprocess.so
config-path=/usr/local/hailo/resources/json/yolov5m_seg.json
```
**JSON Schema Incompatibility:**
When I tried using a YOLOv8-format JSON based on parse-hef outputs, I got:
```
Input JSON is invalid
Invalid schema: #
Invalid keyword: required
json config file doesn't follow schema rules
```
**YOLOv5 vs YOLOv8 Structure:**
- YOLOv5: 4 outputs (3 detection + 1 proto), anchor-based, uses “input_shape”
- YOLOv8: 10 outputs (9 detection + 1 proto), anchor-free, uses “image_dims”
The YOLOv5 post-processing library cannot handle YOLOv8’s architecture.
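For context on why the 64-channel heads cannot be consumed by an anchor-based decoder: in YOLOv8 they are, by the usual Ultralytics convention (which I am assuming applies to this HEF), a DFL head with 4 box sides times 16 distance bins, where each side's distance is the expectation over a softmax. A numpy sketch of that decoding (dequantization of the UINT8 outputs would have to happen first):

```python
import numpy as np

def decode_dfl_distances(reg_map: np.ndarray, reg_max: int = 16) -> np.ndarray:
    """Decode a YOLOv8-style DFL regression map (H, W, 4*reg_max) into
    per-cell distances (H, W, 4) to the box's left/top/right/bottom edges,
    in units of the feature-map stride.

    This is a sketch of the standard anchor-free DFL decoding, assumed to
    match this HEF's heads; it is not a confirmed Hailo implementation.
    """
    h, w, _ = reg_map.shape
    logits = reg_map.reshape(h, w, 4, reg_max)
    # Softmax over the reg_max bins for each of the 4 box sides
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = exp / exp.sum(axis=-1, keepdims=True)
    # Expected bin index = distance in stride units
    bins = np.arange(reg_max, dtype=np.float32)
    return (probs * bins).sum(axis=-1)

# With all-zero logits the bin distribution is uniform, so each expected
# distance is mean(0..15) = 7.5 stride units.
dist = decode_dfl_distances(np.zeros((20, 20, 64), dtype=np.float32))
print(dist.shape, float(dist[0, 0, 0]))  # (20, 20, 4) 7.5
```

A YOLOv5 post-process, by contrast, multiplies sigmoid outputs against per-scale anchor boxes, so there is no slot in its pipeline for this bin-expectation step.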
Additional Attempts After Initial Question
**Attempt 1: Understanding the automatic HEF-JSON matching**
Initially tried various methods to pass config file:
- Setting `LABELS_JSON` in .env file
- Setting `os.environ["LABELS_JSON"]` in Python script
- Using `--config-file` argument (not supported)
- Using `--labels-json` argument (not recognized)

None of these worked; the unsupported CLI arguments failed with "unrecognized arguments" errors.
**Then analyzed the source code to understand why:**
```bash
# Checked GStreamerInstanceSegmentationApp internals
python3 << 'EOF'
import inspect
from hailo_apps.hailo_app_python.apps.instance_segmentation.instance_segmentation_pipeline import GStreamerInstanceSegmentationApp

init_source = inspect.getsource(GStreamerInstanceSegmentationApp.__init__)
print(init_source)
EOF
```
**Critical discovery - automatic pattern matching:**
```python
# The pipeline automatically matches the HEF filename to a JSON config!
hef_name = Path(self.hef_path).name
if INSTANCE_SEGMENTATION_MODEL_NAME_H8 in hef_name:     # checks if "yolov5m_seg" is in the filename
    self.config_file = get_resource_path(..., "yolov5m_seg.json")  # hardcoded JSON name!
elif INSTANCE_SEGMENTATION_MODEL_NAME_H8L in hef_name:  # checks if "yolov5n_seg" is in the filename
    self.config_file = get_resource_path(..., "yolov5n_seg.json")
else:
    raise ValueError("HEF version not supported...")
```
**Key insight:** The config file is **NOT passed as argument** - it’s **automatically selected based on HEF filename pattern!**
Confirmed the pattern strings:
```python
INSTANCE_SEGMENTATION_MODEL_NAME_H8: "yolov5m_seg"
INSTANCE_SEGMENTATION_MODEL_NAME_H8L: "yolov5n_seg"
```
**This explained why:**
- `best.hef` → Error: “HEF version not supported” (no pattern match)
- `yolov5m_seg_custom.hef` → Success: matches “yolov5m_seg” pattern, loads `yolov5m_seg.json`
Based on this automatic matching behavior, I renamed:
```bash
mv best.hef yolov5m_seg_custom.hef
```
This successfully bypassed the “HEF version not supported” error and the pipeline started!
**Result:**
```
Using config file: /usr/local/hailo/resources/json/yolov5m_seg.json
v4l2src device=/dev/video0 … [pipeline created successfully]
```
**Attempt 2: Replacing JSON with YOLOv8-format config**
Created yolov5m_seg.json with YOLOv8 output structure:
```bash
sudo cp /usr/local/hailo/resources/json/yolov5m_seg.json yolov5m_seg.json.backup  # Backed up original
sudo cp yolov8s_seg_custom.json /usr/local/hailo/resources/json/yolov5m_seg.json
```
**Result: FAILED with schema validation error**
```
Input JSON is invalid
Invalid schema: #
Invalid keyword: required
Invalid document: #
terminate called after throwing an instance of 'std::runtime_error'
what(): json config file doesn't follow schema rules
```
The YOLOv5 post-processing library (libyolov5seg_postprocess.so) expects YOLOv5-format JSON and cannot parse YOLOv8’s different structure.
**Confirmed: No YOLOv8-seg support**
```bash
ls /usr/local/hailo/resources/so/*seg*
/usr/local/hailo/resources/so/libyolov5seg_postprocess.so # Only this exists
```
Questions
1. **Why couldn't ClientRunner detect JPG calibration images automatically?** The directory had 445 valid JPG files, but I got a "Couldn't detect CalibrationDataType" error. Is NPY format required, or is there a way to use JPG directly?

2. **Why does ClientRunner compilation fail with the concat18 error while hailomz succeeds?** Is there a fundamental difference in how they handle YOLOv8-seg layer mapping?

3. **Is there a YOLOv8-seg post-processing library available?** Or is YOLOv8-seg simply not supported yet on Raspberry Pi 5, despite being compilable with hailomz?

4. **What's the recommended approach for deploying YOLOv8-seg on Raspberry Pi 5?**
   - Option A: Write custom post-processing in C++ following the guide?
   - Option B: Use Python callback post-processing?
   - Option C: Wait for official YOLOv8-seg library support?

   Note: YOLOv8-seg is required for our use case; we cannot fall back to detection-only or older architectures.

5. **If custom post-processing is needed, can you provide guidance on:**
   - How to parse the 10 output tensors from the YOLOv8-seg HEF?
   - How to decode the proto masks (conv48)?
   - How to integrate it with GStreamerInstanceSegmentationApp?

6. **Is the hailomz-compiled YOLOv8-seg HEF fundamentally compatible with Raspberry Pi deployment?** Or does hailomz compilation create HEFs that require different runtime support than what's available on RPi5?

7. **Can the existing libyolov5seg_postprocess.so be modified or extended to support YOLOv8-seg?** Or is a completely new post-processing library required due to the architectural differences?

8. **Please verify our analysis:** Did we correctly understand the pipeline's behavior?
   - Is it true that config file selection is based purely on HEF filename pattern matching?
   - Is it correct that there is no way to override the hardcoded "yolov5m_seg.json" selection?
   - Did we miss any configuration options or environment variables that could help?

We want to make sure we didn't overlook something obvious before pursuing custom post-processing development.
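For question 5, in case it helps frame an answer: my current understanding of how the proto masks would be combined, assuming conv48 plays the usual YOLOv8-seg prototype role (and ignoring dequantization and crop-to-box), is roughly:

```python
import numpy as np

def combine_proto_masks(coeffs: np.ndarray, proto: np.ndarray,
                        threshold: float = 0.5) -> np.ndarray:
    """Sketch of the usual YOLOv8-seg mask assembly (an assumption about
    conv48's role, not confirmed Hailo runtime behavior): each detection's
    32 mask coefficients are linearly combined with the 32 proto channels,
    passed through a sigmoid, then thresholded.

    coeffs: (num_detections, 32) mask coefficients per detection
    proto:  (160, 160, 32) prototype masks (e.g. dequantized conv48)
    returns boolean masks of shape (num_detections, 160, 160)
    """
    h, w, c = proto.shape
    logits = coeffs @ proto.reshape(h * w, c).T  # (N, H*W)
    masks = 1.0 / (1.0 + np.exp(-logits))        # sigmoid
    return (masks >= threshold).reshape(-1, h, w)

# Toy usage: 2 detections against random prototypes
rng = np.random.default_rng(0)
masks = combine_proto_masks(rng.normal(size=(2, 32)),
                            rng.normal(size=(160, 160, 32)))
print(masks.shape, masks.dtype)  # (2, 160, 160) bool
```

Please correct this if the actual decoding differs.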
Environment
**Compilation Environment:**
- Platform: Ubuntu 22.04 with Docker (hailo_ai_sw_suite_2025-10-docker)
- Hailo DFC: 3.33.0
- Calibration images: 445 images (64 used for optimization)
**Deployment Environment:**
- Platform: Raspberry Pi 5 with Hailo-8
- HailoRT: 4.20.0-1
- Tappas: 3.31.0
- Model Zoo: v2.14.0
**Model Details:**
- Architecture: YOLOv8s-seg
- Classes: 4 custom classes (tire detection: front_left_tire, front_right_tire, rear_left_tire, rear_right_tire)
- Input size: 640x640x3
- HEF size: 18MB
- Compiled with: hailomz (after ClientRunner failed with concat18 error)
Any guidance on how to properly configure and run YOLOv8-seg on Raspberry Pi 5 would be greatly appreciated!
Thank you for your continued support!