I'm trying to test a public model available in the Hailo Model Zoo repo.
So I downloaded the compiled version (.hef) and ran it with the hailonet element, but the post-processing is missing, so the pipeline doesn't overlay bounding boxes on the video. Hence the objective is to find the post-processing (.so) file so that it can be included in a hailofilter element (or another element) to get the bounding boxes on the video.
When I run this, it works: hailo-rpi5-examples/doc/basic-pipelines.md at main · hailo-ai/hailo-rpi5-examples · GitHub
Welcome to the Hailo Community!
I noticed you’re looking for post-processing information. Could you please specify which model you’re working with? This would help us guide you better.
For your reference, you can find our post-processing implementations in two locations:
- The Model Zoo repository (in the core/postprocessing directory): hailo_model_zoo/hailo_model_zoo/core/postprocessing at dacfc333ffdb51f0a8bceb5330e5339f606248d7 · hailo-ai/hailo_model_zoo · GitHub
- The TAPPAS framework files
Let us know which model you're interested in, and we'll be happy to help further.
@omria Thanks for the reply.
My end goal is to run gender, age, and emotion detection models in the same pipeline, but as of now I have only downloaded the detection model. This is the pipeline I'm using:
```
gst-launch-1.0 filesrc location="resources/PeopleWalking.mp4" name=source ! \
    queue name=source_queue_dec264 leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! \
    qtdemux ! h264parse ! avdec_h264 max-threads=2 ! \
    queue name=source_scale_q leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! \
    videoscale name=source_videoscale n-threads=2 ! \
    queue name=source_convert_q leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! \
    videoconvert n-threads=3 name=source_convert qos=false ! \
    video/x-raw, format=RGB, pixel-aspect-ratio=1/1 ! \
    queue name=detection_scale_q leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! \
    videoscale name=detection_videoscale n-threads=2 qos=false ! \
    queue name=detection_convert_q leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! \
    video/x-raw, pixel-aspect-ratio=1/1 ! \
    videoconvert name=detection_videoconvert n-threads=2 ! \
    queue name=detection_hailonet_q leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! \
    hailonet name=detection_hailonet hef-path=/home/pi/Documents/hailo-rpi5-examples/basic_pipelines/../resources/yolov6n.hef batch-size=2 nms-score-threshold=0.3 nms-iou-threshold=0.45 output-format-type=HAILO_FORMAT_TYPE_FLOAT32 force-writable=true ! \
    queue name=detection_hailofilter_q leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! \
    hailofilter name=detection_hailofilter so-path=/usr/lib/aarch64-linux-gnu/hailo/tappas/post_processes/libyolo_hailortpp_post.so qos=false ! \
    queue name=identity_callback_q leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! \
    identity name=identity_callback ! \
    queue name=hailo_display_hailooverlay_q leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! \
    hailooverlay name=hailo_display_hailooverlay ! \
    queue name=hailo_display_videoconvert_q leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! \
    videoconvert name=hailo_display_videoconvert n-threads=2 qos=false ! \
    queue name=hailo_display_q leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! \
    fpsdisplaysink name=hailo_display video-sink=xvimagesink sync=true text-overlay=false signal-fps-measurements=true
```
This works fine. Now I have downloaded the retinaface_mobilenet_v1.hef model and used the same post-processing .so file, but the overlay is not happening on the video.
Q1. The links that you shared are Python scripts; how do I convert them into a .so file and include it in the hailofilter element in the pipeline given above?
Q2. Will the same .so file work for different models, given that the model output will be in the same format and the .so file will take the same input?
Let me explain two approaches for handling post-processing with your models.
- Using Pre-Built TAPPAS Post-Processing Libraries: TAPPAS provides ready-made `.so` files for different models like YOLO and RetinaFace. Here's how to use them:
  - Check the available `.so` files in TAPPAS:

```
ls /usr/lib/aarch64-linux-gnu/hailo/tappas/post_processes/
```

  - Update your pipeline with the appropriate post-processor:

```
hailofilter name=detection_hailofilter so-path=/usr/lib/aarch64-linux-gnu/hailo/tappas/post_processes/libretinaface_post.so qos=false !
```
For multiple models (like gender and age detection), you’ll need multiple hailofilter elements with their respective post-processors.
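For instance, here is a rough structural sketch of chaining two inference stages, each with its own post-process, using `.so` names from the TAPPAS post_processes directory. The hef file names are placeholders, and the sketch omits the per-stage scale/convert/queue elements (and the face cropping) that a real multi-model pipeline would normally include:

```
# Structural sketch only: a detection stage followed by an attributes stage,
# each hailonet paired with its own hailofilter post-process.
# scrfd_10g.hef and face_attributes.hef are placeholder model names.
gst-launch-1.0 filesrc location=video.mp4 ! decodebin ! videoconvert ! video/x-raw,format=RGB ! \
    hailonet hef-path=scrfd_10g.hef force-writable=true ! \
    hailofilter so-path=/usr/lib/aarch64-linux-gnu/hailo/tappas/post_processes/libscrfd_post.so qos=false ! \
    hailonet hef-path=face_attributes.hef force-writable=true ! \
    hailofilter so-path=/usr/lib/aarch64-linux-gnu/hailo/tappas/post_processes/libface_attributes_post.so qos=false ! \
    hailooverlay ! videoconvert ! autovideosink
```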
- Creating Custom `.so` Files: If TAPPAS libraries don't meet your needs, you can convert Python scripts to `.so` files using Cython:
  - Install Cython (plus NumPy, which the example below imports):

```
pip install cython numpy
```

  - Create a Cython wrapper file (`post_process.pyx`):
```
# post_process.pyx
import numpy as np
cimport numpy as np

np.import_array()

cdef public void process(float[:] detections):
    # Replace this with your actual post-processing logic
    cdef int i
    for i in range(len(detections)):
        print(detections[i])

cdef public void process_detections(float* detection_data, int num_detections):
    # Example post-processing function: wrap the raw C buffer as a NumPy array
    cdef np.ndarray[np.float32_t, ndim=2] detections = np.asarray(
        <np.float32_t[:num_detections, :5]> detection_data
    )
    # Process each detection (example format: [x1, y1, x2, y2, score])
    for i in range(num_detections):
        x1, y1, x2, y2, score = detections[i]
        if score > 0.5:  # confidence threshold
            print(f"Detection {i}: bbox=[{x1}, {y1}, {x2}, {y2}], score={score}")
```
  - Create a setup file (`setup.py`):

```
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize
import numpy as np

extensions = [
    Extension(
        "post_process",
        ["post_process.pyx"],
        include_dirs=[np.get_include()],
        extra_compile_args=["-fPIC"],
        extra_link_args=["-shared"]
    )
]

setup(
    ext_modules=cythonize(extensions),
    include_dirs=[np.get_include()]
)
```
  - Build the shared object file:

```
python setup.py build_ext --inplace
```
  - Create a more complete post-processing implementation (`face_detection_post.pyx`).
  - Update your GStreamer pipeline to use the custom `.so`:
```
gst-launch-1.0 \
    filesrc location="video.mp4" ! \
    decodebin ! \
    videoconvert ! \
    video/x-raw,format=RGB ! \
    hailonet hef-path=retinaface_mobilenet_v1.hef ! \
    hailofilter so-path=./post_process.cpython-39-aarch64-linux-gnu.so ! \
    hailooverlay ! \
    videoconvert ! \
    autovideosink
```
Notes:
- Replace the post-processing logic in `process_face_detections()` with your specific requirements
- Adjust the detection format based on your model's output
- The exact filename of the `.so` will depend on your Python version and system architecture (see the snippet below for one way to check it)
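Since that filename varies, a quick way to find the exact name to pass to `so-path` (assuming you built in place in the current directory, as above) is:

```
# List the extension module produced by build_ext --inplace;
# the full name encodes the Python version and architecture.
ls post_process.cpython-*.so
```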
Choose Option 1 if your models are supported by TAPPAS, and Option 2 if you need custom post-processing logic.
Let me know which approach you’d prefer to try, and I’ll help you with the implementation details!
As of now I'll go with approach 1 and later shift to 2, but for 1 I have followed this link for running the basic example on RPi 5 Hailo, so it has the object detection pipeline, and I got these:

```
pi@raspberrypi:~/Downloads $ ls /usr/lib/aarch64-linux-gnu/hailo/tappas/post_processes/
cropping_algorithms    libdepth_estimation.so       libfacial_landmarks_post.so  libocr_post.so                libsemantic_segmentation.so  libyolov5seg_post.so
libcenterpose_post.so  libface_attributes_post.so   libmobilenet_ssd_post.so     libperson_attributes_post.so  libstream_id_tool.so         libyolov8pose_post.so
libclassification.so   libface_detection_post.so    libmspn_post.so              libre_id.so                   libyolo_hailortpp_post.so    post_processes_data
libdebug.so            libface_recognition_post.so  libnanodet_post.so           libscrfd_post.so              libyolo_post.so
```
So I don't have this libretinaface_post.so. Where do I download it, as well as the ones for other models for the future, like age, gender, and emotion? And another question: will the .so file for object detection not work for a face detection model?
Hey @avinash32.gahlowt,
It appears that `libretinaface_post.so` isn't available in the current TAPPAS library. After checking with R&D, we confirmed that TAPPAS includes only the `.so` files you see, so you'll need to create a custom one. Here's a step-by-step guide to help you do that:
1. Adapt the Provided C++ Files: You can start by exploring the `cpp` folder in the Hailo examples repository, which contains foundational code for post-processing logic. Modify these C++ files to fit your custom post-processing requirements (for example, adapting bounding box adjustments or adding face landmark detection logic).
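If you don't already have those C++ sources locally, one way to get them is to clone the TAPPAS repository and browse its existing post-process implementations (the path below is where they sit in recent TAPPAS releases and may differ in your version):

```
# Fetch the TAPPAS sources and look at the existing post-process code,
# e.g. yolo_hailortpp.cpp, which the example project below adapts.
git clone https://github.com/hailo-ai/tappas.git
ls tappas/core/hailo/libs/postprocesses/
```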
2. Use `compile_postprocess.sh` to Build the `.so` File: In the parent directory of the repository, you'll find a `compile_postprocess.sh` script. This script automates the setup and compilation using Meson and Ninja, turning your C++ files into a shared object (`.so`) file. Here's how to structure your project and use this script effectively:

Project Structure

Organize your project files as follows:

```
project/
├── src/
│   ├── remove_labels.cpp
│   ├── remove_labels.hpp
│   ├── yolo_hailortpp.cpp
│   ├── yolo_hailortpp.hpp
│   └── hailo_nms_decode.hpp
├── meson.build
└── compile_postprocess.sh
```
Sample meson.build Configuration

In `meson.build`, specify how the files should be compiled:

```
project('post_process', 'cpp')

cpp_args = ['-O2', '-fPIC']
cpp_link_args = ['-shared']

# Define source files for the shared library
src_files = [
    'src/remove_labels.cpp',
    'src/yolo_hailortpp.cpp'
]

# Build the shared library
shared_library('post_process',
    src_files,
    include_directories: include_directories('src'),
    cpp_args: cpp_args,
    link_args: cpp_link_args,
    install: true
)
```
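For reference, the script essentially drives Meson and Ninja for you; if you prefer to run the build by hand (assuming both tools are installed), roughly equivalent commands would be:

```
# Configure an optimized build directory, then compile the shared library.
meson setup build.release --buildtype=release
ninja -C build.release
```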
Build and Compile

Run the `compile_postprocess.sh` script to generate your `.so` file. Use the following command:

```
./compile_postprocess.sh release  # For release mode
```

After this step, the `.so` file (`libpost_process.so`) will be in the `build.release` directory.
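Before wiring it into the pipeline, it can be worth confirming that the library actually exports the post-process entry point you kept from the adapted C++ sources (the exact symbol name depends on the code you adapted; the snippet below simply greps for anything filter-like):

```
# List the dynamic symbols of the built library and look for the
# post-process entry point that hailofilter will call.
nm -D build.release/libpost_process.so | grep -i filter
```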
3. Integrate the `.so` File in Your GStreamer Pipeline: Finally, use the generated `.so` file in the `hailofilter` element of your GStreamer pipeline:

```
gst-launch-1.0 filesrc location=video.mp4 ! decodebin ! videoconvert ! \
    hailonet hef-path=model.hef ! hailofilter so-path=./build.release/libpost_process.so ! \
    hailooverlay ! videoconvert ! autovideosink
```
This approach allows you to create a tailored `.so` for post-processing, compatible with your GStreamer pipeline setup. Let me know if you need more detailed guidance on adapting the C++ logic or the compilation steps!