Possible to change model dynamically in GStreamer

I have my Hailo app running on a Raspberry Pi web server and streaming output to the client. I want to be able to switch models dynamically at runtime. For instance, when the user presses the ‘pose estimation’ button, the pose estimation model is run, and when the user presses ‘object detection’, the object detection model is run.
Does Hailo allow the GStreamer pipeline parameters (e.g. HEF path and post_processing_so) to be changed at runtime?

Hi @mgreiner79, welcome to the community!
It is generally possible; for example, we have pipelines that consist of more than a single network and switch dynamically as needed, although the models are known in advance. The ALPR (Automatic License Plate Recognition) pipeline is such a case.

Can you provide a link to the code where this was implemented?
I have tried implementing it, and when I re-configure the GStreamer pipeline's hailonet at runtime, I get this error:
"WARNING **: 12:25:55.239: The network was already configured so changing the HEF path will not take place!"

If I set the GStreamer element state to NULL before I change the settings, then I get an error:
"INFO:root:Pipeline state changed from null to ready
Segmentation fault"

Any advice on what could be going wrong?

Sure:
tappas/apps/h8/gstreamer/general/license_plate_recognition at master · hailo-ai/tappas

Thanks @Nadav.
So I see the pipeline was all constructed from a pipeline string.
It appears there are three different hailonets in the pipeline.
I'm trying to understand what the key parameters are to make this possible.

Here is one example from the code:
`hailonet hef-path=$LICENSE_PLATE_DETECTION_HEF vdevice-key=1 scheduling-algorithm=1 scheduler-threshold=5 scheduler-timeout-ms=100`

I am guessing the keys here are scheduling-algorithm, scheduler-threshold, and scheduler-timeout-ms.

Is there any further documentation on what parameters hailonet takes?

I found some docs on HailoRT demonstrating the multi_process_service, but what is shown in the license plate example appears to be different. In the multi_process_service case, a service needs to be running in order for the communication to work.

How about the license plate example, which uses the scheduler? Is a background service needed there?

Any additional documentation would be greatly appreciated.

Hi @mgreiner79, your analysis is correct.
There are 3 hailonet executions. In general (naive) operation, each network would like to own a full device. In order to share the same device, we tell hailonet to use the scheduler; this is what vdevice-key is for: a group of different Hailo processes can share the same device if they use the same key. The rest are parameters for the scheduler's operation. They are listed in the HailoRT docs (Model Scheduler). I also like using GStreamer's built-in help: gst-inspect-1.0 hailonet
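As a concrete illustration of the pattern above (the helper name is mine, not from the TAPPAS code): the scheduler-related properties can be collected as keyword arguments and joined into the element description, the same way this thread later builds an `add_param` string. Property names can be checked against `gst-inspect-1.0 hailonet`.

```python
# Hypothetical helper: render hailonet properties into a gst-launch
# element description string (underscores become the dashed property names)
def hailonet_desc(hef_path: str, **props) -> str:
    extra = " ".join(f"{k.replace('_', '-')}={v}" for k, v in props.items())
    return f"hailonet hef-path={hef_path} {extra}".rstrip()

desc = hailonet_desc(
    "lp_detection.hef",
    vdevice_key=1,
    scheduling_algorithm=1,
    scheduler_threshold=5,
    scheduler_timeout_ms=100,
)
# desc == "hailonet hef-path=lp_detection.hef vdevice-key=1 "
#         "scheduling-algorithm=1 scheduler-threshold=5 scheduler-timeout-ms=100"
```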

Yes, there is a basic service that runs in the background; the details are also in the HailoRT docs (Multi-Process Service).
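For reference, a process can opt into the service from Python through HailoRT environment variables; a minimal sketch using the same variables and values that appear in the pipeline setup later in this thread:

```python
import os

# Opt this process into the HailoRT multi-process service and pick a
# shared virtual-device group id (processes with the same id share a device)
os.environ["HAILORT_MULTI_PROCESS_SERVICE"] = "1"
os.environ["HAILORT_GROUP_ID"] = "1"
```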

Ok, so I have it set up so that it at least doesn't crash when starting.
Here is my pipeline string as a variable in Python:

pipeline_str = (
"libcamerasrc name=source "
"! capsfilter name=capsfilter caps=\"video/x-raw,format=RGB,width=640,height=480,framerate=30/1\" "
"! queue name=queue_source "
"! tee name=t "

"t. ! queue name=stream_branch_queue "
"! videoconvert name=stream_videoconvert "
"! capsfilter name=stream_capsfilter caps=\"video/x-raw,format=RGB\" "
"! identity name=stream_identity "
"! input-selector name=selector "
"! appsink name=appsink emit-signals=true sync=false drop=true max-buffers=1 "

"t. ! queue name=pose_estimation_scale_q "
"! videoscale name=pose_estimation_videoscale n-threads=2 qos=false "
"! queue name=pose_estimation_convert_q "
"! capsfilter name=pose_estimation_caps caps=\"video/x-raw,pixel-aspect-ratio=1/1\" "
"! videoconvert name=pose_estimation_videoconvert n-threads=2 "
"! queue name=pose_estimation_hailonet_q "
f"! hailonet name=pose_estimation_hailonet hef-path=\"{hef_path.resolve()}\" batch-size=2 force-writable=true multi-process-service=true vdevice-group-id={group_id} scheduling-algorithm=1 "
"! queue name=pose_estimation_hailofilter_q "
f"! hailofilter name=pose_estimation_hailofilter so-path=\"{post_process_so.resolve()}\" qos=false "
"! selector.sink_1 "

"t. ! queue name=detection_scale_q "
"! videoscale name=detection_videoscale n-threads=2 qos=false "
"! queue name=detection_convert_q "
"! capsfilter name=detection_caps caps=\"video/x-raw,pixel-aspect-ratio=1/1\" "
"! videoconvert name=detection_videoconvert n-threads=2 "
"! queue name=detection_hailonet_q "
f"! hailonet name=detection_hailonet hef-path=\"{hef_path1.resolve()}\" batch-size=2 force-writable=true {add_param} multi-process-service=true vdevice-group-id={group_id} scheduling-algorithm=1 "
"! queue name=detection_hailofilter_q "
f"! hailofilter name=detection_hailofilter so-path=\"{post_process_so1.resolve()}\" qos=false "
"! selector.sink_2 "
)

It starts, and the raw stream works, but when I try the pose estimation branch, the video freezes on the first frame.

Any ideas?

When I show debug info from my input-selector, I see this message when trying to switch streams:
"input-selector gstinputselector.c:882:gst_input_selector_wait_running_time:selector:sink_1 Waiting for active streams to advance. 0:00:11.998474083 >= 0:00:11.998474083"

So I can manage to get it running for a little while with multiple streams if I change some parameters like the buffer sizes of the queues, but it is not consistent. Sometimes I can get it to run for a minute or so; other times it freezes after one frame. The behavior is very inconsistent and hard to track down.
@Nadav, are there any parameters I should know about for the HailoRT multi-process service?

I also noticed these in the logs
"[2024-12-16 08:36:33.732] [4438] [HailoRT] [info] [hef.cpp:1847] [get_network_group_and_network_name] No name was given. Addressing all networks of default network_group: yolov8s
[2024-12-16 08:36:33.732] [4438] [HailoRT] [info] [hef.cpp:1847] [get_network_group_and_network_name] No name was given. Addressing all networks of default network_group: yolov8s"

Is it necessary to set the network_group names?

After several sleepless nights, I finally identified the problem: my input-selector was trying to synchronize the frames and wasn't able to, presumably due to a lack of timestamps.
So if I just tell it not to synchronize, it works.
Here's the pipeline:

import os
from pathlib import Path

current_path = os.path.dirname(os.path.abspath(__file__))
# Define a common group_id
group_id = "1"
# Optionally set environment variables
os.environ["HAILORT_GROUP_ID"] = group_id
os.environ["HAILORT_MULTI_PROCESS_SERVICE"] = "1"
os.environ["HAILORT_LOGGER_PATH"] = f"{Path(current_path).resolve()}/hailo_log.log"
os.environ["HAILORT_CONSOLE_LOGGER_LEVEL"] = "info"
# os.environ["GST_DEBUG"] = "input-selector:5,hailonet:5,hailofilter:5"


config = {
    "hef_path": "../resources/yolov8s_pose_h8l.hef",
    "post_process_so": "../resources/libyolov8pose_postprocess.so",
    "post_process_function": "filter"
}


hef_path = Path(os.path.join(current_path, config.get("hef_path", "")))
post_process_so = Path(os.path.join(current_path, config.get("post_process_so", "")))
post_process_function = config.get("post_process_function", "")


nms_score_threshold = 0.3
nms_iou_threshold = 0.45

thresholds = {
    "nms-score-threshold" : nms_score_threshold,
    "nms-iou-threshold": nms_iou_threshold,
    "output-format-type": "HAILO_FORMAT_TYPE_FLOAT32"
}
        

object_detection_config = {
    "hef_path": "../resources/yolov8s_h8l.hef",
    "post_process_so": "../resources/libyolo_hailortpp_postprocess.so",
    "post_process_function": "filter",
    "additional_params": thresholds
}

hef_path1 = Path(os.path.join(current_path, object_detection_config.get("hef_path", "")))
post_process_so1 = Path(os.path.join(current_path, object_detection_config.get("post_process_so", "")))
post_process_function1 = object_detection_config.get("post_process_function", "")

add_param = " ".join([f"{k}={v}" for k,v in thresholds.items()])


pipeline_str = (
"libcamerasrc name=source "
"! capsfilter name=capsfilter caps=\"video/x-raw,format=RGB,width=640,height=480,framerate=30/1\" "
"! queue name=queue_source max-size-buffers=4 leaky=upstream "
"! tee name=t "

"t. ! queue name=stream_branch_queue max-size-buffers=4 leaky=upstream "
"! videoconvert name=stream_videoconvert "
"! capsfilter name=stream_capsfilter caps=\"video/x-raw,format=RGB\" "
"! identity name=stream_identity "
"! input-selector name=selector sync-streams=false "
"! appsink name=appsink emit-signals=true sync=false drop=true max-buffers=1 "

"t. ! queue name=pose_estimation_scale_q max-size-buffers=4 leaky=upstream "
"! videoscale name=pose_estimation_videoscale n-threads=2 qos=false "
"! queue name=pose_estimation_convert_q max-size-buffers=4 leaky=upstream "
"! capsfilter name=pose_estimation_caps caps=\"video/x-raw,pixel-aspect-ratio=1/1\" "
"! videoconvert name=pose_estimation_videoconvert n-threads=2 "
"! queue name=pose_estimation_hailonet_q max-size-buffers=4 leaky=upstream "
f"! hailonet name=pose_estimation_hailonet hef-path=\"{hef_path.resolve()}\" batch-size=2 force-writable=true multi-process-service=true vdevice-group-id={group_id} scheduling-algorithm=1 scheduler-threshold=2 "
"! queue name=pose_estimation_hailofilter_q max-size-buffers=4 leaky=upstream "
f"! hailofilter name=pose_estimation_hailofilter so-path=\"{post_process_so.resolve()}\" qos=false "
"! selector.sink_1 "

"t. ! queue name=detection_scale_q max-size-buffers=4 leaky=upstream "
"! videoscale name=detection_videoscale n-threads=2 qos=false "
"! queue name=detection_convert_q max-size-buffers=4 leaky=upstream "
"! capsfilter name=detection_caps caps=\"video/x-raw,pixel-aspect-ratio=1/1\" "
"! videoconvert name=detection_videoconvert n-threads=2 "
"! queue name=detection_hailonet_q max-size-buffers=4 leaky=upstream "
f"! hailonet name=detection_hailonet hef-path=\"{hef_path1.resolve()}\" batch-size=2 force-writable=true {add_param} multi-process-service=true vdevice-group-id={group_id} scheduling-algorithm=1 scheduler-threshold=2 "
"! queue name=detection_hailofilter_q max-size-buffers=4 leaky=upstream "
f"! hailofilter name=detection_hailofilter so-path=\"{post_process_so1.resolve()}\" qos=false "
"! selector.sink_2 "
)

Now it's time to clean up the code and make it modular.
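For anyone following along: with this pipeline, the actual runtime switch is just a matter of repointing the input-selector's active-pad, no pipeline rebuild needed. A minimal sketch (the pad names match the pipeline string above; the mapping and helper names are mine):

```python
# Hypothetical mapping from a UI button to the input-selector pad
# its branch feeds: the direct stream branch lands on sink_0, the
# pose estimation branch on sink_1, the detection branch on sink_2
PAD_FOR_MODEL = {
    "raw": "sink_0",
    "pose_estimation": "sink_1",
    "object_detection": "sink_2",
}

def pad_name_for(model: str) -> str:
    """Resolve which selector pad carries the given model's branch."""
    return PAD_FOR_MODEL[model]

def switch_model(selector, model: str) -> None:
    """Repoint the input-selector at runtime (no state change required)."""
    pad = selector.get_static_pad(pad_name_for(model))
    selector.set_property("active-pad", pad)
```

A button handler would then call something like `switch_model(pipeline.get_by_name("selector"), "pose_estimation")` instead of reconfiguring hailonet.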