GStreamer parallel inference

Hi all,

Running parallel inference in GStreamer pipelines used to be straightforward:
I was able to run this pipeline multiple times against the same device:

gst-launch-1.0 -e videotestsrc ! hailonet hef-path=model/yolov7_tiny_anymos.hef is-active=true device-count=0  ! fakesink

Now (version 4.17.1), the second pipeline I start runs into the following error:

[HailoRT] [error] CHECK failed - Failed to create vdevice. there are not enough free devices. requested: 1, found: 0
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_OUT_OF_PHYSICAL_DEVICES(74)
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_OUT_OF_PHYSICAL_DEVICES(74)
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_OUT_OF_PHYSICAL_DEVICES(74)
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_OUT_OF_PHYSICAL_DEVICES(74)
CHECK_EXPECTED_AS_STATUS failed with status=74

I did find the post "How do I run multiple gstreamer pipeline in parallel?" Is it still valid? Is hailort.service now mandatory for running inference in parallel?

The Multi-Process Service has been available since HailoRT v4.10.0, and I do not think the behavior has changed: some piece of software or a service has to manage access to the hardware when multiple processes use it.
If you run multiple networks within a single process, the Model Scheduler can handle that without the service.
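
For the multi-process case (your two gst-launch invocations), here is a minimal sketch, assuming hailort.service was installed together with the HailoRT package and that your hailonet build exposes a multi-process-service property (run gst-inspect-1.0 hailonet to confirm both on your version):

# start the HailoRT Multi-Process Service once on the machine
sudo systemctl enable --now hailort.service

# launch each pipeline with the vdevice shared through the service
gst-launch-1.0 -e videotestsrc ! hailonet hef-path=model/yolov7_tiny_anymos.hef multi-process-service=true ! fakesink

For the single-process case, the Model Scheduler time-shares one vdevice between several hailonet elements. A sketch with two networks in one pipeline, grouped onto the same vdevice via a shared vdevice-key (net_a.hef and net_b.hef are placeholder names, and vdevice-key is likewise a property to verify with gst-inspect-1.0 on your version):

gst-launch-1.0 -e videotestsrc ! tee name=t \
    t. ! queue ! hailonet hef-path=net_a.hef vdevice-key=1 ! fakesink \
    t. ! queue ! hailonet hef-path=net_b.hef vdevice-key=1 ! fakesink

With the scheduler in control, the usual advice is to leave is-active at its default, since the scheduler itself decides which network is active at any moment.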

Have a look at the HailoRT documentation in the Developer Zone; the online version makes it easy to compare releases. The Running Inference section describes all of the options.

Hailo Developer Zone: HailoRT v4.17.0