GStreamer vs run_inference_pipeline

Hey!

I am new here and I am trying to get a better understanding of the hailo_apps repo.
In particular, my question concerns the two types of project architectures in the python dir: the GStreamer approach used in the pipeline_apps, and the seemingly more manual run_inference_pipeline used in the standalone_apps.

When developing use cases for ourselves, is the pipeline_apps or the standalone_apps approach recommended?
I see that pipeline_apps supports RTSP as an input source; is there a way to get RTSP inputs working for standalone_apps?
If I want to use techniques from standalone_apps in pipeline_apps, has any work been done on this, such as using ByteTrack with pipeline_apps's detection?

Thank you in advance!

Hi @user481 ,

  1. For first steps with Hailo, we would recommend the GStreamer pipeline apps.
  2. They include a tracker: hailo-apps/hailo_apps/python/core/gstreamer/gstreamer_helper_pipelines.py at main · hailo-ai/hailo-apps · GitHub
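To make the tracker point above concrete, here is a minimal sketch of how a tracker stage can be expressed as a GStreamer sub-pipeline string, in the spirit of the helpers in gstreamer_helper_pipelines.py. The hailotracker element is part of TAPPAS; the exact property values below are illustrative defaults, not values taken from the repo, so verify them against the helper file:

```python
# Sketch: build a tracker sub-pipeline string, modeled on the helper
# functions in gstreamer_helper_pipelines.py. The hailotracker element
# and its properties come from TAPPAS; the values here are assumptions.

def tracker_pipeline(class_id: int = 1) -> str:
    """Return a GStreamer sub-pipeline string that adds a hailotracker stage."""
    return (
        f"hailotracker name=hailo_tracker class-id={class_id} "
        f"kalman-dist-thr=0.8 iou-thr=0.9 init-iou-thr=0.7 "
        f"keep-new-frames=2 keep-tracked-frames=15 keep-lost-frames=2 ! "
        f"queue name=tracker_q leaky=no max-size-buffers=3"
    )

print(tracker_pipeline())
```

Such a string is then spliced between the detection and overlay stages of the full pipeline description.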

Hey @Michael

Thank you, this was helpful.
But I still want to ask about my original question on a more fundamental level:
"what is the difference between the two types of project architectures in the python dir - namely the GStreamer approach used in the pipeline_apps and the seemingly more manual run_inference_pipeline used in the standalone_apps"

Does the run_inference_pipeline approach seen in standalone_apps not use GStreamer? If so, what does it use for video decoding?

Hi @user481 ,

  1. The pipeline apps use GStreamer, and specifically Hailo's custom elements, known as TAPPAS: tappas/docs/TAPPAS_architecture.rst at master · hailo-ai/tappas · GitHub. Those elements access the HailoRT API (which interacts with the hardware via a driver). In addition, there are C++ post-processes and Python bindings for accessing the HailoRT API.
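A minimal sketch of what a pipeline-apps style GStreamer description looks like may help. hailonet, hailofilter, and hailooverlay are TAPPAS elements; the HEF and post-process .so paths below are placeholders, not real repo paths. This also shows how RTSP fits in: the input is just a different source sub-pipeline, while the inference stages stay the same:

```python
# Sketch of a pipeline-apps style GStreamer description string.
# hailonet / hailofilter / hailooverlay are TAPPAS elements;
# the HEF path and post-process .so path are placeholders.

def detection_pipeline(source: str, hef_path: str, postprocess_so: str) -> str:
    return (
        f"{source} ! "
        f"videoconvert ! "
        f"hailonet hef-path={hef_path} ! "          # inference on the device via HailoRT
        f"hailofilter so-path={postprocess_so} ! "  # C++ post-process (e.g. detection decoding)
        f"hailooverlay ! "                          # draw results on the frame
        f"videoconvert ! autovideosink"
    )

# An RTSP input is just a different source sub-pipeline:
rtsp_source = ("rtspsrc location=rtsp://camera/stream ! "
               "rtph264depay ! h264parse ! avdec_h264")
print(detection_pipeline(rtsp_source, "model.hef", "libpostprocess.so"))
```

The design point is that GStreamer owns decoding, buffering, and threading, and the Hailo-specific work happens inside the TAPPAS elements.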

  2. The standalone apps use only the C++ & Python bindings (no GStreamer/TAPPAS) to interact directly with HailoRT.
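As a rough illustration of the standalone flow, here is a hedged sketch using the hailo_platform Python bindings: no GStreamer is involved, and video decoding happens separately in your own code (e.g. with OpenCV), with decoded frames fed to HailoRT. The class and method names below follow the public HailoRT Python API, but verify them against your installed hailort version before relying on them; "model.hef" and the frame format are placeholders:

```python
# Hedged sketch of the standalone-apps flow: interact with HailoRT directly
# through the hailo_platform Python bindings, with no GStreamer/TAPPAS.
# API names follow the public HailoRT Python API; verify against your
# installed hailort version. Video decoding is done separately (e.g. OpenCV).

def run_standalone_inference(hef_path: str, frames):
    """Yield raw inference results for an iterable of preprocessed frames.

    Each frame is assumed to be a dict mapping input vstream name -> numpy array.
    """
    # Imported inside the function so this sketch can be loaded without
    # HailoRT installed.
    from hailo_platform import (HEF, VDevice, ConfigureParams,
                                HailoStreamInterface, InferVStreams,
                                InputVStreamParams, OutputVStreamParams)

    hef = HEF(hef_path)
    with VDevice() as target:
        params = ConfigureParams.create_from_hef(
            hef, interface=HailoStreamInterface.PCIe)
        network_group = target.configure(hef, params)[0]
        in_params = InputVStreamParams.make(network_group)
        out_params = OutputVStreamParams.make(network_group)
        with InferVStreams(network_group, in_params, out_params) as pipeline:
            with network_group.activate():
                for frame in frames:
                    yield pipeline.infer(frame)
```

So to answer the decoding question directly: in the standalone approach, nothing in the Hailo stack decodes video for you; the application supplies decoded frames itself.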
