We are implementing the Hailo Raspberry Pi 5 template and using detection.py with our own .hef model. Everything works fine, but we need to process two separate USB camera inputs using GStreamer.
We found references to multi-stream processing but haven’t been able to implement it successfully.
Is there a straightforward way to modify detection.py or other files in the repo to handle two camera inputs?
Can we identify which source each bounding box belongs to, given that the streams will be joined into one video?
We are already using Multi-Stream Inference with DeGirum PySDK. Unfortunately, when we try to use it with two inputs (video file or webcam), it does not start properly and runs very slowly. It only shows one window, which is completely black, and it crashes after a while. When using threading with just one of the inputs, it works fine at around 20-30 FPS. Additionally, this error appears; we are not sure if it is important. No other errors are printed…
qt.qpa.plugin: Could not find the Qt platform plugin "wayland" in "/home/teamalpha/Dokumente/hailo_examples/degirum_env/lib/python3.9/site-packages/cv2/qt/plugins"
We are working on a Raspberry Pi 5. Any help would be appreciated.
Hi @Paul_Siewert
We were able to replicate the issue you reported. It is related to the display being used by two threads and seems to be an issue on Raspberry Pi systems (we are checking whether it is specific to Raspberry Pi or a general Linux issue, as the code works properly on Windows systems). Internally, we have a method to make it work, but we need to prepare a user guide to explain how it works. We will keep you posted.
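For readers hitting the same symptom: the underlying problem is the classic one of calling GUI display functions (such as cv2.imshow) from more than one thread. A common generic workaround, separate from DeGirum's internal method, is to have capture/inference threads push frames into a shared queue and let a single thread own all display calls. A minimal sketch of that pattern, with the actual display call replaced by a list append so it runs headless (the names capture_worker and frame_queue are illustrative, not degirum_tools APIs):

```python
import queue
import threading

# Shared queue: worker threads produce frames, one consumer displays them.
frame_queue = queue.Queue(maxsize=10)

def capture_worker(source_id, n_frames):
    # Simulated capture loop: push (source_id, frame) instead of calling
    # any GUI function from this thread.
    for i in range(n_frames):
        frame_queue.put((source_id, f"frame-{i}"))
    frame_queue.put((source_id, None))  # end-of-stream marker

workers = [
    threading.Thread(target=capture_worker, args=(sid, 3))
    for sid in ("cam0", "cam1")
]
for w in workers:
    w.start()

# Single consumer (here: the main thread) owns all display calls.
# In a real app, this is where cv2.imshow(window_name, frame) would go.
finished, shown = 0, []
while finished < len(workers):
    source_id, frame = frame_queue.get()
    if frame is None:
        finished += 1
    else:
        shown.append((source_id, frame))

for w in workers:
    w.join()
print(len(shown))  # 6 frames handled by one display thread
```

This avoids two threads touching the GUI toolkit concurrently, which is what appears to break on Raspberry Pi.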
Hi @Paul_Siewert
This was not an easy issue to solve, but here is our attempt. We tested it on a Raspberry Pi with two sources (video file + webcam, and two webcams) and it works on our side, though there may be a few kinks that need to be ironed out. Please give it a try and let us know your feedback; we will work in the background to further improve it. Due to the complexity of the problem, it also requires our degirum_tools package. Please make sure you have the latest version: 0.16.4.
import degirum as dg
import degirum_tools
import degirum_tools.streams as dgstreams

inference_host_address = "@cloud"
# inference_host_address = "@local"

# choose zoo_url
zoo_url = "degirum/models_hailort"
# zoo_url = "../models"

# set token
token = degirum_tools.get_token()
# token = ''  # leave empty for local inference

# Define the configurations for video file and webcam
configurations = [
    {
        "model_name": "yolov8n_relu6_coco--640x640_quant_hailort_hailo8_1",
        "source": "../assets/Traffic.mp4",  # Video file
        "display_name": "Traffic Camera",
    },
    {
        "model_name": "yolov8n_relu6_face--640x640_quant_hailort_hailo8_1",
        "source": 1,  # Webcam index
        "display_name": "Webcam Feed",
    },
]

# load models
models = [
    dg.load_model(cfg["model_name"], inference_host_address, zoo_url, token)
    for cfg in configurations
]

# define gizmos
sources = [dgstreams.VideoSourceGizmo(cfg["source"]) for cfg in configurations]
detectors = [dgstreams.AiSimpleGizmo(model) for model in models]
display = dgstreams.VideoDisplayGizmo(
    [cfg["display_name"] for cfg in configurations], show_ai_overlay=True, show_fps=True
)

# create pipeline
pipeline = (
    (source >> detector for source, detector in zip(sources, detectors)),
    (detector >> display[di] for di, detector in enumerate(detectors)),
)

# start composition
dgstreams.Composition(*pipeline).start()
We will publish a user guide on what these functions mean and do (but it will take a while as we have more basic guides to finish before coming to this advanced topic).
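Regarding the earlier question of identifying which source each bounding box belongs to: the configurations above already carry a display_name per stream, so results coming out of each detector can be tagged with it before the streams are merged. A minimal sketch of that idea (tag_detections is a hypothetical helper, not part of degirum_tools; the detection dict layout here is illustrative):

```python
def tag_detections(detections, source_name):
    # Attach the originating stream's name to each detection dict so boxes
    # stay identifiable after results from multiple streams are combined.
    return [{**det, "source": source_name} for det in detections]

# Example: detections as produced for one frame of one stream.
detections = [{"bbox": [10, 20, 50, 60], "label": "car", "score": 0.9}]
tagged = tag_detections(detections, "Traffic Camera")
print(tagged[0]["source"])  # Traffic Camera
```

Combined lists of tagged detections can then be filtered or color-coded per source when drawn onto the joined video.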
Thank you very much for your fast reply and your detailed help. We already found a workaround: we run the code headless, without showing the webcam feed, since for our project we only need the bounding boxes. I will attach the workaround at the end of the mail in case you are interested. However, we are still experiencing issues when using our own .hef model with it. This is the error that gets printed; I will attach the json file as well:
Error loading model sunflower: Failed to perform model 'sunflower' inference: [ERROR]Execution failed
Condition 'input_tensor->shape()[ 1 ] == 4 + m_OutputNumClasses' is not met: input_tensor->shape()[ 1 ] is 1, 4 + m_OutputNumClasses is 5
dg_postprocess_detection.cpp: 1307 [DG::DetectionPostprocessYoloV8::inputDataProcessBaseline]
When running model 'sunflower'
Hi @Paul_Siewert
Is it an object detection model based on YOLO that you compiled to .hef using the settings from the .alls file? Please check the model's output tensor. If the model already includes Hailo's NMS postprocessor, you should not apply our postprocessor again. Please see this guide: User Guide 3: Simplifying Object Detection on a Hailo Device Using DeGirum PySDK
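For context on what the error message is checking: the YoloV8 postprocessor expects a raw (not yet NMS-processed) output whose second dimension equals 4 box coordinates plus one score per class; in your case it found 1 instead of 4 + 1 = 5, which is typical when the .hef already contains Hailo's on-device NMS. A rough illustration of the check (shapes here are illustrative examples for a single-class model, not taken from your .hef):

```python
def yolov8_raw_output_ok(shape, num_classes):
    # Mirrors the failing condition reported by dg_postprocess_detection.cpp:
    # input_tensor->shape()[1] must equal 4 + m_OutputNumClasses.
    return shape[1] == 4 + num_classes

# Raw YOLOv8 head for 1 class: 4 box coords + 1 class score along axis 1.
print(yolov8_raw_output_ok((1, 5, 8400), 1))    # True: postprocessor applies
# Output already packed by on-device NMS: second dim is 1, check fails.
print(yolov8_raw_output_ok((1, 1, 100, 6), 1))  # False: do not re-apply
```

So if your compiled model's output is already NMS-style, configure the model json to skip the DetectionPostprocessYoloV8 step as described in the guide above.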