Running multiple models independently

User Guide: Running Multi-Stream Inference with DeGirum PySDK

DeGirum PySDK simplifies the integration of AI models into applications, enabling powerful inference workflows with minimal code. PySDK allows users to deploy models across cloud and edge devices with ease. For additional examples and hardware setup instructions, visit our Hailo Examples Repository.


Overview

This guide demonstrates how to run AI inference on multiple video streams simultaneously using DeGirum PySDK. The script employs multithreading to process video streams independently, supporting both video files and webcam feeds. Each stream uses a different AI model, showcasing the flexibility and scalability of PySDK.


Example: Multi-Stream Inference

In this example:

  1. Traffic Camera Stream: Detects objects in a video file using the YOLOv8 object detection model.
  2. Webcam Stream: Detects faces in a live webcam feed using a face detection model.

Both streams are processed concurrently, and results are displayed in separate windows.


Code Reference

import threading
import degirum as dg
import degirum_tools

# choose inference host address
inference_host_address = "@cloud"
# inference_host_address = "@local"

# choose zoo_url
zoo_url = "degirum/models_hailort"
# zoo_url = "../models"

# set token
token = degirum_tools.get_token()
# token = '' # leave empty for local inference

# Define the configurations for video file and webcam
configurations = [
    {
        "model_name": "yolov8n_relu6_coco--640x640_quant_hailort_hailo8_1",
        "source": "../assets/Traffic.mp4",  # Video file
        "display_name": "Traffic Camera"
    },
    {
        "model_name": "yolov8n_relu6_face--640x640_quant_hailort_hailo8_1",
        "source": 1,  # Webcam index
        "display_name": "Webcam Feed"
    }
]

# Function to run inference on a video stream (video file or webcam)
def run_inference(model_name, source, inference_host_address, zoo_url, token, display_name):
    # Load AI model
    model = dg.load_model(
        model_name=model_name,
        inference_host_address=inference_host_address,
        zoo_url=zoo_url,
        token=token
    )

    with degirum_tools.Display(display_name) as output_display:
        for inference_result in degirum_tools.predict_stream(model, source):
            output_display.show(inference_result)
    print(f"Stream '{display_name}' has finished.")

# Create and start threads
threads = []
for config in configurations:
    thread = threading.Thread(
        target=run_inference,
        args=(
            config["model_name"],
            config["source"],
            inference_host_address,
            zoo_url,
            token,
            config["display_name"]
        )
    )
    threads.append(thread)
    thread.start()

# Wait for all threads to finish
for thread in threads:
    thread.join()

print("All streams have been processed.")

How It Works

  1. Model Configuration:

    • Define a list of configurations for each stream, specifying the model name, video source, and display name.
  2. Multithreading:

    • Each configuration is processed in its own thread, allowing multiple streams to run concurrently.
  3. Inference Execution:

    • The run_inference function loads the specified model using dg.load_model and processes the video source using degirum_tools.predict_stream.
  4. Result Display:

    • Each stream’s output is displayed in a dedicated window using degirum_tools.Display.
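Beyond displaying the overlay, you can inspect detections programmatically inside the `predict_stream` loop. The sketch below assumes each inference result exposes a `results` list of detection dicts with `label`, `score`, and `bbox` keys; treat that shape as an assumption and verify it against your model's actual output (e.g. by printing one result):

```python
def count_labels(detections, wanted_label, min_score=0.5):
    """Count detections of a given class above a confidence threshold.

    `detections` is assumed to look like a PySDK detection result list:
    [{"label": str, "score": float, "bbox": [x1, y1, x2, y2]}, ...]
    """
    return sum(
        1
        for det in detections
        if det.get("label") == wanted_label and det.get("score", 0) >= min_score
    )

# Mock detections standing in for inference_result.results:
mock_detections = [
    {"label": "car", "score": 0.91, "bbox": [10, 20, 110, 220]},
    {"label": "car", "score": 0.42, "bbox": [300, 40, 380, 160]},
    {"label": "person", "score": 0.88, "bbox": [50, 60, 90, 200]},
]
print(count_labels(mock_detections, "car"))  # 1 (the 0.42 car is filtered out)
```

Inside `run_inference`, a call like this could run right after `output_display.show(inference_result)` to drive per-stream counting or logging.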

Applications

  • Monitoring multiple cameras in real-time for security and surveillance.
  • Analyzing different video sources for smart infrastructure or retail applications.
  • Demonstrating the parallel processing capabilities of DeGirum PySDK on various hardware setups.

Additional Resources

For more detailed examples and instructions on deploying models with Hailo hardware, visit the Hailo Examples Repository. This repository includes tailored scripts for optimizing AI workloads on edge devices.


Hello @shashi, the program only shows a display with inference results from the video file, while it is supposed to show the webcam stream as well. If I comment out the video file entry in the configurations, then it shows the window with webcam inference. Can you suggest what the reason might be?

@kyurrii
This is due to some limitations of the Raspberry Pi. We solved this issue here: Using gstreamer with 2 different USB camera inputs and one .hef model - #5 by shashi. Please take a look and see if it helps.


Hi @shashi, yes it works, thanks !


Does this tutorial work with 2 USB camera inputs using the same model?
I tried to follow this tutorial using 2 USB cameras and a person detection model. I’m expecting two output displays, each one showing what the corresponding camera detects. The issue: it only opens the last item in the configurations array, in an output_display with a black screen.

Hi @AbnerDC
Did you follow the instructions in Using gstreamer with 2 different USB camera inputs and one .hef model - #5 by shashi?

Hello @shashi, yes I did, but I couldn’t make it fit into my code. I didn’t get how to process the inference results.

Hi @AbnerDC
If you can share your code snippet and any error messages you see, we can help.

I don’t clearly get how to run this infer function on two cameras at the same time.

I have this:

configs = [
        {
            "VIDEO_SOURCE": 0,
            "DEVICE_NAME": "CAMERA1"
        },
        {
            "VIDEO_SOURCE": 1,
            "DEVICE_NAME": "CAMERA2"
        }
    ]

person_detection_model_path = "yolo11n_silu_coco--640x640_quant_hailort_hailo8l_1"

person_detection_model = dg.load_model(
    model_name=person_detection_model_path,
    inference_host_address='@local',
    zoo_url=zoo_url,
    overlay_color=(0, 255, 0),
    output_class_set={"person"}
)

def infer(device_name, source):
  with degirum_tools.Display(device_name) as output_display:
      # for detected_persons in degirum_tools.predict_stream(person_detection_model, video_source, analyzers=[tracker, line_counter]):
      for detected_persons in degirum_tools.predict_stream(person_detection_model, source, analyzers=[tracker]):
          # ---->>> here: all the logic to manage data of detected persons

But I need it to fit with the given example

Hi @AbnerDC
Please try this:

import degirum as dg, degirum_tools

# Camera sources: 0 and 1 represent connected webcam indices
sources = [0, 1]

# Hailo configuration
inference_host_address = "@local"  # Use IP if remote Hailo device
zoo_url = "degirum/hailo"
token = ""  # Leave empty if using public models

# Load the Hailo-compatible model using dg.load_model
model1 = dg.load_model(
    model_name="yolo11n_silu_coco--640x640_quant_hailort_hailo8l_1",
    zoo_url=zoo_url,
    token=token,
    device_type="HAILORT/HAILO8L",
    inference_host_address=inference_host_address
)

model2 = dg.load_model(
    model_name="yolo11n_silu_coco--640x640_quant_hailort_hailo8l_1",
    zoo_url=zoo_url,
    token=token,
    device_type="HAILORT/HAILO8L",
    inference_host_address=inference_host_address
)

# Display and inference loop
with degirum_tools.Display("Camera 0") as display1, degirum_tools.Display("Camera 1") as display2:
    for result1, result2 in zip(
        degirum_tools.predict_stream(model1, sources[0]),
        degirum_tools.predict_stream(model2, sources[1])
    ):
        display1.show(result1)
        display2.show(result2)
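One caveat with this pattern: zip advances both streams in lockstep and stops as soon as the shorter one ends, so if one camera disconnects, frames from the other are silently dropped. The snippet below is a plain-Python illustration of that behavior (the lists stand in for the two result streams); for fully independent streams, the threaded version at the top of this thread avoids lockstep entirely.

```python
from itertools import zip_longest

# Stand-ins for two result streams of unequal length:
fast = [1, 2, 3, 4]
slow = ["a", "b"]

# zip stops at the shorter stream; items 3 and 4 are dropped:
print(list(zip(fast, slow)))          # [(1, 'a'), (2, 'b')]

# zip_longest keeps yielding until the longer stream is exhausted,
# padding the finished one with None (skip None before calling show()):
print(list(zip_longest(fast, slow)))  # [(1, 'a'), (2, 'b'), (3, None), (4, None)]
```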

I replicated it and it works. I appreciate your help!
