User Guide: Real-Time Object Detection on RTSP Streams with DeGirum PySDK
DeGirum PySDK enables seamless integration of AI models into real-time applications, including live video analysis from RTSP-enabled cameras. This guide walks you through processing and displaying AI inference results dynamically from an RTSP stream using the YOLOv8 object detection model.
Prerequisites
DeGirum PySDK: Installed and configured on your system. See DeGirum/hailo_examples for instructions.
RTSP Camera Stream: Obtain the RTSP URL of your camera. Replace username, password, ip, and port in the script with your camera’s credentials.
Token: If using cloud inference, ensure you have a valid token. For local inference, leave the token empty.
Script Overview
This script:
Loads the YOLOv8 object detection model.
Processes the RTSP video stream to detect objects in real time.
Displays the inference results dynamically in a dedicated window.
Code Example
import degirum as dg, degirum_tools
# Choose inference host address
inference_host_address = "@cloud"
# inference_host_address = "@local"
# Choose zoo_url
zoo_url = "degirum/models_hailort"
# zoo_url = "../models"
# Set token
token = degirum_tools.get_token()
# token = '' # Leave empty for local inference
# Specify the AI model and video source
model_name = "yolov8n_relu6_coco--640x640_quant_hailort_hailo8l_1"
video_source = "rtsp://username:password@ip:port/cam/realmonitor?channel=1&subtype=0" # Replace with your camera RTSP URL
# Load the AI model
model = dg.load_model(
    model_name=model_name,
    inference_host_address=inference_host_address,
    zoo_url=zoo_url,
    token=token,
)
# Run AI inference on the video stream and display the results
with degirum_tools.Display("AI Camera") as output_display:
    for inference_result in degirum_tools.predict_stream(model, video_source):
        output_display.show(inference_result)
Steps to Run the Script
Set Up the RTSP Stream:
Replace the video_source string with the RTSP URL of your camera.
Example format: rtsp://username:password@ip:port/cam/realmonitor?channel=1&subtype=0.
Configure Inference:
Use @cloud for cloud inference or @local for local device inference.
Specify the appropriate zoo_url for accessing your model zoo.
Load the Model:
Replace model_name with a different model if you want to detect objects beyond what the default YOLOv8 COCO configuration covers.
Run the Script:
Execute the script to process the RTSP feed in real time.
The detected objects will be displayed dynamically in the window labeled “AI Camera.”
Stop the Display:
Press x or q to exit the display window.
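One common pitfall with step 1: if the camera password contains characters such as @ or /, the raw RTSP URL will fail to parse. A minimal sketch, using only the Python standard library, that percent-encodes the credentials before building the URL (the host, port, and path values below are placeholders, not your camera's actual values):

```python
from urllib.parse import quote

def build_rtsp_url(username: str, password: str, host: str, port: int, path: str) -> str:
    """Build an RTSP URL, percent-encoding the credentials so characters
    like '@' or '/' in the password do not break URL parsing."""
    user = quote(username, safe="")
    pwd = quote(password, safe="")
    return f"rtsp://{user}:{pwd}@{host}:{port}/{path.lstrip('/')}"

# Example with a password containing special characters:
video_source = build_rtsp_url(
    "admin", "p@ss/word", "192.168.1.10", 554, "cam/realmonitor?channel=1&subtype=0"
)
print(video_source)
# → rtsp://admin:p%40ss%2Fword@192.168.1.10:554/cam/realmonitor?channel=1&subtype=0
```

The resulting string can be passed directly as video_source to degirum_tools.predict_stream.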
Applications
Surveillance: Monitor live feeds for security and safety.
Traffic Analysis: Analyze vehicles and pedestrians in real time.
Industrial Monitoring: Detect objects in manufacturing or warehouse operations.
Additional Resources
For more examples and advanced use cases, visit our Hailo Examples Repository. This repository provides scripts and guidance for deploying AI models on various hardware configurations.
Feel free to post your questions or share your experiences with this setup in the comments below!
Hi,
Thanks for sharing this DeGirum PySDK.
How can we detect just one category, for example "car"?
Can we use our own YOLOv8 model with DeGirum PySDK?
There are two methods to detect just one category:
Filter the results as below:
import degirum as dg, degirum_tools
inference_host_address = "@local"
zoo_url = 'degirum/hailo'
token=''
device_type=['HAILORT/HAILO8L']
model_name = "yolov8n_relu6_coco--640x640_quant_hailort_hailo8l_1"
image_source = "<path to image>"
classes = {"car"}
# load model with set desired classes for output
model = dg.load_model(
    model_name=model_name,
    inference_host_address=inference_host_address,
    zoo_url=zoo_url,
    token=token,
    output_class_set=classes,
)
# Run AI model on image
inference_result = model(image_source)
# print AI inference results
print(inference_result)
# AI prediction: show only desired classes
with degirum_tools.Display("Desired classes (press 'q' to exit)") as output_display:
    output_display.show_image(inference_result)
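The reply above mentions two methods; besides passing output_class_set at load time as shown, the second option is to run the model unfiltered and filter the result list yourself afterward. A minimal sketch of that post-filtering step, assuming detections follow the usual PySDK convention of dicts with a "label" key (the sample data here is made up for illustration):

```python
def filter_detections(detections, wanted_labels):
    """Keep only the detections whose label is in wanted_labels."""
    return [d for d in detections if d.get("label") in wanted_labels]

# Made-up sample in the usual detection-dict shape:
detections = [
    {"label": "car", "score": 0.91, "bbox": [10, 20, 110, 220]},
    {"label": "person", "score": 0.88, "bbox": [50, 30, 90, 200]},
    {"label": "car", "score": 0.75, "bbox": [200, 40, 300, 180]},
]

cars_only = filter_detections(detections, {"car"})
print(len(cars_only))  # → 2
```

Post-filtering is useful when you want to keep the full result for logging but display only one category; output_class_set is simpler when you never need the other classes.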
Running inference with the following parameters:
Inference Host Address: @local
Model Zoo URL: degirum/hailo
Model Name: yolov8n_relu6_coco--640x640_quant_hailort_hailo8_1
Image Source: https://raw.githubusercontent.com/DeGirum/PySDKExamples/main/images/ThreePersons.jpg
Token: Loaded from environment
Extra Args: {}
Traceback (most recent call last):
File "/home/pi/degirum_env/bin/degirum_cli", line 8, in <module>
sys.exit(cli())
^^^^^
File "/home/pi/degirum_env/lib/python3.11/site-packages/click/core.py", line 1161, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pi/degirum_env/lib/python3.11/site-packages/click/core.py", line 1082, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "/home/pi/degirum_env/lib/python3.11/site-packages/click/core.py", line 1697, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pi/degirum_env/lib/python3.11/site-packages/click/core.py", line 1443, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pi/degirum_env/lib/python3.11/site-packages/click/core.py", line 788, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pi/degirum_env/lib/python3.11/site-packages/degirum_cli/cli.py", line 73, in predict_image
predict_image_main(
File "/home/pi/degirum_env/lib/python3.11/site-packages/degirum_cli/predict_image.py", line 37, in run_inference
model = dg.load_model(**model_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pi/degirum_env/lib/python3.11/site-packages/degirum/__init__.py", line 220, in load_model
return zoo.load_model(model_name, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pi/degirum_env/lib/python3.11/site-packages/degirum/log.py", line 59, in wrap
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/pi/degirum_env/lib/python3.11/site-packages/degirum/zoo_manager.py", line 266, in load_model
model = self._zoo.load_model(model_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pi/degirum_env/lib/python3.11/site-packages/degirum/log.py", line 59, in wrap
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/pi/degirum_env/lib/python3.11/site-packages/degirum/_zoo_accessor.py", line 636, in load_model
raise DegirumException(
degirum.exceptions.DegirumException: Cloud Model 'yolov8n_relu6_coco--640x640_quant_hailort_hailo8_1' does not have any supported runtime/device combinations that will work on this system.
I don’t know why.
Do you have an idea what the error is?
Hi @iulian_paul_NAIDIN
From the error message, it seems like you do not have a Hailo8 device. Can you please run degirum sys-info and let me know the output? Also, can you add --token 'abcd' to the command?
Model name changed to yolov8n_relu6_coco--640x640_quant_hailort_hailo8l_1 from yolov8n_relu6_coco--640x640_quant_hailort_hailo8_1. Note the extra l in hailo8l.
I’m working on a similar project analyzing traffic on a Raspberry Pi 5 (8 GB). I have 3 RTSP streams, so my CPU load may increase if I use YOLO. If I want to use SSDmobile.hef instead, how can I do that? I’m a newbie at this, so if I’m wrong somewhere please correct me. Thanks in advance.