Hi @The_Nguyen,
It’s great to know that you were able to run 3 cameras with 3 AI models using DeGirum PySDK. Here’s an optimized approach that should reduce CPU usage as you scale to 5 cameras or even more.
You can use the dg_streams gizmos, which are optimized for running multi-camera inference. You can find a detailed guide here: PySDKExamples/examples/dgstreams/multi_camera_multi_model_detection.ipynb at main · DeGirum/PySDKExamples
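The key idea behind gizmos is that each pipeline stage (video capture, preprocessing, AI inference, display) runs as its own building block in its own thread, and you connect stages with the `>>` operator inside a Composition, which pushes frames through queues instead of each camera running its own busy loop. A minimal single-camera sketch, using the same classes, model, and zoo as in the full script below:

import degirum as dg
from degirum_tools import streams as dgstreams

# Load one model and chain capture -> inference -> display into a composition
model = dg.load_model(
    model_name="yolov8n_relu6_coco--640x640_quant_hailort_hailo8_1",
    inference_host_address="@local",
    zoo_url="degirum/models_hailort",
    token="",
)
source = dgstreams.VideoSourceGizmo(
    "https://raw.githubusercontent.com/DeGirum/PySDKExamples/main/images/Traffic.mp4"
)
detector = dgstreams.AiSimpleGizmo(model)
display = dgstreams.VideoDisplayGizmo("Camera 1", show_ai_overlay=True, show_fps=True)

# '>>' connects a gizmo's output stream to the next gizmo's input
dgstreams.Composition(source >> detector >> display).start()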
Below is a quick example that runs the gizmos so you can test CPU usage.
- Run the script below, changing the model name and video sources to match your setup:
import degirum as dg, degirum_tools
from degirum_tools import streams as dgstreams
import time
# === Configuration ===
video_sources = [
    "https://raw.githubusercontent.com/DeGirum/PySDKExamples/main/images/WalkingPeople.mp4",
    "https://raw.githubusercontent.com/DeGirum/PySDKExamples/main/images/Parking.mp4",
    "https://raw.githubusercontent.com/DeGirum/PySDKExamples/main/images/Traffic.mp4",
    "https://raw.githubusercontent.com/DeGirum/PySDKExamples/main/images/Traffic2.mp4",
    "https://raw.githubusercontent.com/DeGirum/PySDKExamples/main/images/TrafficHD.mp4",
    "https://raw.githubusercontent.com/DeGirum/PySDKExamples/main/images/WalkingPeople2.mp4",
    "https://raw.githubusercontent.com/DeGirum/PySDKExamples/main/images/example_video.mp4",
    "https://raw.githubusercontent.com/DeGirum/PySDKExamples/main/images/faces_and_gender.mp4",
    "https://raw.githubusercontent.com/DeGirum/PySDKExamples/main/images/person_face_hand.mp4",
    "https://raw.githubusercontent.com/DeGirum/PySDKExamples/main/images/person_pose.mp4",
]
model_name = "yolov8n_relu6_coco--640x640_quant_hailort_hailo8_1"
model_zoo_url = "degirum/models_hailort"
hw_location = "@local"
token = ""
# === Load the model once ===
model = dg.load_model(
    model_name=model_name,
    inference_host_address=hw_location,
    zoo_url=model_zoo_url,
    token=token,
    overlay_line_width=2
)
# === Create all pipeline components ===
sources = [dgstreams.VideoSourceGizmo(src, stop_composition_on_end=False) for src in video_sources]
resizers = [dgstreams.AiPreprocessGizmo(model) for _ in video_sources]
detectors = [dgstreams.AiSimpleGizmo(model) for _ in video_sources]
win_captions = [f"Stream #{i+1}" for i in range(len(video_sources))]
display = dgstreams.VideoDisplayGizmo(
    win_captions, show_ai_overlay=True, show_fps=True, multiplex=True
)
# === Compose full pipeline ===
pipeline = [
    source >> resizer >> detector >> display
    for source, resizer, detector in zip(sources, resizers, detectors)
]
composition = dgstreams.Composition(*pipeline)
composition.start(wait=False)  # wait=False returns immediately so we can stop the pipelines on a timer
print("Composition started. Running for 100 seconds...")

# === Timed exit ===
run_seconds = 100
time.sleep(run_seconds)

# === Stop ===
composition.stop()
print(f"✅ Stopped after {run_seconds} seconds.")
- Open another terminal and run the script below to measure CPU usage while the script above is running:
import csv, os, time
import psutil
from datetime import datetime

# === Measurement settings (adjust as needed) ===
duration = 60                      # seconds to sample CPU usage
label = "gizmos"                   # label describing the run being measured
output_file = "cpu_usage_log.csv"  # CSV file to append results to

cpu_samples = []
start_time = time.time()
while time.time() - start_time < duration:
    usage = psutil.cpu_percent(interval=1)  # CPU % averaged over the last second
    cpu_samples.append(usage)

avg_cpu = round(sum(cpu_samples) / len(cpu_samples), 2)
timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")

file_exists = os.path.exists(output_file)
with open(output_file, mode="a", newline="") as f:
    writer = csv.writer(f)
    if not file_exists:
        writer.writerow(["timestamp", "label", "avg_cpu"])
    writer.writerow([timestamp, label, avg_cpu])

print(f"Average CPU usage using {label}: {avg_cpu}%")
This script samples CPU usage once per second and reports the average (avg_cpu) at the end of the specified duration, appending the result to a CSV file so you can compare different configurations.
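If you log several runs with different labels (for example, your current per-camera loops vs. the gizmo pipeline), a small helper like the sketch below can read the CSV back and print the average CPU usage per label, assuming the cpu_usage_log.csv file written by the script above:

import csv
from collections import defaultdict

# Group logged avg_cpu values by label and print the mean per label
samples_by_label = defaultdict(list)
with open("cpu_usage_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        samples_by_label[row["label"]].append(float(row["avg_cpu"]))

for label, values in samples_by_label.items():
    avg = sum(values) / len(values)
    print(f"{label}: {avg:.2f}% average CPU over {len(values)} run(s)")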