How to use two cameras with Yolo modules for RPI5 with HAT+ Hailo?

Hello everyone in the community.

As mentioned in the title, my hardware setup consists of a Raspberry Pi 5 with 16GB RAM, an AI HAT+ board with Hailo (26 TOPS), running Pi OS 64-bit. I’ve been using the YOLO framework for a while, and I’m now starting to work with HAILO.

My challenge is finding a way to run an existing YOLO (version 8 or 11) script while leveraging my HAILO 8 for processing. I understand that the chip is only compatible with models in the “.hef” format. Additionally, I’m unable to download the Hailo Dataflow Compiler – Python package (whl) because I currently don’t have access to a machine that meets the required specifications.

Despite this, I know there are prebuilt .hef models converted from other YOLO .pt models. Based on the example code below, could someone guide me on how to create a similar Python script, with the only difference being that the Hailo accelerator is used to process the frames captured by my USB camera?

I currently don’t have any Raspberry Pi cameras available…

My example code:

from ultralytics import YOLO
import cv2
import math
import pygame
import threading

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)   # prop 3; default 640, best 160
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)  # prop 4; default 480, best 120
#cap.set(cv2.CAP_PROP_FPS, 60)


frame_count = 0 	
process_interval = 10 

# YOLO model
model = YOLO("yolo11n.pt")  # alternative: yolov5nu_ncnn_model


pygame.init()
pygame.mixer.init()
alarme_som = pygame.mixer.Sound("alarm.wav")
alarmeCtl = False

# Setup GPIO
out1 = 23
out2 = 24
#h = lgpio.gpiochip_open(0)
#lgpio.gpio_claim_output(h, out1)
#lgpio.gpio_claim_output(h, out2)

#def control_sig(on):
#    if on:
#        lgpio.gpio_write(h,out1,0)
#        lgpio.gpio_write(h,out2,1)
#       time.sleep(1)
#    else:
#        lgpio.gpio_write(h,out1,1)
#        lgpio.gpio_write(h,out2,0)

def alarme():
    global alarmeCtl
    for _ in range(1): 
        alarme_som.play()
        pygame.time.wait(250) # Time
    alarmeCtl = False


area = [200, 25, 320, 300]  
def check_overlap(box, area):
    x1, y1, x2, y2 = area    # Coord
    bx1, by1, bx2, by2 = box # CoordBounding Box

    return not (bx1 > x2 or bx2 < x1 or by1 > y2 or by2 < y1)

classNames = ["person"]

while True:
    success, img = cap.read()

    if not success:
        print("Camera read error")
        break

    img_resized = cv2.resize(img, (640, 480)) 
    mask = img.copy() 


    cv2.rectangle(mask, (area[0], area[1]), (area[2], area[3]), (0, 255, 0), -1)

    if frame_count % process_interval == 0: 

        results = model(img_resized,classes=[0])
        detect_person = False 

        for r in results:
            boxes = r.boxes
            for box in boxes:
            # Bounding box
                x1, y1, x2, y2 = box.xyxy[0]
                x1, y1, x2, y2 = int(x1), int(y1), int(x2), int(y2)
                cls = int(box.cls[0])

                if cls == 0: 
                    cv2.rectangle(img, (x1, y1), (x2, y2), (255, 0, 0), 2)

                    if check_overlap((x1, y1, x2, y2), area): 
                        cv2.rectangle(mask, (area[0], area[1]), (area[2], area[3]), (0, 0, 255), -1) 

                        if not alarmeCtl: 
                            alarmeCtl = True
                            threading.Thread(target=alarme).start()

                            print("Someone entered the area!")

                        detect_person = True

     #   if not detect_person:
     #       control_sig(False)

    frame_count += 1

    imgFinal = cv2.addWeighted(mask, 0.5, img, 0.5, 0) 
    cv2.imshow('Webcam', imgFinal)
    if cv2.waitKey(1) == ord('q'):
        # lgpio setup is commented out above, so keep these writes disabled too
        #lgpio.gpio_write(h, out1, 0)
        #lgpio.gpio_write(h, out2, 0)
        break

cap.release()
cv2.destroyAllWindows()
pygame.quit()
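
As an aside, the rectangle-overlap test in the script above can be sanity-checked in isolation; it is a standard axis-aligned bounding-box intersection test, using the same (x1, y1, x2, y2) corner convention as the script:

```python
def check_overlap(box, area):
    """True if two axis-aligned rectangles (x1, y1, x2, y2) intersect."""
    x1, y1, x2, y2 = area
    bx1, by1, bx2, by2 = box
    # Rectangles are disjoint iff one lies entirely past a side of the other
    return not (bx1 > x2 or bx2 < x1 or by1 > y2 or by2 < y1)

area = [200, 25, 320, 300]
print(check_overlap((250, 100, 300, 200), area))  # True: box inside area
print(check_overlap((0, 0, 100, 100), area))      # False: entirely left of area
print(check_overlap((150, 200, 210, 260), area))  # True: partial overlap
```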


I’m sorry about the title, because it has no connection to the topic… It was a mistake: I was checking other topics to see if I could find a solution, and I copied and pasted the title. My bad… (I’d like to edit or remove it, but that option isn’t available.)

For now, I found some info about the degirum lib, but I’m still having difficulty using it… Can anybody help me?

Hi @Luiz_Mageste
Please let me know what issues you are having with the degirum lib and we can help you. Are you unable to set it up properly? Or to run the examples?

Hello @shashi. Thank you for your reply. Let’s start with the basics… I know it’s possible to run USB cameras (info I found in the “hailo-rpi5-examples” git repo), but for some reason I’m getting an error there:

(venv_hailo_rpi5_examples) rasp5@raspberrypi:~/hailo-rpi5-examples/basic_pipelines $ v4l2-ctl -d /dev/video0 --set-fmt-video=width=640,height=480,pixelformat=MJPG
(venv_hailo_rpi5_examples) rasp5@raspberrypi:~/hailo-rpi5-examples/basic_pipelines $ python3 detection.py --input /dev/video0 --show-fps
Auto-detected Hailo architecture: hailo8
v4l2src device=/dev/video0 name=source ! image/jpeg, framerate=30/1, width=1280, height=720 ! queue name=source_queue_decode leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0  ! decodebin name=source_decodebin ! videoflip name=videoflip video-direction=horiz !  queue name=source_scale_q leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0  ! videoscale name=source_videoscale n-threads=2 ! queue name=source_convert_q leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0  ! videoconvert n-threads=3 name=source_convert qos=false ! video/x-raw, pixel-aspect-ratio=1/1, format=RGB, width=1280, height=720  ! queue name=inference_wrapper_input_q leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0  ! hailocropper name=inference_wrapper_crop so-path=/usr/lib/aarch64-linux-gnu/hailo/tappas/post_processes/cropping_algorithms/libwhole_buffer.so function-name=create_crops use-letterbox=true resize-method=inter-area internal-offset=true hailoaggregator name=inference_wrapper_agg inference_wrapper_crop. ! queue name=inference_wrapper_bypass_q leaky=no max-size-buffers=20 max-size-bytes=0 max-size-time=0  ! inference_wrapper_agg.sink_0 inference_wrapper_crop. ! queue name=inference_scale_q leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0  ! videoscale name=inference_videoscale n-threads=2 qos=false ! queue name=inference_convert_q leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0  ! video/x-raw, pixel-aspect-ratio=1/1 ! videoconvert name=inference_videoconvert n-threads=2 ! queue name=inference_hailonet_q leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0  ! hailonet name=inference_hailonet hef-path=/home/rasp5/hailo-rpi5-examples/venv_hailo_rpi5_examples/lib/python3.11/site-packages/hailo_apps_infra/../resources/yolov8m.hef batch-size=2  vdevice-group-id=1 nms-score-threshold=0.3 nms-iou-threshold=0.45 output-format-type=HAILO_FORMAT_TYPE_FLOAT32 force-writable=true  ! 
queue name=inference_hailofilter_q leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0  ! hailofilter name=inference_hailofilter so-path=/home/rasp5/hailo-rpi5-examples/venv_hailo_rpi5_examples/lib/python3.11/site-packages/hailo_apps_infra/../resources/libyolo_hailortpp_postprocess.so   function-name=filter_letterbox  qos=false ! queue name=inference_output_q leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0   ! inference_wrapper_agg.sink_1 inference_wrapper_agg. ! queue name=inference_wrapper_output_q leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0   ! hailotracker name=hailo_tracker class-id=1 kalman-dist-thr=0.8 iou-thr=0.9 init-iou-thr=0.7 keep-new-frames=2 keep-tracked-frames=15 keep-lost-frames=2 keep-past-metadata=False qos=False ! queue name=hailo_tracker_q leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0   ! queue name=identity_callback_q leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0  ! identity name=identity_callback  ! queue name=hailo_display_overlay_q leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0  ! hailooverlay name=hailo_display_overlay  ! queue name=hailo_display_videoconvert_q leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0  ! videoconvert name=hailo_display_videoconvert n-threads=2 qos=false ! queue name=hailo_display_q leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0  ! fpsdisplaysink name=hailo_display video-sink=autovideosink sync=false text-overlay=True signal-fps-measurements=true 
Showing FPS
Error: gst-stream-error-quark: Internal data stream error. (1), ../libs/gst/base/gstbasesrc.c(3132): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:source:
streaming stopped, reason not-negotiated (-4)
Shutting down... Hit Ctrl-C again to force quit.
Exiting with error...

I’m using a 4K USB camera, which is the reason I changed the resolution… I don’t know if the problem is the resolution or the video format…
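
One way to see what format OpenCV actually negotiated with a camera is to read back CAP_PROP_FOURCC and decode it. The decoding itself is plain bit-twiddling; here is a small pure-Python sketch (the commented cv2 lines are illustrative only, since they need a connected camera):

```python
def fourcc_to_str(code: int) -> str:
    """Decode a V4L2/OpenCV FOURCC integer into its 4-character code."""
    return "".join(chr((code >> 8 * i) & 0xFF) for i in range(4))

def str_to_fourcc(code: str) -> int:
    """Pack a 4-character code (e.g. "MJPG") into its integer form."""
    return sum(ord(c) << 8 * i for i, c in enumerate(code))

# Illustrative OpenCV usage (requires a connected camera):
#   cap.set(cv2.CAP_PROP_FOURCC, str_to_fourcc("MJPG"))
#   print(fourcc_to_str(int(cap.get(cv2.CAP_PROP_FOURCC))))

print(fourcc_to_str(str_to_fourcc("MJPG")))  # MJPG
```

If the camera cannot deliver the requested format/resolution combination, the driver silently falls back to something else, which is a common cause of downstream “not-negotiated” pipeline errors.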

Hi @Luiz_Mageste
The code snippet you shared does not use degirum pysdk.

Yeah, as I said, this is a basic example from “hailo-rpi5-examples”. I have no idea why I can’t access the USB camera. I also tried the PySDK, but the simple Python sample scripts don’t work either, so I can’t continue… I tried to follow the steps of this guide (Hailo guide 3: Simplifying object detection on a Hailo device using DeGirum PySDK - Guides - DeGirum), but no success… I’m a little lost about the models, because the guide says to use the .hef extension, but the downloads there only gave me a .zip with ONNX and JSON files inside. Is that correct? Anyway, @shashi, do you have any suggestion for a Python script I can use to run my USB camera with Hailo-8 processing? It can be with or without PySDK/DeGirum… I just don’t get why my camera is not running correctly. I tried GPT; it told me to change some specs in the pipeline code to make my USB camera run properly (something like “v4l2src device=/dev/video0 name=source ! image/jpeg, framerate=30/1, width=640, height=480 !”), but I couldn’t find where to do that (I have no idea what that means or where to find it…)

In your guide, you use an image of a cat… Can I try the same, but using real-time video capture from an external camera? (It looks like the RaspCam works well… the problem I have is using USB cams.)

Hi @Luiz_Mageste
Sure. You can see our video example in our quick start guide notebook: hailo_examples/examples/001_quick_start.ipynb at main · DeGirum/hailo_examples

Thanks a lot, @shashi. I will take a look and come back with some feedback.

Hello @shashi. Again, thank you for your support. I got it running with no problems using a simple script…

import cv2
import degirum as dg
import degirum_tools

model = dg.load_model(
    model_name="yolov8n_relu6_coco--640x640_quant_hailort_hailo8_1",
    inference_host_address="@local",
    zoo_url="degirum/hailo",
    token='',
    device_type=["HAILORT/HAILO8"]
)

cap = cv2.VideoCapture(0)

if not cap.isOpened():
    print("Error opening USB camera.")
    exit()

print("Running YOLOv8 + Hailo. Press 'q' to exit.")

while True:
    ret, frame = cap.read()
    if not ret:
        print("Frame error.")
        break

    result = model(frame)
    cv2.imshow("YOLOv8 Hailo - Webcam", result.image_overlay)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Now I have two questions about the line: model_name="yolov8n_relu6_coco--640x640_quant_hailort_hailo8_1"

How can I change it? I mean… I want to use a custom dataset. In the link I shared from your guide (hailo_model_zoo/docs/public_models/HAILO8/HAILO8_object_detection.rst at master · hailo-ai/hailo_model_zoo · GitHub), the download options come as a zip file with only an ONNX inside. How can I generate the .hef and .json files, and how/where should I place these files?

Hi @Luiz_Mageste
The Hailo model zoo provides not only zip folders with ONNX files but also precompiled models in .hef format. Sometimes on GitHub you cannot see the extra columns, but if you scroll to the right, you will see the precompiled .hef links as well. We have published guides on how to go from .hef files to fully working models. Please see User Guide 1 Hailo World: Running Your First Inference on a Hailo Device Using DeGirum PySDK and the follow-up guides in the series.


What a shame… :sweat_smile: I found it, thanks!

Hello again @shashi, and thanks for sharing some references to study. I’ve made good progress with the suggested content. I have a new question (and I hope it’s one of the last). I was having trouble adjusting the model setup to use a directory on my machine, but I managed to sort it out. My current scenario is as follows:

I have a directory /home/rasp5/DeGirum_hailo_examples/models. Inside, I have a folder “yolov8n_relu6_coco--640x640_quant_hailort_hailo8_1” similar to the one shown in Guide 1. In this same models folder, I created a new one named “yolov11m”, which contains the yolov11m.hef model, yolov11m.json, and “labels_yolov8n_relu6_coco.json” (yes, I’m using the same labels file as in the example).

My yolov11m.json is as follows:

{
    "ConfigVersion": 10,
    "Checksum": "40a9f567771ad5a76a36c6383662cc477247b973a453d0ded0a0d5689b43b488",
    "DEVICE": [
        {
            "DeviceType": "HAILO8",
            "RuntimeAgent": "HAILORT",
            "SupportedDeviceTypes": "HAILO8"
        }
    ],
    "PRE_PROCESS": [
        {
            "InputN": 1,
            "InputH": 640,
            "InputW": 640,
            "InputC": 3,
            "InputQuantEn": true
        }
    ],
    "MODEL_PARAMETERS": [
        {
            "ModelPath": "yolov11m.hef"
        }
    ],
    "POST_PROCESS": [
        {
            "OutputPostprocessType": "",
            "OutputNumClasses": 80,
            "LabelsPath": "labels_yolov8n_relu6_coco.json"
        }
    ]
}
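
As a side note, JSON pasted through a word processor or forum editor often picks up curly quotes, which JSON parsers reject, as do stray or missing commas. Before loading a hand-edited model config, it is worth round-tripping it through Python's stdlib json module. A small sketch (validate_json_text is a made-up helper name, not part of any SDK):

```python
import json

def validate_json_text(text: str):
    """Return None if text parses as JSON, else a human-readable error."""
    try:
        json.loads(text)
        return None
    except json.JSONDecodeError as e:
        return f"line {e.lineno}, column {e.colno}: {e.msg}"

print(validate_json_text('{"DeviceType": "HAILO8"}'))  # None (valid JSON)
print(validate_json_text('{“DeviceType”: “HAILO8”}'))  # parse error (curly quotes)
```

The same check works on a file by reading it first, e.g. validate_json_text(open("yolov11m.json", encoding="utf-8").read()).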

My Python code is:

import cv2
import degirum as dg
import degirum_tools


model = dg.load_model(
    model_name="yolov11m",  
    inference_host_address="@local",  
    zoo_url="/home/rasp5/DeGirum_hailo_examples/models",  
    token='',
    device_type=["HAILORT/HAILO8"] #only HAILO8?
)

cap = cv2.VideoCapture(0)

if not cap.isOpened():
    print("Error opening USB camera.")
    exit()

print("Starting YOLOv11m + Hailo (local). Press 'q' to exit.")

while True:
    ret, frame = cap.read()
    if not ret:
        print("Frame not captured.")
        break


    result = model(frame)

    cv2.imshow("YOLOv11m Hailo", result.image_overlay)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

I think the problem is with the “OutputPostprocessType” field in the .json, under “POST_PROCESS”. I know that in the example model with yolov8n the field uses the value “DetectionYoloV8”. In my case, if I use “DetectionYoloV8” or leave it empty (“”), I get an error, and if I use “None”, my code opens the window and streams video in real time, but without the post-processing bounding boxes and object recognition…

When I execute my Python script (with OutputPostprocessType = “” or “DetectionYoloV8”), my terminal returns:

(degirum_env) rasp5@raspberrypi:~ $ python3 run3.py 
Traceback (most recent call last):
  File "/home/rasp5/run3.py", line 6, in <module>
    model = dg.load_model(
            ^^^^^^^^^^^^^^
  File "/home/rasp5/DeGirum_hailo_examples/degirum_env/lib/python3.11/site-packages/degirum/__init__.py", line 220, in load_model
    return zoo.load_model(model_name, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rasp5/DeGirum_hailo_examples/degirum_env/lib/python3.11/site-packages/degirum/log.py", line 59, in wrap
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "/home/rasp5/DeGirum_hailo_examples/degirum_env/lib/python3.11/site-packages/degirum/zoo_manager.py", line 266, in load_model
    model = self._zoo.load_model(model_name)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rasp5/DeGirum_hailo_examples/degirum_env/lib/python3.11/site-packages/degirum/log.py", line 59, in wrap
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "/home/rasp5/DeGirum_hailo_examples/degirum_env/lib/python3.11/site-packages/degirum/_zoo_accessor.py", line 312, in load_model
    supported_device_types = self._model_supported_device_types(model_params)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rasp5/DeGirum_hailo_examples/degirum_env/lib/python3.11/site-packages/degirum/_zoo_accessor.py", line 176, in _model_supported_device_types
    ret = [
          ^
  File "/home/rasp5/DeGirum_hailo_examples/degirum_env/lib/python3.11/site-packages/degirum/_zoo_accessor.py", line 179, in <listcomp>
    if check_runtime_device_supported(
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rasp5/DeGirum_hailo_examples/degirum_env/lib/python3.11/site-packages/degirum/_filter_models.py", line 51, in check_runtime_device_supported
    raise DegirumException(
degirum.exceptions.DegirumException: Invalid format of supported device list 'HAILO8' for model '/home/rasp5/DeGirum_hailo_examples/models/yolov11m/yolov11m.hef'

Do you have any suggestions as to what I can do to get my model running? Does OutputPostprocessType need some specific configuration for YOLO11 models?
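
For what it’s worth, the traceback above complains specifically about the format of the supported device list (‘HAILO8’), and elsewhere in this thread the field uses a RUNTIME/DEVICE form. A plausible fix, inferred from the error text rather than confirmed against DeGirum documentation, would be a DEVICE section like:

```json
"DEVICE": [
    {
        "DeviceType": "HAILO8",
        "RuntimeAgent": "HAILORT",
        "SupportedDeviceTypes": "HAILORT/HAILO8"
    }
]
```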

I checked the support you gave other users in the following thread (User Guide 3: Simplifying Object Detection on a Hailo Device Using DeGirum PySDK - #48 by Loi_Tran). I’ve made some updates to my files. I now have the following path: /home/rasp5/DeGirum_hailo_examples/models/yolov11m. Inside it, I have:

  • labels_coco.json (I changed the name, but it is the same labels file as “labels_yolov8n_relu6_coco.json”)

  • yolov11m.hef

  • yolov11m.json :

{
    "ConfigVersion": 10,
    "Checksum": "69420",
    "DEVICE": [
        {
            "DeviceType": "HAILO8",
            "RuntimeAgent": "HAILORT",
            "SupportedDeviceTypes": "HAILORT/HAILO8L"
        }
    ],
    "PRE_PROCESS": [
        {
            "InputN": 1,
            "InputH": 640,
            "InputW": 640,
            "InputC": 3,
            "InputQuantEn": true,
            "InputPadMethod": "letterbox",
            "InputResizeMethod": "bilinear"
        }
    ],
    "MODEL_PARAMETERS": [
        {
            "ModelPath": "yolov11m.hef"
        }
    ],
    "POST_PROCESS": [
        {
            "OutputPostprocessType": "Detection",
            "PythonFile": "run3.py",
            "OutputNumClasses": 80,
            "LabelsPath": "labels_coco.json",
            "OutputConfThreshold": 0.3  
        }
    ]
}
  • run3.py :

import cv2
import degirum as dg
import degirum_tools


model = dg.load_model(
    model_name="yolov11m",  
    inference_host_address="@local",  
    zoo_url="/home/rasp5/DeGirum_hailo_examples/models",  
    token='',
    device_type=["HAILORT/HAILO8"] #only HAILO8?
)

cap = cv2.VideoCapture(0)

if not cap.isOpened():
    print("Error opening USB camera.")
    exit()

print("Starting YOLOv11m + Hailo (local). Press 'q' to exit.")

while True:
    ret, frame = cap.read()
    if not ret:
        print("Frame not captured.")
        break


    result = model(frame)

    cv2.imshow("YOLOv11m Hailo", result.image_overlay)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Hi @Luiz_Mageste
Where did you get the yolov11m.hef from?

I got “yolov11m” from hailo_model_zoo/docs/public_models/HAILO8/HAILO8_object_detection.rst at master · hailo-ai/hailo_model_zoo · GitHub

Here’s a screenshot of my current screen:

@shashi, new update… still getting an error:


I did a similar test with a direct download from the DeGirum AI Hub, model “yolo11s_silu_coco--640x640_quant_hailort_hailo8_1”. That model worked perfectly. I then tried to apply the same settings to my “yolov11m” model above, changing only fields such as “model_name=” in the Python script and “ModelPath” in the .json, as you can see in the images above, but that was not a success…

@Luiz_Mageste
Since the model is from the Hailo model zoo, you need a Python postprocessor file. Please see User Guide 3: Simplifying Object Detection on a Hailo Device Using DeGirum PySDK