How can we customise Hailo RPi5 examples?

Hi, I’m trying to customize the detection cropper example (the plain detection example would also be fine). The problem is that the wrappers refer to shared objects for which we don’t have the source code, so I have no idea how to modify their behavior. In particular (as also suggested in your guides), I’m trying to run face detection plus age/gender analysis once a person has been detected.

Hi @elbarto
We developed PySDK, a Python package that simplifies the development of such pipelines. You can see an example of face detection followed by gender classification in the user guide we published: Face Detection + Gender Classification: Pipelining two models on Hailo devices
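For reference, a minimal sketch of such a two-model cascade (the model names, zoo path, and image file below are placeholders; adapt them to your setup):

import degirum as dg
import cv2

# Load both models from a model zoo (names and zoo path are placeholders)
face_model = dg.load_model(
    model_name="<your-face-detection-model>",
    inference_host_address="@local",
    zoo_url="<path-to-your-model-zoo>",
    token="",
)
gender_model = dg.load_model(
    model_name="<your-gender-classification-model>",
    inference_host_address="@local",
    zoo_url="<path-to-your-model-zoo>",
    token="",
)

image = cv2.imread("people.jpg")

# Stage 1: detect faces; stage 2: classify gender on each cropped face
face_result = face_model(image)
for det in face_result.results:
    x1, y1, x2, y2 = map(int, det["bbox"])
    gender_result = gender_model(image[y1:y2, x1:x2])
    print(det["bbox"], gender_result.results)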

Thanks for the reply.
However, I’ve already tried this solution, and it doesn’t work with the RPi camera as input, because the picamera2 and OpenCV libraries conflict.
The workaround was to set up an RTSP stream from picamera2, but that overloads the CPU, and that’s why I’m here trying to build everything up without DeGirum (unless RPi support is coming soon!).

Hi @Mattia_Sospetti
Other users have been able to use PySDK successfully with Picamera2. Here is sample code from another user who confirmed that it works:

import degirum as dg
import cv2
from picamera2 import Picamera2

# Define a frame generator: a function that yields frames from the Picamera2
def frame_generator():
    picam2 = Picamera2()

    # Configure the camera (optional: set the resolution or other settings)
    picam2.configure(picam2.create_preview_configuration({'format': 'RGB888'}))

    # Start the camera
    picam2.start()

    try:
        while True:
            # Capture a frame as a numpy array
            frame = picam2.capture_array()

            # Yield the frame
            yield frame
    finally:
        picam2.stop()  # Stop the camera when the generator is closed

# Define model parameters (replace these with your own values)
face_det_model_name = "scrfd_500m--640x640_quant_hailort_hailo8l_1"
inference_host_address = "@local"
zoo_url = "/home/pi/degirum/scrfd_500m--640x640_quant_hailort_hailo8l_1"
token = ''
# Load the object detection AI model from the model zoo
model = dg.load_model(
    model_name=face_det_model_name,
    inference_host_address=inference_host_address,
    zoo_url=zoo_url,
    token=token,
    overlay_color=(0, 255, 0)  # Green color for bounding boxes
)
# Process the video stream by AI model using model.predict_batch():
for result in model.predict_batch(frame_generator()):
    # Display the frame with AI annotations in a window named 'AI Inference'
    cv2.imshow("AI Inference", result.image_overlay)

    # Process GUI events and break the loop if 'q' key was pressed
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

# Destroy any remaining OpenCV windows after the loop finishes
cv2.destroyAllWindows()

@elbarto
Tagged the wrong user above.

https://forums.raspberrypi.com/viewtopic.php?t=372243

Unfortunately, there are a number of users (myself included) complaining that the imshow method just doesn’t work if you import the picamera2 library.
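One workaround could be to skip the OpenCV GUI entirely and write the annotated frames to a video file, since cv2.VideoWriter doesn’t go through the GUI backend that conflicts with picamera2. A minimal sketch (untested), reusing model and frame_generator from the snippet above:

import itertools
import cv2

writer = None
# Cap the run at 300 frames (~10 s at 30 fps) so the file gets finalized
for result in itertools.islice(model.predict_batch(frame_generator()), 300):
    frame = result.image_overlay
    if writer is None:
        # Create the writer lazily, once the frame size is known
        h, w = frame.shape[:2]
        writer = cv2.VideoWriter(
            "annotated.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 30, (w, h)
        )
    writer.write(frame)

if writer is not None:
    writer.release()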

I was here trying to find another solution, so I made an ideal pipeline string:

f'{source_pipeline} ! '
f'{detection_pipeline_wrapper} ! '  # body
f'{tracker_pipeline} ! '
f'{cropper_pipeline} ! '
f'{detection_pipeline_wrapper} ! '  # face
f'{cropper_pipeline} ! '
f'{inference_wrapper} ! '  # age/gender analysis
f'{user_callback_pipeline} ! '  # python data postprocessing
f'{display_pipeline}'
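Expanded into actual GStreamer syntax, I imagine it would look roughly like this (untested; every hef-path/so-path and function name is a placeholder, and the cropper/aggregator pairs follow the TAPPAS cascading-networks pattern):

pipeline = (
    "libcamerasrc ! video/x-raw,format=RGB,width=1280,height=720 ! "
    # body detection
    "hailonet hef-path=<person_det.hef> ! "
    "hailofilter so-path=<libperson_post.so> ! "
    "hailotracker name=tracker ! "
    # crop each tracked person and run face detection on the crops
    "hailocropper so-path=<libcrop.so> function-name=<create_crops> name=crop1 "
    "hailoaggregator name=agg1 "
    "crop1. ! queue ! agg1. "
    "crop1. ! queue ! hailonet hef-path=<face_det.hef> ! "
    "hailofilter so-path=<libface_post.so> ! agg1. "
    "agg1. ! "
    # crop each face and run age/gender analysis on the crops
    "hailocropper so-path=<libcrop.so> function-name=<create_crops> name=crop2 "
    "hailoaggregator name=agg2 "
    "crop2. ! queue ! agg2. "
    "crop2. ! queue ! hailonet hef-path=<age_gender.hef> ! agg2. "
    "agg2. ! hailooverlay ! "
    # hook for the python user callback, as in hailo-rpi5-examples
    "identity name=identity_callback ! "
    "autovideosink"
)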

I "just" have to figure out how to write all of this in C++ without many references…

@elbarto
Thanks for the clarification.
