How to get HailoGallery results into a Python callback?
Hey @mattia.pnet
Please note that the following code is provided as an example and has not been tested. However, it should give you a general idea of how you can structure your script:
import hailo_platform

def run_inference():
    runner = hailo_platform.InferenceRunner()
    runner.load_model("your_model.hef")

    # Assume input_data is prepared elsewhere
    input_data = ...  # Your input data here

    results = runner.run(input_data)
    return results

def result_callback(results):
    print("Inference results:", results)
    # Process results as needed
    processed_results = results  # Replace with actual processing
    return processed_results

def main():
    # Run inference
    results = run_inference()

    # Process results
    processed_results = result_callback(results)

    # Use processed results as needed
    print("Processed results:", processed_results)

if __name__ == "__main__":
    main()
Regards
I’ll try to add more details.
I’m running face recognition, and it’s working 100%. Now I want to get the person recognized by hailogallery in my Python callback. How can I extract this information from the Hailo library?
This is my callback.
def app_callback(pad, info, user_data):
    # Get the GstBuffer from the probe info
    buffer = info.get_buffer()
    # Check if the buffer is valid
    if buffer is None:
        return Gst.PadProbeReturn.OK

    # Using the user_data to count the number of frames
    user_data.increment()
    string_to_print = f"Frame count: {user_data.get_count()}\n"

    # Get the caps from the pad
    format, width, height = get_caps_from_pad(pad)

    # If user_data.use_frame is set to True, we can get the video frame from the buffer
    frame = None
    if user_data.use_frame and format is not None and width is not None and height is not None:
        # Get video frame
        frame = get_numpy_from_buffer(buffer, format, width, height)

    # Get the detections from the buffer
    roi = hailo.get_roi_from_buffer(buffer)
    detections = roi.get_objects_typed(hailo.HAILO_DETECTION)
    ....
    ....
Here’s an example of how you might implement the callback function for face recognition using the HailoGallery:
def app_callback(pad, info, user_data):
    # Get the GstBuffer from the probe info
    buffer = info.get_buffer()
    if buffer is None:
        return Gst.PadProbeReturn.OK

    # Increment the frame count
    user_data.increment()

    # Extract video frame details (format, width, height)
    format, width, height = get_caps_from_pad(pad)

    # Get detections from the buffer
    roi = hailo.get_roi_from_buffer(buffer)
    detections = roi.get_objects_typed(hailo.HAILO_DETECTION)

    # Loop through detections to find recognized faces
    for detection in detections:
        if detection.is_face_detection():  # Example method for checking face detection
            # Access metadata for the recognized person via HailoGallery
            recognized_person = detection.get_metadata('recognized_person')  # Replace with the correct method from the API
            if recognized_person:
                person_name = recognized_person.get_name()  # Assuming the API has a method to fetch the person's name
                print(f"Recognized Person: {person_name}")

    return Gst.PadProbeReturn.OK
Note: This code has not been tested.
For more details on how to use the HailoGallery and work with metadata, please refer to the TAPPAS User Guide, available at the Hailo Developer Zone: TAPPAS User Guide (See pages 128-132).
Best regards
I’m sorry, but it seems the detection object does not have those methods:
AttributeError: 'hailo.HailoDetection' object has no attribute 'is_face_detection'
AttributeError: 'hailo.HailoDetection' object has no attribute 'get_metadata'
Hi @omria,
any feedback on how to extract the person’s name from hailogallery into the Python callback?
The method get_metadata() is not an attribute of the 'hailo.HailoDetection' object.
Please find attached my pipeline.
Hi @mattia.pnet, I got face recognition working via a bash script. Can you share how you achieved this with Python? I think I can help you extract the person’s name. Cheers.
Hi @M_S, check out my script inside basic_pipelines: hailo_scripts/face_recognition.py at main · matzrm/hailo_scripts · GitHub
I tested it with -input rpi, and you have to fix all the path variables inside my script.
Cool. Let me look into it tonight. How did you generate that image of your pipeline by the way?
This was a royal pain, since unfortunately I could not find proper documentation for anything Hailo. But as promised, here is the callback that does what you needed. I hope it is helpful.
I am sure it can be done more efficiently. Please share if you manage to improve it.
# This is the callback function that will be called when data is available from the pipeline
def app_callback(pad, info, user_data):
    # Get the GstBuffer from the probe info
    buffer = info.get_buffer()
    if buffer is None:
        return Gst.PadProbeReturn.OK

    # Increment the frame count
    user_data.increment()

    # Extract video frame details (format, width, height)
    format, width, height = get_caps_from_pad(pad)

    # Get detections from the buffer
    roi = hailo.get_roi_from_buffer(buffer)
    detections = roi.get_objects_typed(hailo.HAILO_DETECTION)

    # Loop through detections
    for detection in detections:
        print("\n--- Detected Object Information ---")
        # Extract the state of the detection object
        try:
            detection_state = detection.__getstate__()
            bbox_main, class_id, label, confidence, other_objects = detection_state

            # Access 'other_objects', which contains the classification, unique ID, landmarks, etc.
            if isinstance(other_objects, tuple) and len(other_objects) > 2:
                additional_objects = other_objects[2]
                for obj in additional_objects:
                    # Check if the object is a HailoClassification to get the person's name
                    if isinstance(obj, hailo.HailoClassification):
                        try:
                            person_name = obj.get_label()
                            print(f"Recognized Person Name: {person_name}")
                        except Exception as e:
                            print(f"Error retrieving person name: {e}")
            else:
                print("No additional objects found in 'other_objects'.")
        except Exception as e:
            print(f"Error retrieving detection state: {e}")

    return Gst.PadProbeReturn.OK
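If anyone wants to improve on this: a possibly cleaner (untested) variant is to query the detection’s nested objects by type instead of unpacking __getstate__(). The sketch below assumes the hailo Python bindings expose get_objects_typed() on HailoDetection and that the gallery attaches the matched name as a HailoClassification of type "recognition_result"; that type string is an assumption and may differ between TAPPAS versions.

def extract_recognized_names(roi):
    # Collect the gallery labels attached to each detection in the ROI.
    names = []
    for detection in roi.get_objects_typed(hailo.HAILO_DETECTION):
        # A HailoDetection is itself a sub-ROI, so nested objects (classifications,
        # unique IDs, landmarks) can be queried by type.
        for classification in detection.get_objects_typed(hailo.HAILO_CLASSIFICATION):
            # Assumption: the gallery stores the person's name as the label of a
            # classification whose type is "recognition_result".
            if classification.get_classification_type() == "recognition_result":
                names.append(classification.get_label())
    return names

Inside the callback you could then call names = extract_recognized_names(roi) right after the ROI is fetched from the buffer.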
Thank you @M_S, I don’t know how you achieved this given the lack of documentation.
To print the pipeline, you can follow the official GitHub:
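In case the link is not handy: the standard GStreamer mechanism is to set GST_DEBUG_DUMP_DOT_DIR before the pipeline starts, dump a .dot file of the running pipeline, and render it with Graphviz. A minimal, untested sketch (substitute your own face-recognition pipeline for the placeholder one):

import os
os.environ.setdefault("GST_DEBUG_DUMP_DOT_DIR", "/tmp")  # must be set before Gst.init()

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Placeholder pipeline; replace with your actual face-recognition pipeline.
pipeline = Gst.parse_launch("videotestsrc num-buffers=30 ! fakesink")
pipeline.set_state(Gst.State.PLAYING)
pipeline.get_state(Gst.CLOCK_TIME_NONE)  # wait until the pipeline reaches PLAYING

# Writes /tmp/pipeline.dot describing the full pipeline topology.
Gst.debug_bin_to_dot_file(pipeline, Gst.DebugGraphDetails.ALL, "pipeline")

pipeline.set_state(Gst.State.NULL)

Then convert the dump with Graphviz: dot -Tpng /tmp/pipeline.dot -o pipeline.png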