Custom TAPPAS Python postprocessing, empty tensors

Hello - I am following the TAPPAS “write your own python post process” guide to write my own Python postprocess. I am currently using a custom YOLOv8s model that I converted to a HEF with the Hailo Model Zoo on an AWS EC2 instance.

The HEF works great with the default libyolo_hailortpp_post.so set as the postprocess via hailofilter: the default postprocess puts the bounding boxes in the correct positions, but the label “person” is applied to the object even though it is not a person. I presume this has to do with the postprocess applying some default label mapping to the class ID, so I want to write my own postprocess.
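
Ultimately what I want is my own labels on the detections. One option I am considering is to keep the default hailofilter (so the boxes stay correct) and add a small hailopython stage after it that only rewrites the labels on the detections it already produced. A rough sketch, assuming the hailo Python bindings expose get_objects_typed / remove_object / add_object on the ROI and get_class_id / get_bbox / get_confidence on HailoDetection, and with MY_LABELS as a hypothetical class-ID-to-label mapping for my model:

import hailo
from gsthailo import VideoFrame
from gi.repository import Gst

# Hypothetical mapping from my model's class IDs to my own labels
MY_LABELS = {0: "my_custom_label"}

def run(video_frame: VideoFrame):
    roi = video_frame.roi
    # Detections were already added by the default hailofilter postprocess
    for det in roi.get_objects_typed(hailo.HAILO_DETECTION):
        new_label = MY_LABELS.get(det.get_class_id(), det.get_label())
        # HailoDetection does not appear to expose a label setter, so replace
        # the detection with a copy that carries the new label
        roi.remove_object(det)
        roi.add_object(hailo.HailoDetection(bbox=det.get_bbox(),
                                            label=new_label,
                                            confidence=det.get_confidence()))
    return Gst.FlowReturn.OK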

But when I switch to hailopython and try to access video_frame.roi, the tensor “yolov8s/yolov8_nms_postprocess” appears to be empty.

Attributes and methods of video_frame:
['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getstate__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_buffer', '_roi', '_video_info', '_video_info_from_caps', 'buffer', 'map_buffer', 'numpy_array_from_buffer', 'roi', 'video_info']
Attributes and methods of ROI:
['__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getstate__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setstate__', '__sizeof__', '__str__', '__subclasshook__', 'add_object', 'add_tensor', 'clear_tensors', 'get_bbox', 'get_objects', 'get_objects_typed', 'get_scaling_bbox', 'get_stream_id', 'get_tensor', 'get_tensors', 'get_type', 'has_tensors', 'remove_object', 'set_bbox', 'set_scaling_bbox', 'set_stream_id']
Number of tensors: 1
Tensor name: yolov8s/yolov8_nms_postprocess
Tensor shape: (1, 100, 0)
Tensor content: []
Tensor type: <class 'numpy.ndarray'>
Tensor size: 0
Tensor methods and properties:
['__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getstate__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'data', 'features', 'fix_scale', 'get', 'get_full_percision', 'height', 'name', 'shape', 'size', 'vstream_info', 'width']
Detailed tensor data:
[]

Here is my current code setup for debugging this issue:

import hailo
from gsthailo import VideoFrame
from gi.repository import Gst
import numpy as np

def run(video_frame: VideoFrame):
    try:
        # Print attributes and methods of video_frame
        print("Attributes and methods of video_frame:")
        print(dir(video_frame))

        # Print ROI attributes and methods
        roi = video_frame.roi
        print("Attributes and methods of ROI:")
        print(dir(roi))

        # Print detailed tensor information
        tensors = roi.get_tensors()
        print(f"Number of tensors: {len(tensors)}")
        for tensor in tensors:
            tensor_name = tensor.name()
            tensor_shape = (tensor.height(), tensor.width(), tensor.features())
            tensor_array = np.array(tensor, copy=False)
            print(f"Tensor name: {tensor_name}")
            print(f"Tensor shape: {tensor_shape}")
            print(f"Tensor content: {tensor_array}")
            print(f"Tensor type: {type(tensor_array)}")
            print(f"Tensor size: {tensor_array.size}")

            # Print methods and properties of tensor
            print("Tensor methods and properties:")
            print(dir(tensor))

            # Print tensor data in detail
            print("Detailed tensor data:")
            print(tensor_array)

            # If the tensor contains data, process it
            # (assumes each row is [xmin, ymin, xmax, ymax, confidence, class_id])
            if tensor_array.size > 0:
                for detection in tensor_array:
                    if len(detection) < 6:
                        print(f"Invalid detection data: {detection}")
                        continue

                    xmin, ymin, xmax, ymax, confidence, class_id = detection[:6]

                    # Print the detection details for debugging
                    print(f"Detection: xmin={xmin}, ymin={ymin}, xmax={xmax}, ymax={ymax}, confidence={confidence}, class_id={class_id}")

                    # Create a bounding box
                    bbox = hailo.HailoBBox(xmin=float(xmin), ymin=float(ymin), width=float(xmax - xmin), height=float(ymax - ymin))
                    
                    # Create a detection object
                    label = "my_custom_label" 
                    detection_obj = hailo.HailoDetection(bbox=bbox, label=label, confidence=float(confidence))
                    
                    # Add detection to video frame
                    video_frame.roi.add_object(detection_obj)
    
    except Exception as e:
        print(f"Error during inspection: {e}")

    # Exit gracefully
    return Gst.FlowReturn.OK
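
In the pipeline this file is loaded by the hailopython element; if I'm reading the TAPPAS guide right, it calls a function named run by default, and the entry point can also be set explicitly with the function property, e.g. (the module path variable is just my own attribute):

pipeline_string += f"hailopython module={self.custom_postprocess_module} function=run qos=false ! "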

Any insight into what I am doing incorrectly would be greatly appreciated!

For some additional context, here is my pipeline construction:

    def get_pipeline_string(self):
        source_element = f"libcamerasrc name=src_0 auto-focus-mode=2 ! "
        source_element += f"video/x-raw, format=RGB, width=1536, height=864 ! "
        source_element += QUEUE("queue_src_scale")
        source_element += f"videoscale ! "
        source_element += f"video/x-raw, format=RGB, width=640, height=640, framerate=30/1 ! "

        pipeline_string = "hailomuxer name=hmux "
        pipeline_string += source_element
        pipeline_string += "tee name=t ! "
        pipeline_string += QUEUE("bypass_queue", max_size_buffers=20) + "hmux.sink_0 "
        pipeline_string += "t. ! " + QUEUE("queue_hailonet")
        pipeline_string += "videoconvert n-threads=3 ! "
        pipeline_string += f"hailonet hef-path={self.hef_path} batch-size={self.batch_size} {self.thresholds_str} force-writable=true ! "
        # pipeline_string += QUEUE("queue_hailofilter")
        # pipeline_string += f"hailofilter so-path={self.default_postprocess_so} qos=false ! "
        pipeline_string += f"hailopython module={self.custom_postprocess_module} qos=false ! "
        pipeline_string += QUEUE("queue_hmuc") + " hmux.sink_1 "
        pipeline_string += "hmux. ! " + QUEUE("queue_hailo_python")
        pipeline_string += QUEUE("queue_user_callback")
        pipeline_string += f"identity name=identity_callback ! "
        pipeline_string += QUEUE("queue_hailooverlay")
        pipeline_string += f"hailooverlay ! "
        pipeline_string += QUEUE("queue_videoconvert")
        pipeline_string += f"videoconvert n-threads=3 qos=false ! "
        pipeline_string += QUEUE("queue_hailo_display")
        pipeline_string += f"fpsdisplaysink video-sink=autovideosink name=hailo_display sync=false text-overlay=False signal-fps-measurements=true "
        print(pipeline_string)
        return pipeline_string
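
For reference, the QUEUE(...) helper used above is essentially the one from the hailo-rpi5-examples basic pipelines; a sketch of what it expands to (the exact defaults may differ in my copy):

def QUEUE(name, max_size_buffers=3, max_size_bytes=0, max_size_time=0, leaky="no"):
    # Builds a queue element string, e.g.
    # "queue name=queue_src_scale leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! "
    # so it can be concatenated directly into the pipeline string.
    return (
        f"queue name={name} leaky={leaky} "
        f"max-size-buffers={max_size_buffers} "
        f"max-size-bytes={max_size_bytes} "
        f"max-size-time={max_size_time} ! "
    )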

Okay, just for a little more context:

Platform: Raspberry Pi 5
Accelerator: Hailo8L

The model is a YOLOv8s, converted to ONNX and then compiled to a HEF using the Hailo Model Zoo (HMZ).

I understand now that the Python API is not yet fully supported on Raspberry Pi, so perhaps that is why I am experiencing this behavior?

Sorry for the shameless bump - but just trying to get some clarity. Very excited about the Hailo8L and the RPi AI Kit!

Hey @nick1 ,

Welcome to the Hailo community!

I’m glad to hear you’re excited to use the Hailo AI kit.

The issue you’re experiencing seems to be related to the Python API. I will look into this and provide an update.

Just keep in mind that the Python API will be available in the next release of the Hailo packages for Raspberry Pi OS.

Please let me know if you have any other questions or if there’s anything else I can assist you with.

Thank you for the reply

Just so I understand, the release of the Python API for Raspberry Pi is quite tentative, yes? I am hoping to use it right away, as I’ve got several of the AI Kits now for my projects.

Just coming back to say that I took the time to dig deeper into yolo_hailortpp.cpp, and it started to make a lot more sense. I'm now running my own C++ function as the postprocess, using function-name on hailofilter in the GStreamer pipeline setup.
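
For anyone who lands here later, the relevant pipeline line now looks roughly like this (the .so path variable and the exported function name below are placeholders for my own build; function-name is a standard hailofilter property):

pipeline_string += f"hailofilter so-path={self.custom_postprocess_so} function-name=my_yolov8s_post qos=false ! "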

Awesome!