Need Help Capturing Unprocessed Frames in GStreamer (Hailo-8L, Raspberry Pi 5)

I’m integrating Hailo8 with a GStreamer pipeline for object detection and I’m encountering an issue. My goal is to capture raw, unprocessed video frames directly from the pipeline to use for dataset creation and labeling. However, I’m struggling to do this without affecting the performance of the object detection process.

Could anyone advise on how to modify the GStreamer pipeline to save these raw frames efficiently? I need the system to continue performing detection while also capturing these frames.

code:

import cv2
from gi.repository import Gst
# get_caps_from_pad and get_numpy_from_buffer come from the Hailo example helper module

def app_callback(pad, info, user_data):
    buffer = info.get_buffer()
    if buffer is None:
        # Nothing to process; let the pipeline continue
        return Gst.PadProbeReturn.OK

    format, width, height = get_caps_from_pad(pad)
    if format is not None and width is not None and height is not None:
        frame = get_numpy_from_buffer(buffer, format, width, height)
        if frame is not None:
            # Convert frame from RGB to BGR so OpenCV writes the colors correctly
            frame_bgr = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
            frame_path = f"raw_frame_{user_data.get_count()}.png"
            cv2.imwrite(frame_path, frame_bgr)
            user_data.increment()

    # A pad probe callback must always return a Gst.PadProbeReturn value
    return Gst.PadProbeReturn.OK

Hey @nasirmohidin93,

To capture raw, unprocessed video frames from your GStreamer pipeline for dataset creation and labeling without affecting the performance of the object detection process, you can try the following approach (note that this has not been tested):

  1. Modify the GStreamer pipeline:
  • Add a tee element after the source but before any processing.
  • Create a separate branch from this tee for saving raw frames (see the pipeline sketch after this list).
  2. Use queue elements to buffer frames and prevent blocking.
  3. Implement a separate thread or process for saving frames to avoid slowing down the main pipeline.
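
As a rough, untested sketch, the modified pipeline could look something like this (the element names raw_tee and raw_sink are placeholders, libcamerasrc and the caps are assumptions about your source, and the fakesink on the first branch only stands in for your existing detection elements):

pipeline_str = (
    "libcamerasrc ! videoconvert ! video/x-raw,format=RGB,width=1280,height=720 ! "
    "tee name=raw_tee "
    # Branch 1: your existing detection branch goes here (fakesink is only a stand-in)
    "raw_tee. ! queue max-size-buffers=5 leaky=downstream ! fakesink sync=false "
    # Branch 2: raw frames for dataset capture, behind its own queue so it cannot block detection
    "raw_tee. ! queue max-size-buffers=5 leaky=downstream ! fakesink name=raw_sink sync=false"
)
pipeline = Gst.parse_launch(pipeline_str)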

Here’s an example of a worker method that continuously pulls frames from a queue and saves them to disk:

# Requires "import queue" and "import cv2" at module level.
def save_frames(self):
    while True:
        try:
            # Wait up to one second for the next (frame_number, frame) pair
            frame_number, frame = self.raw_frame_queue.get(timeout=1)
            # Frames arrive as RGB; convert to BGR so OpenCV writes the colors correctly
            frame_bgr = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
            frame_path = f"raw_frame_{frame_number}.png"
            cv2.imwrite(frame_path, frame_bgr)
        except queue.Empty:
            # No frame arrived within the timeout; keep waiting
            continue
        except Exception as e:
            print(f"Error saving frame: {e}")

This method runs in a separate process, continuously checking a queue for new frames to save. When a frame is available, it converts it to BGR color space and saves it as a PNG file.
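
For example (an untested sketch; raw_frame_queue and save_frames match the method above, while the maxsize value and the attribute name saver_process are arbitrary), starting the worker from your application's setup code could look like this:

import multiprocessing

# Bounded queue so a slow disk cannot grow memory without limit
self.raw_frame_queue = multiprocessing.Queue(maxsize=30)
# daemon=True so the saver is cleaned up when the main application exits
self.saver_process = multiprocessing.Process(target=self.save_frames, daemon=True)
self.saver_process.start()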

To implement this, you’d need to:

  1. Set up a multiprocessing Queue in your main application.
  2. Modify your pipeline to include the new tee and branch.
  3. Add a probe to the new branch that puts frames into the Queue (a rough sketch follows this list).
  4. Start a separate process that runs this save_frames method.
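
Such a probe could look roughly like the following untested sketch (it assumes the raw branch ends in the element named raw_sink from the pipeline sketch above, that user_data is your callback class instance extended with the raw_frame_queue created earlier, and that raw_branch_callback is just a placeholder name):

def raw_branch_callback(pad, info, user_data):
    buffer = info.get_buffer()
    if buffer is None:
        return Gst.PadProbeReturn.OK

    format, width, height = get_caps_from_pad(pad)
    if format is None or width is None or height is None:
        return Gst.PadProbeReturn.OK

    frame = get_numpy_from_buffer(buffer, format, width, height)
    if frame is not None:
        try:
            # Drop the frame instead of blocking if the saver process cannot keep up
            user_data.raw_frame_queue.put_nowait((user_data.get_count(), frame))
            user_data.increment()
        except queue.Full:
            pass
    return Gst.PadProbeReturn.OK

# Attach the probe to the sink pad of the raw branch's final element
raw_sink = pipeline.get_by_name("raw_sink")
raw_sink.get_static_pad("sink").add_probe(Gst.PadProbeType.BUFFER, raw_branch_callback, user_data)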

This approach should allow you to capture raw frames efficiently without significantly impacting the performance of your object detection process.

Best Regards