I am using the C header files for the hailort library, and GStreamer for capturing webcam footage. I’m getting frames from GStreamer with an appsink, and a video format of x-raw.
What I’m wondering is how I can convert the GStreamer frame into the input format that the .hef file expects. I’m using the model zoo provided yolov7 .hef file, which takes an input of Input yolov7/input_layer1 UINT8, NHWC(640x640x3)
Hey @nicholas.young
Let me help you convert those frames from GStreamer to work with your YOLOv7 .hef file. Here’s how you can do it step by step:
First, you’ll need to grab the frame from your GStreamer sample. This is pretty straightforward:
GstBuffer *buffer = gst_sample_get_buffer(sample);
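In case the sample itself isn’t wired up yet, pulling it from the appsink looks roughly like this (a blocking pull; the appsink pointer here is just assumed to be whatever element you got back from your pipeline):
#include <gst/app/gstappsink.h>
// Blocks until the appsink has a frame; returns NULL on EOS or shutdown.
// The caller owns the returned sample and must gst_sample_unref() it later.
GstSample *sample = gst_app_sink_pull_sample(GST_APP_SINK(appsink));
if (sample == NULL) {
    // pipeline is done, stop the capture loop
}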
Then, to actually work with the frame data, map the buffer like this:
GstMapInfo map;
if (!gst_buffer_map(buffer, &map, GST_MAP_READ)) {
    // mapping failed, drop this frame
}
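The width, height (and row stride) that the OpenCV code below needs come out of the sample’s negotiated caps. Assuming you link against gstreamer-video, something like this should do it:
#include <gst/video/video.h>
// Read the negotiated video format out of the sample's caps
GstCaps *caps = gst_sample_get_caps(sample);
GstVideoInfo info;
if (!gst_video_info_from_caps(&info, caps)) {
    // not raw video, bail out
}
int width  = GST_VIDEO_INFO_WIDTH(&info);
int height = GST_VIDEO_INFO_HEIGHT(&info);
int stride = GST_VIDEO_INFO_PLANE_STRIDE(&info, 0);  // bytes per row, may include padding
The stride matters because some webcams pad each row; passing it to the cv::Mat constructor below keeps the rows aligned.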
Now for the main conversion part. Since you’re working with a YOLOv7 model, you’ll need the frames to be exactly 640x640 and in RGB format. OpenCV makes this really easy:
// Wrap the raw mapped data in an OpenCV matrix (no copy is made);
// width, height and stride come from the video info above, and CV_8UC3
// assumes a 3-byte-per-pixel format such as BGR or RGB was negotiated
cv::Mat frame(height, width, CV_8UC3, (void*)map.data, stride);
// Resize it to match YOLOv7's requirements
cv::Mat resized_frame;
cv::resize(frame, resized_frame, cv::Size(640, 640));
// Convert to RGB only if the negotiated format is BGR; skip this step if
// the appsink is already handing you RGB frames
cv::cvtColor(resized_frame, resized_frame, cv::COLOR_BGR2RGB);
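By the way, one way to avoid guessing the color order at all is to pin the appsink caps to RGB when you build the pipeline, so videoconvert does the conversion for you. A rough sketch (the v4l2src device path and element names are just an example, not something from your setup):
// Request RGB directly at the appsink; the cvtColor step above then
// becomes unnecessary because the mapped data is already RGB
GstElement *pipeline = gst_parse_launch(
    "v4l2src device=/dev/video0 ! videoconvert ! "
    "video/x-raw,format=RGB ! appsink name=sink",
    NULL);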
Once you’ve got the frame in the right format, you can copy it to your Hailo input buffer:
memcpy(hailo_input_buffer, resized_frame.data, 640 * 640 * 3);
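For getting that buffer into the device, if you’re using the HailoRT C vstream API the write looks roughly like this (check hailort.h on your HailoRT version for the exact signature; input_vstream is assumed to be the input vstream you created for yolov7/input_layer1):
// Push one preprocessed 640x640x3 UINT8 frame into the input vstream
hailo_status status = hailo_vstream_write_raw_buffer(
    input_vstream, resized_frame.data, 640 * 640 * 3);
if (HAILO_SUCCESS != status) {
    // handle / log the write error
}
If you go that route you can hand resized_frame.data to the write call directly and skip the memcpy, since the Mat produced by cv::resize is stored contiguously.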
Don’t forget to clean up by unmapping the buffer and releasing the sample when you’re done:
gst_buffer_unmap(buffer, &map);
gst_sample_unref(sample);
A few quick tips:
- Make sure to check whether the buffer mapping succeeds (as in the snippet above)
- The final layout should be NHWC; a single cv::Mat stored as height x width x channels already matches that, so no transpose is needed
- Double-check that your pixel values stay in the UINT8 range (0-255)
Is there anything specific about the conversion process you’d like me to clarify?
Best Regards
Thank you! I am getting useful outputs now!