Hello, I recently started fiddling with the examples and I'm trying to add simple overlays to the video playback (like drawing a rectangle). However, in the detection.py example (and maybe the other ones) the default playback video is clearly not coming out of cv2, since none of my cv2 text overlays show up, and I'm not sure what I should be changing. There's a display_user_data_frame function in gstreamer_app.py that uses cv2.imshow, and the display pipeline defaults to 'hailo_display', which I assume is just the video playback with the bounding boxes and labels. I get the feeling I simply have to change something in the pipeline so that cv2 becomes the actual playback, but I'm not quite getting how to do that, since I'm not experienced with Python/GStreamer. Am I right in assuming that the cv2 path in the example works fine if I simply use it as the playback? Help is greatly appreciated!
Alright, it took an embarrassingly long time to finally get it, but apparently all I had to do was add --use-frame (e.g. python basic_pipelines/detection.py --use-frame) to get a playback display showing the frames that are modified by cv2.
To prevent having two displays, I changed this in hailo_apps_infra/gstreamer_app.py:
self.video_sink = "fakesink"  # changed from "autovideosink"
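For anyone hunting for the spot later, this is the line in context with a note on why it works (the comment is mine; the exact surrounding code in GStreamerApp.__init__ may differ between versions):

```python
# hailo_apps_infra/gstreamer_app.py, inside GStreamerApp.__init__
# 'autovideosink' is what opens the hailo_display window; swapping it for
# 'fakesink' silently discards that output, so only the --use-frame / cv2
# window remains. Original value: "autovideosink".
self.video_sink = "fakesink"
```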
So now when I run the script I get a single display with cv2 drawing/overlaying things on top of the playback video (along with the bounding boxes), which is exactly what I was looking for so I can continue developing!
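In case it helps, here is roughly what my drawing code looks like in the app callback. It's a minimal sketch based on the pattern in basic_pipelines/detection.py; the helper names (get_caps_from_pad, get_numpy_from_buffer, user_data.use_frame, user_data.set_frame) are the ones in my copy of hailo_apps_infra, so double-check the import path and names against your version:

```python
import cv2
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

# Helper names below are taken from my copy of the examples; verify them
# against your installed hailo_apps_infra version.
from hailo_apps_infra.hailo_rpi_common import get_caps_from_pad, get_numpy_from_buffer


def app_callback(pad, info, user_data):
    buffer = info.get_buffer()
    if buffer is None:
        return Gst.PadProbeReturn.OK

    # The numpy frame is only available when the script is run with --use-frame
    frame = None
    fmt, width, height = get_caps_from_pad(pad)
    if user_data.use_frame and fmt is not None and width is not None and height is not None:
        frame = get_numpy_from_buffer(buffer, fmt, width, height)

    if frame is not None:
        # Example overlay: a rectangle plus a text label drawn with cv2
        cv2.rectangle(frame, (50, 50), (250, 200), (0, 255, 0), 2)
        cv2.putText(frame, "my overlay", (50, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
        # Convert RGB -> BGR before handing the frame back, since cv2.imshow expects BGR
        frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
        # set_frame hands the modified frame to the display thread
        # (the window drawn by display_user_data_frame via cv2.imshow)
        user_data.set_frame(frame)

    return Gst.PadProbeReturn.OK
```

The key point is that the frame only exists when --use-frame is passed, and set_frame is what pushes the modified frame to the cv2.imshow window.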
Credit to a couple of other forum posts that pointed me in the right direction.
IMHO this should be the default configuration; running the rpi5 examples and seeing cv2 calls do nothing on the playback is confusing when you're starting off, even though the solution turns out to be fully integrated already. Maybe I missed some documentation… Anyway, happy to have found the solution, and hopefully this helps others!