Hi,
I’d like to know whether there’s a way to completely eliminate `videoconvert` from my pipeline by offloading the color conversion (NV12 to RGB) onto the Hailo device itself. The model’s input resolution matches the camera’s native resolution. The current working pipeline, which includes `videoconvert`, is:
/usr/bin/gst-launch-1.0 -e rtspsrc location=<loc> user-id=<id> user-pw=<pw> is-live=true ! rtpjitterbuffer ! rtph264depay ! h264parse ! v4l2h264dec capture-io-mode=5 ! tiovxmemalloc pool-size=4 ! capsfilter caps="video/x-raw,format=(string)NV12;" ! videoconvert ! hailonet ...
I attempted to replace `videoconvert` with hybrid conversion (`nv12_to_rgb`), as outlined in the Dataflow Compiler User Guide. However, running the pipeline without `videoconvert` fails:
/usr/bin/gst-launch-1.0 -e rtspsrc location=<loc> user-id=<id> user-pw=<pw> is-live=true ! rtpjitterbuffer ! rtph264depay ! h264parse ! v4l2h264dec capture-io-mode=5 ! tiovxmemalloc pool-size=4 ! capsfilter caps="video/x-raw,format=(string)NV12;" ! hailonet ...
I receive the following errors:
gst_v4l2_buffer_pool_orphan: assertion 'bpool' failed
...
[HailoRT] [error] Ioctl HAILO_VDMA_BUFFER_MAP failed due to invalid address
...
[HailoRT] [error] Failed map vdma buffer, please make sure using compatible api (dma buffer or raw buffer)
...
[HailoRT] [error] Infer request callback failed with status = HAILO_INVALID_OPERATION(6)
...
Could you advise whether:
- On-chip NV12-to-RGB conversion is supported, and whether additional steps are required?
- There is a working example pipeline using `hailonet` that avoids `videoconvert`? (Like the one described here: AI Case Study: App Development with an Edge AI Device)
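For context, this is roughly the model script change I made when recompiling the HEF to enable the hybrid conversion. The exact command name and signature are my reading of the Dataflow Compiler User Guide’s input-conversion section, so please correct me if I misapplied it:

```
# model_script.alls -- applied during model compilation
# NOTE: command name/syntax taken from the DFC guide's hybrid-conversion
# section; treat the exact form as an assumption for my setup
yuv_to_rgb_conversion = input_conversion(nv12_to_rgb)
```

My understanding is that this should make the compiled network accept NV12 input directly and perform the RGB conversion on-chip, which is why I expected `hailonet` to negotiate NV12 caps without `videoconvert`.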
Thank you in advance for your support!