Bypassing `videoconvert` via On-Chip NV12 to RGB Conversion

Hi,

I’d like to know whether I can eliminate videoconvert from my pipeline entirely by offloading the color conversion (NV12 to RGB) onto the Hailo device itself.

The model’s input matches the camera’s native resolution. The current working pipeline includes videoconvert:

/usr/bin/gst-launch-1.0 -e rtspsrc location=<loc> user-id=<id> user-pw=<pw> is-live=true ! rtpjitterbuffer ! rtph264depay ! h264parse ! v4l2h264dec capture-io-mode=5 ! tiovxmemalloc pool-size=4 ! capsfilter caps="video/x-raw,format=(string)NV12;" ! videoconvert ! hailonet ...

I attempted to replace videoconvert with hybrid conversion (nv12_to_rgb) as outlined in the Dataflow Compiler User Guide. However, running the pipeline without videoconvert fails:

/usr/bin/gst-launch-1.0 -e rtspsrc location=<loc> user-id=<id> user-pw=<pw> is-live=true ! rtpjitterbuffer ! rtph264depay ! h264parse ! v4l2h264dec capture-io-mode=5 ! tiovxmemalloc pool-size=4 ! capsfilter caps="video/x-raw,format=(string)NV12;" ! hailonet ...

I receive the following error:

gst_v4l2_buffer_pool_orphan: assertion 'bpool' failed
...
[HailoRT] [error] Ioctl HAILO_VDMA_BUFFER_MAP failed due to invalid address
...
[HailoRT] [error] Failed map vdma buffer, please make sure using compatible api (dma buffer or raw buffer)
...
[HailoRT] [error] Infer request callback failed with status = HAILO_INVALID_OPERATION(6)
...

Could you advise on the following:

  1. Is on-chip NV12 to RGB conversion supported, and are additional steps required?
  2. Is there a working example pipeline using hailonet that avoids videoconvert? (Something like what is described here: KI-Fallstudie: App-Entwicklung mit Edge AI Device.)

Thank you in advance for your support!

Hey @florian,

The error occurs because videoconvert was removed from your pipeline, but the upstream element (v4l2h264dec) isn’t outputting buffers in a memory layout (dmabuf, mmap, etc.) that hailonet supports. Caps negotiation then fails, which typically surfaces as an incorrect color format or invalid strides being passed to hailonet; that is what the HAILO_VDMA_BUFFER_MAP “invalid address” error in your log indicates.
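To see where negotiation breaks, one option is to isolate the decoder output and print the caps each link agrees on. This is just a diagnostic sketch: fakesink discards the buffers, and the <loc>/<id>/<pw> placeholders are from your original command.

```shell
# Print negotiated caps on every link (-v) plus warnings/errors (GST_DEBUG=3),
# with fakesink standing in for hailonet so the decoder path can be checked
# in isolation. Compare v4l2h264dec's src caps with what hailonet accepts.
GST_DEBUG=3 gst-launch-1.0 -v -e \
  rtspsrc location=<loc> user-id=<id> user-pw=<pw> is-live=true ! \
  rtpjitterbuffer ! rtph264depay ! h264parse ! \
  v4l2h264dec capture-io-mode=5 ! tiovxmemalloc pool-size=4 ! fakesink
```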

1. Yes, Hailo supports this setup.

To make it work, compile your model with input conversion like this:

hailomz compile <model_name> --input-conversion nv12_to_rgb
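To double-check that the conversion was actually baked into the compiled model, hailortcli can print the HEF’s stream information. The model_name.hef path below is a placeholder for whatever file hailomz produced.

```shell
# Inspect the compiled HEF: with nv12_to_rgb input conversion, the reported
# input stream format should reflect NV12 rather than plain RGB.
hailortcli parse-hef model_name.hef
```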

Also, make sure:

  • The decoder (v4l2h264dec) is configured to output dmabuf-compatible buffers.
  • The output format is exactly NV12 in a compatible layout, preferably dmabuf.
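You can verify both points without running the full pipeline by inspecting the elements’ pad templates. This is standard gst-inspect usage; the grep patterns just trim the output to the relevant sections.

```shell
# Which memory features and formats can the decoder produce?
gst-inspect-1.0 v4l2h264dec | grep -A 12 "SRC template"
# Which formats does hailonet accept on its sink pad?
gst-inspect-1.0 hailonet | grep -A 12 "SINK template"
```

If the decoder’s SRC template and hailonet’s SINK template share no common format/memory feature, the two elements cannot link directly and an intermediate element is unavoidable.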

2. There’s no official Hailo example for this, but here’s how I’d build your pipeline based on what you shared:

... ! v4l2h264dec capture-io-mode=5 ! \
tiovxmemalloc pool-size=4 ! \
capsfilter caps="video/x-raw(memory:DMABuf), format=(string)NV12;" ! \
hailonet ...

Just make sure you’ve handled step 1 above first; getting the memory layout and format right is key.

Let me know if you hit any roadblocks setting it up.