recompiled hefs on hailo8

I’m trying to run an inference pipeline with GStreamer on the RPi with a Hailo-8. When I run the pipeline, I’m consistently getting the errors below:

[2025-10-07 11:49:42.371] [17031] [HailoRT] [info] [hef.cpp:1929] [get_network_group_and_network_name] No name was given. Addressing all networks of default network_group: yolov5m_vehicles

[2025-10-07 11:49:42.372] [17023] [HailoRT] [error] [driver_os_specific.cpp:153] [convert_errno_to_hailo_status] Ioctl HAILO_VDMA_BUFFER_MAP failed due to invalid address

[2025-10-07 11:49:42.372] [17023] [HailoRT] [error] [hailort_driver.cpp:998] [vdma_buffer_map_ioctl] CHECK_SUCCESS failed with status=HAILO_INVALID_OPERATION(6) - Failed map vdma buffer, please make sure using compatible api(dma buffer or raw buffer)

I can successfully run hailo run <path_to_hef> with this hef file, so I don’t think it’s the hef file itself. Is there some way to get more information out of gstreamer to figure out what’s going on? Is this a stream shape or format issue?

Welcome to the Hailo Community!

To allow us to help, it would be useful if you could share some more information:

  • Hailo Dataflow Compiler version
  • HailoRT version
  • What output do you get when you run hailortcli parse-hef model.hef?
  • Your GStreamer pipeline (use </> Preformatted Text feature to make sure it is easy to read in the forum)
  • Did you run the official examples successfully?

GitHub - Hailo RPi5 Examples

GStreamer has some built-in features to review the pipeline. Try this:

Set an environment variable

export GST_DEBUG_DUMP_DOT_DIR=.

Launch your pipeline

gst-launch-1.0 videotestsrc ! autovideosink

Convert the *.dot files into PNG or PDF

dot -Tpng pipeline_PLAYING.dot -o pipeline.png
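Alongside the .dot dumps, raising GStreamer’s log level is another built-in way to see why caps negotiation or buffer allocation fails (this is standard GStreamer, not Hailo-specific):

```shell
# Level 3 prints errors, warnings, and FIXMEs from every element;
# higher levels (4-6) add progressively more detail.
GST_DEBUG=3 gst-launch-1.0 videotestsrc ! autovideosink

# Verbosity can also be set per debug category, e.g. only caps negotiation:
GST_DEBUG=GST_CAPS:5 gst-launch-1.0 videotestsrc ! autovideosink
```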

The following forum page from my colleague may be useful as well.

Hailo Community - How to debug GStreamer pipeline performance

Huge! Thanks for the pointer. I have been fiddling around with my pipeline to no avail as of yet, though this is what I’ve come up with:

From what I can tell, data is getting to the hailonet, but it might not be in the right format. I’m not sure how to format it correctly - I tried to use a capsfilter to ensure everything was the right shape, but maybe I need a separate convert step for the hef:

Architecture HEF was compiled for: HAILO8L
Network group name: yolov5m_vehicles, Multi Context - Number of contexts: 6
    Network name: yolov5m_vehicles/yolov5m_vehicles
    VStream infos:
        Input  yolov5m_vehicles/input_layer1 UINT8, F8CR(1080x1920x3)
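As a quick sanity check when chasing a caps mismatch, the raw buffer GStreamer hands to hailonet has to match what the input vstream expects. For a UINT8 1080x1920x3 frame, that works out to:

```shell
# Expected raw frame size for the input vstream: UINT8, height 1080, width 1920, 3 channels.
height=1080; width=1920; channels=3
expected=$((height * width * channels))   # 1 byte per UINT8 element
echo "expected input buffer size: ${expected} bytes"
```

If the buffers your source actually produces (e.g. still-encoded JPEG data, or a different resolution) are a different size, the vdma buffer map can fail the way shown in the log above.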

Hi @mike_f
What is the input source you are trying to use? Is it the RPi camera? (The libcamera plugin is not well supported.)
Regarding the gstreamer pipeline: you are using a jpeg/mjpeg input. This has to be decoded before you can use it for inference. Add a decodebin element to handle it. In addition, you might need to rescale the frame to fit Hailo’s input; you can use videoscale for this.
I suggest you start with our detection example in hailo-apps-infra/hailo_apps/hailo_app_python/apps/detection at main · hailo-ai/hailo-apps-infra · GitHub and continue from there. We have prepared gstreamer pipeline helper functions to help you build pipelines faster with “working” sub-pipelines; see hailo-apps-infra/doc/developer_guide/app_development.md at main · hailo-ai/hailo-apps-infra · GitHub
If there are still issues with the network itself, please share your build scripts.
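Putting the decode and rescale advice together, an untested sketch of such a pipeline (file names and the hef path are placeholders, based on the parse-hef output above) might look like:

```shell
# Sketch only - paths and caps are placeholders, adjust to your setup.
# decodebin handles the jpeg/mjpeg decode; videoscale + videoconvert adapt the
# frames to the 1920x1080 RGB input the HEF expects before they reach hailonet.
# A hailofilter postprocess element would normally sit between hailonet and
# hailooverlay to turn raw tensors into drawable detections.
gst-launch-1.0 filesrc location=input.mjpeg ! decodebin ! \
  videoscale ! videoconvert ! \
  'video/x-raw,format=RGB,width=1920,height=1080' ! \
  hailonet hef-path=yolov5m_vehicles.hef ! \
  hailooverlay ! videoconvert ! autovideosink
```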

I am using the RPi camera; is there a different source I should be using? Should I just use a USB camera instead? The INFERENCE_PIPELINE helper function is working great, but I’m getting new errors when I try to use the CROPPER_PIPELINE

I suggest checking our python example for integration with the rpi camera.
This runs python on top of gstreamer to push frames from picamera.
It has more features you can utilize.

You know, that exact example was where I started, but I found that it didn’t run on the version of hailort that I had installed on my pi. Additionally, it looks like that example doesn’t aggregate the cropped images back into the original image, but dumps them into an OCR sink postprocessing function that I haven’t been able to find or compile myself. I also don’t have the ocr_overlay .so file that (I would guess) receives the data from the postprocessor.

Do you know if those are available in any way other than compiling them from source in the tappas repo? I tried doing that as well, but I can’t even get Yocto to run after following their install instructions.

Can you explain what you are trying to do, and in which os?

I would be happy to replicate the original ALPR example. I’m just trying to get started and understand how a pipeline like this works, but I can’t even get the more complicated examples to work.

I’m using Raspberry Pi OS to actually perform the inference, but I’m developing on Ubuntu.

Hey @mike_f,

We’re planning to release an updated LPR version in hailo-apps-infra soon.

Which examples in apps-infra are not working?

In the meantime, I’d recommend copying the pipeline and running it through hailo-apps-infra rather than directly through TAPPAS. You can use the TAPPAS-core elements that are already installed on the RPi to build the pipeline - basically the same way it was done in the old LPR.

Let me know if you need any help with that!