Issue with Loading HEF Model in Custom Pipeline

Hi everyone,

I am working on integrating object detection and lane detection using Hailo-8 on Raspberry Pi 5. I have modified the hailo-rpi5-examples to use YOLOv8m.hef for object detection and ufld_v2.hef for lane detection. However, I am facing issues with loading the HEF model properly in my custom pipeline.

Issue Details:

I tried loading the HEF model in my custom scripts (ultrafastLaneDetectorV2.py and yoloDetector.py) using:
self.engine = model_path # Temporary approach
However, this does not work: it only stores the path as a string and never actually loads the model or runs inference.
When I checked the original object detection example in hailo-rpi5-examples, I saw that it correctly loads the HEF model.

I want to use the same method that Hailo’s original example uses to load the model, but I am not sure:

  • What exact function is used in the example to load the HEF model?
  • How can I integrate the same function into my custom pipeline?

Error Message:

When I try running inference, I get:
AttributeError: 'str' object has no attribute 'engine_inference'
which indicates that the model is not being loaded correctly.

What I Have Tried:

I searched for functions like .load_hef, .Hef(), or .HEF_load(), but they do not seem to exist in hailo.
Is there a specific function in Hailo’s SDK that correctly loads HEF models for inference?
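
For reference, the closest thing I could find is the HailoRT Python API (the hailo_platform package), where the HEF class appears to be what actually loads the compiled model. The following is only a sketch based on Hailo’s Python inference tutorial; I have not verified the exact signatures on my setup:

import numpy as np
from hailo_platform import (HEF, VDevice, ConfigureParams, HailoStreamInterface,
                            InferVStreams, InputVStreamParams, OutputVStreamParams, FormatType)

# Load the compiled model file (this appears to be the "load HEF" step)
hef = HEF("yolov8m.hef")

with VDevice() as device:
    # Configure the device with the HEF and get the network group
    configure_params = ConfigureParams.create_from_hef(hef, interface=HailoStreamInterface.PCIe)
    network_group = device.configure(hef, configure_params)[0]
    network_group_params = network_group.create_params()
    input_params = InputVStreamParams.make(network_group, format_type=FormatType.FLOAT32)
    output_params = OutputVStreamParams.make(network_group, format_type=FormatType.FLOAT32)

    # Run a single dummy frame through the model
    input_info = hef.get_input_vstream_infos()[0]
    frame = np.zeros((1, *input_info.shape), dtype=np.float32)
    with network_group.activate(network_group_params):
        with InferVStreams(network_group, input_params, output_params) as infer_pipeline:
            results = infer_pipeline.infer({input_info.name: frame})

Is this roughly the right direction, or does the rpi5 example load the HEF through the GStreamer hailonet element instead?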

I would really appreciate any guidance on this!
Thanks in advance.

Hi @Chinmay_Mulgund
Can you take a look at our PySDK, a Python package we developed to simplify application development: Simplifying Edge AI Development with DeGirum PySDK and Hailo

Hi Shashi,

I’m having some trouble integrating my Vehicle CV ADAS project with the Hailo-rpi5-examples using the DeGirum PySDK, and I was hoping you could help me out.

Here’s the situation:

  1. Model Zoo Setup:
    I set up a local model zoo at /home/chinmay/hailo_models that contains the following files:
  • ufld_v2.hef
  • ufld_v2.json
  • yolov8m.hef
  • yolov8m.json
  • coco_label.txt
  2. JSON Configuration (ufld_v2.json):
    My JSON file for the lane detection model (ufld_v2.json) looks like this:
{
    "ConfigVersion": 10,
    "DEVICE": [
        {
            "DeviceType": "HAILO8",
            "RuntimeAgent": "HAILORT",
            "SupportedDeviceTypes": "HAILORT/HAILO8"
        }
    ],
    "PRE_PROCESS": [
        {
            "InputType": "Tensor",
            "InputN": 1,
            "InputH": 320,
            "InputW": 800,
            "InputC": 3,
            "InputRawDataType": "DG_UINT8"
        }
    ],
    "MODEL_PARAMETERS": [
        {
            "ModelPath": "ufld_v2.hef"
        }
    ],
    "POST_PROCESS": [
        {
            "OutputPostprocessType": "None"
        }
    ]
}

(I made sure that the "ModelPath" is just the filename, not an absolute path.)

3. Loading the Model:
In my ultrafastLaneDetectorV2.py, I’m trying to load the lane detection model with the following code:

self.engine = dg.load_model(
    model_name="ufld_v2",
    inference_host_address="@local",
    zoo_url="/home/chinmay/hailo_models"
)

However, I keep getting this error:

degirum.exceptions.DegirumException: Model 'ufld_v2' is not found in model zoo '/home/chinmay/hailo_models'

Could you please advise on what might be going wrong with the model zoo configuration or how I should correctly load my model locally using DeGirum PySDK? Any guidance or suggestions would be greatly appreciated.

Thank you very much for your help!

Hi @Chinmay_Mulgund
Our guide had a minor omission, but we are unable to edit the already published guide. In the model JSON, please add the following field below ConfigVersion: "Checksum": "d6c4d0b9620dc2e5e215dfab366510a740fe86bf2c5d9bd2059a6ba3fe62ee63". The Checksum value can be anything.
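
For example, the top of your ufld_v2.json would then start like this (the rest of the file stays exactly as you have it):

{
    "ConfigVersion": 10,
    "Checksum": "d6c4d0b9620dc2e5e215dfab366510a740fe86bf2c5d9bd2059a6ba3fe62ee63",
    "DEVICE": [
        ...
    ]
}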

Hi Shashi,

Thank you for the clarification. I have added the "Checksum" field right after the "ConfigVersion" in my ufld_v2.json file (with any value, as suggested). Now my model is correctly recognized by the local model zoo.

However, I’m now encountering an error when running my ADAS pipeline:

degirum.exceptions.DegirumException: Failed to perform model 'yolov8m' inference: ... HailoRT Runtime Agent: Failed to create VDevice, status = HAILO_DEVICE_IN_USE.

It appears that when I load both the lane detection model (ufld_v2.hef) and the object detection model (yolov8m.hef) concurrently, the Hailo device is already in use. I understand that the Hailo runtime supports only one model instance per device at a time. Could you please advise on the recommended way to share a single device between multiple models using the DeGirum PySDK? Alternatively, should I run the models sequentially, and if so, what is the best practice for releasing the device between models?

Hi @Chinmay_Mulgund
If you want multiple processes to use the device, you can do the following:

  1. In a terminal, start degirum server by typing degirum server --zoo <path to model zoo folder>
  2. In the code, set inference_host_address='localhost' and zoo_url=''

This setup uses a client-server protocol to run models, allowing multiple processes to use the device. Alternatively, you can enable HailoRT’s multi-process service to have multiple processes use the same device.
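
For example, both of your existing load calls would only change the host address and zoo URL (a sketch based on the load_model call you already have; keep the model names matching your zoo):

import degirum as dg

# Both models talk to the local degirum server, which owns the Hailo device,
# so neither load call tries to create its own VDevice.
lane_model = dg.load_model(
    model_name="ufld_v2",
    inference_host_address="localhost",
    zoo_url=""
)
detection_model = dg.load_model(
    model_name="yolov8m",
    inference_host_address="localhost",
    zoo_url=""
)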

Let me know if you need further help.

Hi Shashi,

Thank you for your previous help. I have started the degirum server with:

degirum server --zoo /home/chinmay/hailo_models

and updated my code so that both my lane detection and object detection models use inference_host_address="localhost" and zoo_url="". However, I’m still encountering the error:

Failed to perform model 'yolov8m' inference: HailoRT Runtime Agent: Failed to create VDevice, status = HAILO_DEVICE_IN_USE.

It appears that when both models are loaded concurrently in my application, the device is claimed by one and then the second cannot acquire a virtual device.

Could you please advise on the best practice for sharing the device? Should I run them in separate processes using the server, or is there a recommended way to share a single device instance between models in one process?

Hi @Chinmay_Mulgund
Can you please run degirum sys-info in terminal and let me know the output? I want to make sure you are using latest version of our PySDK.

(venv_hailo_rpi5_examples) chinmay@raspberrypi:~/hailo-rpi5-examples $ degirum sys-info
Devices:
  HAILORT/HAILO8:
  - '@Index': 0
    Board Name: Hailo-8
    Device Architecture: HAILO8
    Firmware Version: 4.20.0
    ID: '0000:01:00.0'
    Part Number: ''
    Product Name: ''
    Serial Number: ''
  N2X/CPU:
  - '@Index': 0
  - '@Index': 1
  TFLITE/CPU:
  - '@Index': 0
  - '@Index': 1
Software Version: 0.15.2


Hi Shashi, I followed the DeGirum installation guide, but I get the same firmware-version mismatch error as before:
DegirumException: Failed to perform model 'yolov8n_relu6_face blah blah… [CRITICAL] Loading model failed
HailoRT Runtime Agent: Failed to configure infer model, status = HAILO_UNSUPPORTED_FW_VERSION.

Hello Michael,
It seems like your version of HailoRT does not match with the device’s firmware version.

Please install version 4.20.0 of the Hailo driver onto your system.

Could you also show me the outputs of hailortcli fw-control identify and degirum sys-info?

Thanks Stephan, it seems it wants HailoRT driver v4.21.0 (same as firmware). As requested:
[HailoRT] [warning] Unsupported firmware operation. Host: 4.20.0, Device: 4.21.0
Executing on device: 0001:01:00.0
Identifying board
Control Protocol Version: 2
Firmware Version: 4.21.0 (release,app,extended context switch buffer)
Logger Version: 0
Board Name: Hailo-8
Device Architecture: HAILO8L
Serial Number: HLDDLBB244200026
Part Number: HM21LB1C2LAE
Product Name: HAILO-8L AI ACC M.2 B+M KEY MODULE EXT TMP
and degirum sys-info:
Devices:
  HAILORT/HAILO8L:
  - '@Index': 0
    Board Name: Hailo-8
    Device Architecture: HAILO8L
    Firmware Version: 4.21.0
    ID: '0001:01:00.0'
    Part Number: HM21LB1C2LAE
    Product Name: HAILO-8L AI ACC M.2 B+M KEY MODULE EXT TMP
    Serial Number: "HLDDLBB244200026\x10HM21LB1C2LAE"
  N2X/CPU:
  - '@Index': 0
  TFLITE/CPU:
  - '@Index': 0
  - '@Index': 1
Software Version: 0.16.0

Hi Michael,

Thanks for sharing details.

Currently, DeGirum PySDK supports HailoRT versions 4.19.0 and 4.20.0.
Since your device firmware and HailoRT driver are at version 4.21.0, you’re seeing compatibility errors.

You have two options:

  1. Downgrade to HailoRT 4.20.0 and matching drivers
    This will let you use PySDK immediately with your Hailo-8L device.
    (You’ll need to reinstall the 4.20.0 HailoRT driver and HailoRT libraries onto your system.)
  2. Wait for the upcoming PySDK update
    PySDK will support HailoRT 4.21.0 within the next week or two. If you prefer not to downgrade, you can wait for the update.

Hope this helps, let me know if you have any questions

Ok thanks Stephan. I will wait for the upgrade

Hi Stephan, I ran hailo_ai_sw_suite_docker_run.sh and it gave me a couple of errors. One I wasn’t expecting is that it requires Ubuntu 20.04 or 22.04 (not Raspberry Pi OS), but the Raspberry Pi Imager only offers 24.04 as the "oldest" option. I assume the latest updates will correct this?

Hello @Michael_Michael

PySDK now supports HailoRT 4.21.0; you can upgrade to the latest version:
pip uninstall degirum -y
pip install degirum

This should now enable you to quickly develop applications for your Hailo device. Let me know if you have any other questions.

Excellent thanks Stephan