Issue with Loading HEF Model in Custom Pipeline

Hi everyone,

I am working on integrating object detection and lane detection using Hailo-8 on Raspberry Pi 5. I have modified the hailo-rpi5-examples to use YOLOv8m.hef for object detection and ufld_v2.hef for lane detection. However, I am facing issues with loading the HEF model properly in my custom pipeline.

Issue Details:

I tried loading the HEF model in my custom scripts (ultrafastLaneDetectorV2.py and yoloDetector.py) using:
self.engine = model_path # Temporary approach
However, this does not work: it merely stores the path string and never actually loads the model or runs inference.
When I checked the original object detection example in hailo-rpi5-examples, I saw that it correctly loads the HEF model.

I want to use the same method that Hailo’s original example uses to load the model, but I am not sure:

  • What exact function is used in the example to load the HEF model?
  • How can I integrate the same function into my custom pipeline?

Error Message:

When I try running inference, I get:
AttributeError: 'str' object has no attribute 'engine_inference'
which indicates that the model is not being loaded correctly.

What I Have Tried:

I searched for functions like .load_hef, .Hef(), or .HEF_load(), but they do not seem to exist in the hailo module.
Is there a specific function in Hailo’s SDK that correctly loads HEF models for inference?
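For reference, the pattern I was expecting is something like the sketch below, pieced together from the HailoRT Python API tutorials (I am not confident the class and argument names are exactly right, which is partly why I am asking):

from hailo_platform import (HEF, VDevice, HailoStreamInterface, ConfigureParams,
                            InputVStreamParams, OutputVStreamParams, InferVStreams,
                            FormatType)

# Parse the compiled model file
hef = HEF("yolov8m.hef")

with VDevice() as target:
    # Configure the device for this HEF over PCIe
    configure_params = ConfigureParams.create_from_hef(hef=hef, interface=HailoStreamInterface.PCIe)
    network_group = target.configure(hef, configure_params)[0]

    input_params = InputVStreamParams.make(network_group, format_type=FormatType.UINT8)
    output_params = OutputVStreamParams.make(network_group, format_type=FormatType.FLOAT32)

    with InferVStreams(network_group, input_params, output_params) as infer_pipeline:
        with network_group.activate(network_group.create_params()):
            # input_data maps each input vstream name to a numpy array
            results = infer_pipeline.infer(input_data)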

I would really appreciate any guidance on this!
Thanks in advance.

Hi @Chinmay_Mulgund
Can you take a look at our PySDK, a Python package we developed to simplify application development? See: Simplifying Edge AI Development with DeGirum PySDK and Hailo
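For example, once PySDK is installed, loading a model and running inference takes only a few lines. Here is a minimal sketch (the model name, zoo path, and image file below are placeholders):

import degirum as dg

# Load a model from a local model zoo folder
model = dg.load_model(
    model_name="yolov8m",
    inference_host_address="@local",
    zoo_url="/path/to/model/zoo"
)

# Run inference on an image; the result object carries the detections
result = model("example.jpg")
print(result.results)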

Hi Shashi,

I’m having some trouble integrating my Vehicle CV ADAS project with the Hailo-rpi5-examples using the DeGirum PySDK, and I was hoping you could help me out.

Here’s the situation:

  1. Model Zoo Setup:
    I set up a local model zoo at /home/chinmay/hailo_models that contains the following files:
  • ufld_v2.hef
  • ufld_v2.json
  • yolov8m.hef
  • yolov8m.json
  • coco_label.txt
  2. JSON Configuration (ufld_v2.json):
    My JSON file for the lane detection model (ufld_v2.json) looks like this:
{
    "ConfigVersion": 10,
    "DEVICE": [
        {
            "DeviceType": "HAILO8",
            "RuntimeAgent": "HAILORT",
            "SupportedDeviceTypes": "HAILORT/HAILO8"
        }
    ],
    "PRE_PROCESS": [
        {
            "InputType": "Tensor",
            "InputN": 1,
            "InputH": 320,
            "InputW": 800,
            "InputC": 3,
            "InputRawDataType": "DG_UINT8"
        }
    ],
    "MODEL_PARAMETERS": [
        {
            "ModelPath": "ufld_v2.hef"
        }
    ],
    "POST_PROCESS": [
        {
            "OutputPostprocessType": "None"
        }
    ]
}

(I made sure that the "ModelPath" is just the filename, not an absolute path.)

  3. Loading the Model:
    In my ultrafastLaneDetectorV2.py, I’m trying to load the lane detection model with the following code:

# at the top of ultrafastLaneDetectorV2.py
import degirum as dg

# inside the detector's __init__
self.engine = dg.load_model(
    model_name="ufld_v2",
    inference_host_address="@local",
    zoo_url="/home/chinmay/hailo_models"
)

However, I keep getting this error:

degirum.exceptions.DegirumException: Model 'ufld_v2' is not found in model zoo '/home/chinmay/hailo_models'

Could you please advise on what might be going wrong with the model zoo configuration or how I should correctly load my model locally using DeGirum PySDK? Any guidance or suggestions would be greatly appreciated.

Thank you very much for your help!

Hi @Chinmay_Mulgund
Our guide had a minor omission, but we are unable to edit the already published guide. In the model JSON, please add the following field below "ConfigVersion": "Checksum": "d6c4d0b9620dc2e5e215dfab366510a740fe86bf2c5d9bd2059a6ba3fe62ee63". The Checksum value itself can be anything; see the snippet below.
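For example, the top of your ufld_v2.json would then look like this (the rest of the file stays unchanged):

{
    "ConfigVersion": 10,
    "Checksum": "d6c4d0b9620dc2e5e215dfab366510a740fe86bf2c5d9bd2059a6ba3fe62ee63",
    ...
}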

Hi Shashi,

Thank you for the clarification. I have added the “Checksum” field right after the “ConfigVersion” in my ufld_v2.json file (with any value, as suggested). Now my model is correctly recognized by the local model zoo.

However, I’m now encountering an error when running my ADAS pipeline:

degirum.exceptions.DegirumException: Failed to perform model 'yolov8m' inference: ... HailoRT Runtime Agent: Failed to create VDevice, status = HAILO_DEVICE_IN_USE.

It appears that when I load both the lane detection model (ufld_v2.hef) and the object detection model (yolov8m.hef) concurrently, the Hailo device is already in use. I understand that the Hailo runtime supports only one model instance per device at a time. Could you please advise on the recommended way to share a single device between multiple models using the DeGirum PySDK? Alternatively, should I run the models sequentially, and if so, what is the best practice for releasing the device between models?

Hi @Chinmay_Mulgund
If you want multiple processes to use the device, you can do the following:

  1. In a terminal, start the DeGirum server by typing degirum server --zoo <path to model zoo folder>
  2. In your code, set inference_host_address='localhost' and zoo_url=''

This setup uses a client-server protocol to run models, allowing multiple processes to share the device, as shown in the sketch below. Alternatively, you can enable HailoRT’s multi-process service to let multiple processes use the same device.
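For example, with the server running, both of your models can be loaded like this (a minimal sketch; the model names must match the JSON files in the zoo folder you passed to the server):

import degirum as dg

# Both models connect to the local DeGirum server, which arbitrates
# access to the single Hailo-8 device between them
lane_model = dg.load_model(
    model_name="ufld_v2",
    inference_host_address="localhost",
    zoo_url=""
)
object_model = dg.load_model(
    model_name="yolov8m",
    inference_host_address="localhost",
    zoo_url=""
)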

Let me know if you need further help.

Hi Shashi,

Thank you for your previous help. I have started the degirum server with:

degirum server --zoo /home/chinmay/hailo_models

and updated my code so that both my lane detection and object detection models use inference_host_address="localhost" and zoo_url="". However, I’m still encountering the error:

Failed to perform model 'yolov8m' inference: HailoRT Runtime Agent: Failed to create VDevice, status = HAILO_DEVICE_IN_USE.

It appears that when both models are loaded concurrently in my application, the device is claimed by one and then the second cannot acquire a virtual device.

Could you please advise on the best practice for sharing the device? Should I run them in separate processes using the server, or is there a recommended way to share a single device instance between models in one process?

Hi @Chinmay_Mulgund
Can you please run degirum sys-info in a terminal and let me know the output? I want to make sure you are using the latest version of our PySDK.

(venv_hailo_rpi5_examples) chinmay@raspberrypi:~/hailo-rpi5-examples $ degirum sys-info
Devices:
  HAILORT/HAILO8:
  - '@Index': 0
    Board Name: Hailo-8
    Device Architecture: HAILO8
    Firmware Version: 4.20.0
    ID: '0000:01:00.0'
    Part Number: ''
    Product Name: ''
    Serial Number: ''
  N2X/CPU:
  - '@Index': 0
  - '@Index': 1
  TFLITE/CPU:
  - '@Index': 0
  - '@Index': 1
Software Version: 0.15.2