I am working on integrating object detection and lane detection using Hailo-8 on Raspberry Pi 5. I have modified the hailo-rpi5-examples to use YOLOv8m.hef for object detection and ufld_v2.hef for lane detection. However, I am facing issues with loading the HEF model properly in my custom pipeline.
Issue Details:
I tried loading the HEF model in my custom scripts (ultrafastLaneDetectorV2.py and yoloDetector.py) using: self.engine = model_path # Temporary approach
However, this does not work: assigning the file path to self.engine never actually loads the model or runs inference.
When I checked the original object detection example in hailo-rpi5-examples, I saw that it correctly loads the HEF model.
I want to use the same method that Hailo's original example uses to load the model, but I am not sure:
What exact function is used in the example to load the HEF model?
How can I integrate the same function into my custom pipeline?
When I try running inference, I get: AttributeError: 'str' object has no attribute 'engine_inference'
which indicates that the model is not being loaded correctly.
What I Have Tried:
I searched for functions like .load_hef, .Hef(), or .HEF_load(), but they do not seem to exist in hailo.
Is there a specific function in Hailoās SDK that correctly loads HEF models for inference?
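Based on Hailo's HailoRT Python API tutorials, I expected the loading pattern to look roughly like the sketch below (the dummy input is just for illustration, and I have not been able to verify this against the example code):

import numpy as np
from hailo_platform import (HEF, VDevice, ConfigureParams, HailoStreamInterface,
                            InferVStreams, InputVStreamParams, OutputVStreamParams)

hef = HEF("yolov8m.hef")  # parse the compiled model file
with VDevice() as target:  # acquire the Hailo-8 device
    params = ConfigureParams.create_from_hef(hef=hef, interface=HailoStreamInterface.PCIe)
    network_group = target.configure(hef, params)[0]
    in_params = InputVStreamParams.make(network_group)
    out_params = OutputVStreamParams.make(network_group)
    info = hef.get_input_vstream_infos()[0]
    dummy = np.zeros((1, *info.shape), dtype=np.uint8)  # one blank frame (N, H, W, C)
    with InferVStreams(network_group, in_params, out_params) as pipeline:
        with network_group.activate(network_group.create_params()):
            results = pipeline.infer({info.name: dummy})  # dict: output name -> ndarray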
I would really appreciate any guidance on this!
Thanks in advance.
I'm having some trouble integrating my Vehicle CV ADAS project with the hailo-rpi5-examples using the DeGirum PySDK, and I was hoping you could help me out.
Here's the situation:
Model Zoo Setup:
I set up a local model zoo at /home/chinmay/hailo_models that contains the following files:
ufld_v2.hef
ufld_v2.json
yolov8m.hef
yolov8m.json
coco_label.txt
JSON Configuration (ufld_v2.json):
My JSON file for the lane detection model (ufld_v2.json) follows the layout described in the DeGirum guide. When I try to load the model from the zoo, I get:
degirum.exceptions.DegirumException: Model 'ufld_v2' is not found in model zoo '/home/chinmay/hailo_models'
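The load call that produces this error is essentially the following (simplified from my script):

import degirum as dg

# Load the lane detection model from the local zoo folder;
# "@local" runs inference directly on this machine.
lane_model = dg.load_model(
    model_name="ufld_v2",
    inference_host_address="@local",
    zoo_url="/home/chinmay/hailo_models",
)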
Could you please advise on what might be going wrong with the model zoo configuration or how I should correctly load my model locally using DeGirum PySDK? Any guidance or suggestions would be greatly appreciated.
Hi @Chinmay_Mulgund
Our guide had a minor omission, but we are unable to edit the already published guide. In the model JSON, please add the following field below ConfigVersion: "Checksum": "d6c4d0b9620dc2e5e215dfab366510a740fe86bf2c5d9bd2059a6ba3fe62ee63". The checksum value itself can be anything.
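For example, a minimal ufld_v2.json would start like this. The DEVICE, PRE_PROCESS, MODEL_PARAMETERS, and POST_PROCESS sections shown are placeholders (keep your existing values; the input dimensions here are not specific to your model):

{
    "ConfigVersion": 10,
    "Checksum": "d6c4d0b9620dc2e5e215dfab366510a740fe86bf2c5d9bd2059a6ba3fe62ee63",
    "DEVICE": [
        {
            "DeviceType": "HAILO8",
            "RuntimeAgent": "HAILORT"
        }
    ],
    "PRE_PROCESS": [
        {
            "InputType": "Image",
            "InputN": 1,
            "InputH": 320,
            "InputW": 800,
            "InputC": 3
        }
    ],
    "MODEL_PARAMETERS": [
        {
            "ModelPath": "ufld_v2.hef"
        }
    ],
    "POST_PROCESS": [
        {
            "OutputPostprocessType": "None"
        }
    ]
}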
Thank you for the clarification. I have added the "Checksum" field right after the "ConfigVersion" in my ufld_v2.json file (with an arbitrary value, as suggested). Now my model is correctly recognized by the local model zoo.
However, I'm now encountering an error when running my ADAS pipeline:
degirum.exceptions.DegirumException: Failed to perform model 'yolov8m' inference: ... HailoRT Runtime Agent: Failed to create VDevice, status = HAILO_DEVICE_IN_USE.
It appears that when I load both the lane detection model (ufld_v2.hef) and the object detection model (yolov8m.hef) concurrently, the Hailo device is already in use. I understand that the Hailo runtime supports only one model instance per device at a time. Could you please advise on the recommended way to share a single device between multiple models using the DeGirum PySDK? Alternatively, should I run the models sequentially, and if so, what is the best practice for releasing the device between models?
Hi @Chinmay_Mulgund
If you want multiple processes to use the device, you can do the following:
In a terminal, start the degirum server by typing: degirum server --zoo <path to model zoo folder>
In your code, set inference_host_address='localhost' and zoo_url=''
This setup uses a client-server protocol to run the models, allowing multiple processes to share the device. Alternatively, you can enable HailoRT's multi-process service so that multiple processes can use the same device.
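For example, both models can then be loaded like this (model names as in your zoo; a sketch, not a drop-in for your pipeline):

import degirum as dg

# Both models connect to the local degirum server, which arbitrates
# access to the single Hailo device between them.
lane_model = dg.load_model(
    model_name="ufld_v2",
    inference_host_address="localhost",
    zoo_url="",
)
object_model = dg.load_model(
    model_name="yolov8m",
    inference_host_address="localhost",
    zoo_url="",
)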
Thank you for your previous help. I have started the degirum server with:
degirum server --zoo /home/chinmay/hailo_models
and updated my code so that both my lane detection and object detection models use inference_host_address="localhost" and zoo_url="". However, I'm still encountering the error:
Failed to perform model 'yolov8m' inference: HailoRT Runtime Agent: Failed to create VDevice, status = HAILO_DEVICE_IN_USE.
It appears that when both models are loaded concurrently in my application, the device is claimed by one and then the second cannot acquire a virtual device.
Could you please advise on the best practice for sharing the device? Should I run them in separate processes using the server, or is there a recommended way to share a single device instance between models in one process?
Hi @Chinmay_Mulgund
Can you please run degirum sys-info in a terminal and let me know the output? I want to make sure you are using the latest version of our PySDK.
Hi Shashi, I followed the DeGirum installation guide, but I get the same error as before, a firmware version mismatch: DegirumException: Failed to perform model 'yolov8n_relu6_face...' ... [CRITICAL] Loading model failed
HailoRT Runtime Agent: Failed to configure infer model, status = HAILO_UNSUPPORTED_FW_VERSION.
Currently, DeGirum PySDK supports HailoRT versions 4.19.0 and 4.20.0.
Since your device firmware and HailoRT driver are at version 4.21.0, you're seeing compatibility errors.
You have two options:
Downgrade to HailoRT 4.20.0 and matching drivers
This will let you use PySDK immediately with your Hailo-8L device.
(You'll need to reinstall the 4.20.0 HailoRT driver and HailoRT libraries onto your system.)
Wait for the upcoming PySDK update
PySDK will support HailoRT 4.21.0 within the next week or two. If you prefer not to downgrade, you can wait for the update.
Hope this helps; let me know if you have any questions.
Hi Stephan, I ran hailo_ai_sw_suite_docker_run.sh and it gave me a couple of errors. One that I wasn't expecting was that you have to run Ubuntu 20.04 or 22.04 (not Raspberry Pi OS), but in the Raspberry Pi Imager, 24.04 is the oldest Ubuntu option available. I assume the latest updates will correct this?