User Guide 3: Simplifying Object Detection on a Hailo Device Using DeGirum PySDK

@kim_jiseob
Glad to hear that you were able to use our AI hub inference. If possible, we would still like to help you solve the issue with running locally.

Thanks @shashi. Is it possible to calculate the mAP of a custom model locally only? The fact that the value came from the cloud just means I entered a value at random and confirmed that it worked.

@kim_jiseob
It is possible to evaluate locally as well. There is definitely some setup issue on your side that we need to debug.

Yes @shashi. The problem with the model path not being recognized is the same one I commented on in Guide 2. Here is my log:
['yolov8n']
dict_keys(['DUMMY/DUMMY', 'HAILORT/HAILO8L', 'N2X/CPU', 'TFLITE/CPU'])

There are hef and json files in the directory.
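For reference, a model list like the one in the log above can be obtained by connecting to a local model zoo. A minimal sketch, assuming the standard PySDK connect/list_models calls (the zoo path is a placeholder):

import degirum as dg

# Connect to the local inference host with a local zoo directory (placeholder path)
zoo = dg.connect(dg.LOCAL, zoo_url='path/to/model/zoo')

# Prints the model names found in the zoo directory, e.g. ['yolov8n']
print(zoo.list_models())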

@kim_jiseob
In your case, the model list is not empty: yolov8n is actually listed as an available model.

When I set the versions of hailort, hailofw, and hailo-dkms to 4.19.0, inference was successful with my custom model. Thank you so much @shashi.

Hi @kim_jiseob
Amazing. Thanks for letting me know. Please feel free to reach out if you need any further help.


Good afternoon. I have encountered the following issue.

I have the following files:

  • HailoDetectionYolo.py
  • labels_coco.json
  • yolov11n.hef
  • yolov11n.json

in this directory:
yolo_check_5/zoo_url/yolov11n

However, when I use the following script:

import degirum as dg
from pprint import pprint

# Load the model from the model zoo.
# Replace '<path_to_model_zoo>' with the directory containing your model assets.
model = dg.load_model(
    model_name='yolov11n',
    inference_host_address='@local',
    zoo_url='yolo_check_5/zoo_url/yolov11n'
)

I get the following error and can’t figure out what the problem is:

Traceback (most recent call last):
  File "er.py", line 6, in <module>
    model = dg.load_model(
            ^^^^^^^^^^^^^^
  File "degirum_env/lib/python3.11/site-packages/degirum/__init__.py", line 220, in load_model
    return zoo.load_model(model_name, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "degirum_env/lib/python3.11/site-packages/degirum/log.py", line 59, in wrap
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "degirum_env/lib/python3.11/site-packages/degirum/zoo_manager.py", line 266, in load_model
    model = self._zoo.load_model(model_name)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "degirum_env/lib/python3.11/site-packages/degirum/log.py", line 59, in wrap
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "degirum_env/lib/python3.11/site-packages/degirum/_zoo_accessor.py", line 309, in load_model
    model_params = self.model_info(model)
                   ^^^^^^^^^^^^^^^^^^^^^^
  File "degirum_env/lib/python3.11/site-packages/degirum/log.py", line 59, in wrap
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "degirum_env/lib/python3.11/site-packages/degirum/_zoo_accessor.py", line 127, in model_info
    raise DegirumException(
degirum.exceptions.DegirumException: Model 'yolov11n' is not found in model zoo 'yolo_check_5/zoo_url/yolov11n'

I have tried many solutions, but I haven’t been able to resolve this issue.

Hi @Anton_Sema
Can you please check that the supported devices in the model JSON match the device you have?
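For reference, a quick way to inspect this is to read the model JSON and print its DEVICE section. A minimal sketch using only the standard library; the path below is an example and should be adjusted to your setup:

import json

# Path to the model JSON (example path, adjust to your setup)
model_json = "yolo_check_5/zoo_url/yolov11n/yolov11n.json"

with open(model_json) as f:
    cfg = json.load(f)

# Print the device entries; SupportedDeviceTypes should match your hardware,
# e.g. "HAILORT/HAILO8L" for a Hailo-8L module.
for dev in cfg.get("DEVICE", []):
    print(dev.get("DeviceType"), dev.get("RuntimeAgent"), dev.get("SupportedDeviceTypes"))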

I have a Hailo8L. Here is my yolov11n.json:

{
  "ConfigVersion": 10,
  "DEVICE": [
    {
      "DeviceType": "HAILO8L",
      "RuntimeAgent": "HAILORT",
      "SupportedDeviceTypes": "HAILORT/HAILO8L"
    }
  ],
  "PRE_PROCESS": [
    {
      "InputType": "Image",
      "InputN": 1,
      "InputH": 640,
      "InputW": 640,
      "InputC": 3,
      "InputPadMethod": "letterbox",
      "InputResizeMethod": "bilinear",
      "InputQuantEn": true
    }
  ],
  "MODEL_PARAMETERS": [
    {
      "ModelPath": "yolov11n.hef"
    }
  ],
  "POST_PROCESS": [
    {
      "OutputPostprocessType": "Detection",
      "PythonFile": "HailoDetectionYolo.py",
      "OutputNumClasses": 1,
      "LabelsPath": "labels_coco.json",
      "OutputConfThreshold": 0.2
    }
  ]
}

@Anton_Sema
Please add the line below to your JSON, right after the ConfigVersion line:
"Checksum": "d6c4d0b9620dc2e5e215dfab366510a740fe86bf2c5d9bd2059a6ba3fe62ee63",

Thanks a lot for the help, but now I get the following error and I don't understand why. Here is my code:

import degirum as dg
from pprint import pprint
import cv2
import numpy as np

# Initialize the model from the model zoo
model = dg.load_model(
    model_name='yolov11n',
    inference_host_address='@local',
    zoo_url='/home/anton/yolo_check_5/zoo_url/yolov11n'
)

def resize_with_letterbox_image(image, target_height, target_width, padding_value=(0, 0, 0)):
    image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    h, w, c = image_rgb.shape
    scale = min(target_width / w, target_height / h)
    new_w = int(w * scale)
    new_h = int(h * scale)
    resized_image = cv2.resize(image_rgb, (new_w, new_h), interpolation=cv2.INTER_LINEAR)
    letterboxed_image = np.full((target_height, target_width, c), padding_value, dtype=np.uint8)
    pad_top = (target_height - new_h) // 2
    pad_left = (target_width - new_w) // 2
    letterboxed_image[pad_top:pad_top+new_h, pad_left:pad_left+new_w] = resized_image
    return letterboxed_image, scale, pad_top, pad_left

video_path = "path/to/your/video.mp4"
cap = cv2.VideoCapture(video_path)
if not cap.isOpened():
    print("Error: Could not open video.")
    exit()

target_height, target_width = 640, 640  # Remove batch dimension

while True:
    ret, frame = cap.read()
    if not ret:
        break

    processed_frame, scale, pad_top, pad_left = resize_with_letterbox_image(frame, target_height, target_width)

    # Perform inference with correctly shaped image (H, W, 3)
    inference_result = model(processed_frame)

    # Debug output
    print(type(inference_result), inference_result)

    # If the result is a list, extract the first element
    if isinstance(inference_result, list):
        inference_result = inference_result[0]

    pprint(inference_result.results)

    display_frame = cv2.cvtColor(processed_frame, cv2.COLOR_RGB2BGR)
    print(f"Scale: {scale:.3f}, Pad top: {pad_top}, Pad left: {pad_left}")

    cv2.imshow("Letterboxed Frame", display_frame)
    if cv2.waitKey(30) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

degirum.exceptions.DegirumException: [ERROR]Operation failed
Python postprocessor: forward: 'list' object has no attribute 'get' [AttributeError] in file 'HailoDetectionYolo_0335a2b4eba1d2604f54de0e5f844f64.py', function 'forward', line 88
dg_postprocess_client.cpp: 1009 [DG::PostprocessClient::forward]
When running model 'yolov11n'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/anton/er.py", line 42, in <module>
    inference_result = model(processed_frame)
                       ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/anton/degirum_env/lib/python3.11/site-packages/degirum/log.py", line 59, in wrap
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "/home/anton/degirum_env/lib/python3.11/site-packages/degirum/model.py", line 233, in __call__
    return self.predict(data)
           ^^^^^^^^^^^^^^^^^^
  File "/home/anton/degirum_env/lib/python3.11/site-packages/degirum/log.py", line 59, in wrap
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "/home/anton/degirum_env/lib/python3.11/site-packages/degirum/model.py", line 224, in predict
    res = list(self._predict_impl(source))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/anton/degirum_env/lib/python3.11/site-packages/degirum/model.py", line 1206, in _predict_impl
    raise DegirumException(
degirum.exceptions.DegirumException: Failed to perform model 'yolov11n' inference: [ERROR]Operation failed
Python postprocessor: forward: 'list' object has no attribute 'get' [AttributeError] in file 'HailoDetectionYolo_0335a2b4eba1d2604f54de0e5f844f64.py', function 'forward', line 88
dg_postprocess_client.cpp: 1009 [DG::PostprocessClient::forward]
When running model 'yolov11n'

Sorry, maybe I explained it badly. Here is the code:

processed_frame, scale, pad_top, pad_left = resize_with_letterbox_image(frame, target_shape)
inference_result = model(processed_frame)

and this is the error:

"degirum.exceptions.DegirumException: Image array shape ‘(1, 640, 640, 3)’ is not supported for ‘opencv’ image backend

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File “/home/anton/er.py”, line 55, in
inference_result = model(processed_frame)
^^^^^^^^^^^^^^^^^^^^^^
File “/home/anton/degirum_env/lib/python3.11/site-packages/degirum/log.py”, line 59, in wrap
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File “/home/anton/degirum_env/lib/python3.11/site-packages/degirum/model.py”, line 233, in call
return self.predict(data)
^^^^^^^^^^^^^^^^^^
File “/home/anton/degirum_env/lib/python3.11/site-packages/degirum/log.py”, line 59, in wrap
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File “/home/anton/degirum_env/lib/python3.11/site-packages/degirum/model.py”, line 224, in predict
res = list(self._predict_impl(source))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File “/home/anton/degirum_env/lib/python3.11/site-packages/degirum/model.py”, line 1206, in _predict_impl
raise DegirumException(
degirum.exceptions.DegirumException: Failed to perform model ‘yolov11n’ inference: Image array shape ‘(1, 640, 640, 3)’ is not supported for ‘opencv’ image backend
"

Hi @Anton_Sema
If you are following this guide, you do not have to send the processed_frame. You can directly send the image path to inference.
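For example, since the model JSON above already specifies letterbox preprocessing, the loop can pass the raw BGR frame straight to the model. A minimal sketch (the video path is a placeholder; the overlay property is used for display):

import degirum as dg
import cv2

# Load the model as before (paths are examples)
model = dg.load_model(
    model_name='yolov11n',
    inference_host_address='@local',
    zoo_url='/home/anton/yolo_check_5/zoo_url/yolov11n'
)

cap = cv2.VideoCapture('path/to/your/video.mp4')
while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Pass the original (H, W, 3) frame directly: the SDK applies the
    # PRE_PROCESS settings from the model JSON (letterbox resize to 640x640),
    # so no manual resizing or batch dimension is needed.
    inference_result = model(frame)
    print(inference_result.results)

    # image_overlay returns the frame with detections drawn on it
    cv2.imshow('Detections', inference_result.image_overlay)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()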