@kim_jiseob
Glad to hear that you were able to use our AI hub inference. If possible, we would still like to help you solve the issue with running locally.
Thanks @shashi. Isn't it possible to calculate the mAP of a custom model locally only? The fact that the value came from the cloud just means that I randomly entered a value and checked that it worked.
@kim_jiseob
It is possible to evaluate locally as well. There is definitely some setup issue on your side that we need to debug.
Yes @shashi. The problem of the model path not being recognized is the same as I commented in Guide 2. Here is my log:
['yolov8n']
dict_keys(['DUMMY/DUMMY', 'HAILORT/HAILO8L', 'N2X/CPU', 'TFLITE/CPU'])
There are hef and json files in the directory.
@kim_jiseob
In your case, the model list is not empty as it looks like yolov8n is actually listed as an available model.
When I set the versions of hailort, hailofw, and hailo-dkms to 4.19.0, inference was successful with my custom model. Thank you so much @shashi
Hi @kim_jiseob
Amazing. Thanks for letting me know. Please feel free to reach out if you need any further help.
Good afternoon, I encountered the following issues.
All my files are located in the following directory:
HailoDetectionYolo.py
labels_coco.json
yolov11n.hef
yolov11n.json
in this directory:
yolo_check_5/zoo_url/yolov11n
However, when I use the following script:
import degirum as dg
from pprint import pprint

# Load the model from the model zoo.
# Replace "<path_to_model_zoo>" with the directory containing your model assets.
model = dg.load_model(
    model_name="yolov11n",
    inference_host_address="@local",
    zoo_url="yolo_check_5/zoo_url/yolov11n"
)
I get the following error and can't figure out what the problem is:
Traceback (most recent call last):
  File "er.py", line 6, in <module>
    model = dg.load_model(
            ^^^^^^^^^^^^^^
  File "degirum_env/lib/python3.11/site-packages/degirum/__init__.py", line 220, in load_model
    return zoo.load_model(model_name, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "degirum_env/lib/python3.11/site-packages/degirum/log.py", line 59, in wrap
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "degirum_env/lib/python3.11/site-packages/degirum/zoo_manager.py", line 266, in load_model
    model = self._zoo.load_model(model_name)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "degirum_env/lib/python3.11/site-packages/degirum/log.py", line 59, in wrap
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "degirum_env/lib/python3.11/site-packages/degirum/_zoo_accessor.py", line 309, in load_model
    model_params = self.model_info(model)
                   ^^^^^^^^^^^^^^^^^^^^^^
  File "degirum_env/lib/python3.11/site-packages/degirum/log.py", line 59, in wrap
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "degirum_env/lib/python3.11/site-packages/degirum/_zoo_accessor.py", line 127, in model_info
    raise DegirumException(
degirum.exceptions.DegirumException: Model 'yolov11n' is not found in model zoo 'yolo_check_5/zoo_url/yolov11n'
I have tried many solutions, but I havenât been able to resolve this issue.
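Since the model name and zoo path look right, one quick sanity check (a generic sketch using only the standard library, based on the layout described above: a `<model_name>.json` next to the `.hef` in the zoo directory) is to verify that every file the model JSON refers to actually exists:

```python
import json
from pathlib import Path

def check_local_zoo(zoo_dir: str, model_name: str) -> list:
    """Report obvious problems with a local model-zoo directory.

    Assumes (per this thread) the zoo dir should hold <model_name>.json
    plus the asset files that JSON refers to (.hef, labels, postprocessor).
    """
    problems = []
    zoo = Path(zoo_dir)
    cfg_path = zoo / f"{model_name}.json"
    if not cfg_path.is_file():
        problems.append(f"missing {cfg_path.name}")
        return problems
    cfg = json.loads(cfg_path.read_text())
    # Check that files referenced by the config exist next to it
    for section, key in [("MODEL_PARAMETERS", "ModelPath"),
                         ("POST_PROCESS", "PythonFile"),
                         ("POST_PROCESS", "LabelsPath")]:
        for entry in cfg.get(section, []):
            name = entry.get(key)
            if name and not (zoo / name).is_file():
                problems.append(f"missing {name} (referenced by {key})")
    return problems

# Example: print(check_local_zoo("yolo_check_5/zoo_url/yolov11n", "yolov11n"))
```

This will not catch every cause of "model not found" (version mismatches, checksum issues), but it rules out the most common one: a typo or missing file in the zoo directory.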
Hi @Anton_Sema
Can you please check that the supported devices in the model JSON match the device you have?
I have a Hailo8L. Here is my yolov11n.json:

{
  "ConfigVersion": 10,
  "DEVICE": [
    {
      "DeviceType": "HAILO8L",
      "RuntimeAgent": "HAILORT",
      "SupportedDeviceTypes": "HAILORT/HAILO8L"
    }
  ],
  "PRE_PROCESS": [
    {
      "InputType": "Image",
      "InputN": 1,
      "InputH": 640,
      "InputW": 640,
      "InputC": 3,
      "InputPadMethod": "letterbox",
      "InputResizeMethod": "bilinear",
      "InputQuantEn": true
    }
  ],
  "MODEL_PARAMETERS": [
    {
      "ModelPath": "yolov11n.hef"
    }
  ],
  "POST_PROCESS": [
    {
      "OutputPostprocessType": "Detection",
      "PythonFile": "HailoDetectionYolo.py",
      "OutputNumClasses": 1,
      "LabelsPath": "labels_coco.json",
      "OutputConfThreshold": 0.2
    }
  ]
}
@Anton_Sema
Please add the below line to your json below the ConfigVersion line:
"Checksum": "d6c4d0b9620dc2e5e215dfab366510a740fe86bf2c5d9bd2059a6ba3fe62ee63",
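That value is 64 hex characters, i.e. a SHA-256 digest. If you ever need to produce one for a different model, here is a minimal sketch; the assumption that the digest is computed over the `.hef` file is mine, not confirmed in this thread, so verify against the DeGirum documentation:

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Hex SHA-256 digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Example: print(sha256_of_file("yolov11n.hef"))
```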
Thanks a lot for the help, but now I get this error and I don't understand why. Here is my code:

import degirum as dg
from pprint import pprint
import cv2
import numpy as np

# Initialize the model from the model zoo
model = dg.load_model(
    model_name="yolov11n",
    inference_host_address="@local",
    zoo_url="/home/anton/yolo_check_5/zoo_url/yolov11n"
)

def resize_with_letterbox_image(image, target_height, target_width, padding_value=(0, 0, 0)):
    image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    h, w, c = image_rgb.shape
    scale = min(target_width / w, target_height / h)
    new_w = int(w * scale)
    new_h = int(h * scale)
    resized_image = cv2.resize(image_rgb, (new_w, new_h), interpolation=cv2.INTER_LINEAR)
    letterboxed_image = np.full((target_height, target_width, c), padding_value, dtype=np.uint8)
    pad_top = (target_height - new_h) // 2
    pad_left = (target_width - new_w) // 2
    letterboxed_image[pad_top:pad_top+new_h, pad_left:pad_left+new_w] = resized_image
    return letterboxed_image, scale, pad_top, pad_left

video_path = "path/to/your/video.mp4"
cap = cv2.VideoCapture(video_path)
if not cap.isOpened():
    print("Error: Could not open video.")
    exit()

target_height, target_width = 640, 640

while True:
    ret, frame = cap.read()
    if not ret:
        break
    processed_frame, scale, pad_top, pad_left = resize_with_letterbox_image(frame, target_height, target_width)
    # Perform inference with correctly shaped image (H, W, 3)
    inference_result = model(processed_frame)
    # Debug output
    print(type(inference_result), inference_result)
    # If the result is a list, extract the first element
    if isinstance(inference_result, list):
        inference_result = inference_result[0]
    pprint(inference_result.results)
    display_frame = cv2.cvtColor(processed_frame, cv2.COLOR_RGB2BGR)
    print(f"Scale: {scale:.3f}, Pad top: {pad_top}, Pad left: {pad_left}")
    cv2.imshow("Letterboxed Frame", display_frame)
    if cv2.waitKey(30) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
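As an aside, the `scale`, `pad_top`, and `pad_left` values returned by the letterbox function above are exactly what you need to map a detection box from the 640x640 letterboxed frame back into original-frame coordinates. A generic sketch (not DeGirum-specific; the `[x1, y1, x2, y2]` box format is an assumption):

```python
def letterbox_to_original(bbox, scale, pad_top, pad_left):
    """Map [x1, y1, x2, y2] from letterboxed coords back to the original frame."""
    x1, y1, x2, y2 = bbox
    return [(x1 - pad_left) / scale, (y1 - pad_top) / scale,
            (x2 - pad_left) / scale, (y2 - pad_top) / scale]

# For a 1280x720 frame letterboxed to 640x640: scale=0.5, pad_top=140, pad_left=0,
# so a box covering the padded image content maps back to the full frame:
# letterbox_to_original([0, 140, 640, 500], 0.5, 140, 0) -> [0.0, 0.0, 1280.0, 720.0]
```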
degirum.exceptions.DegirumException: [ERROR]Operation failed
Python postprocessor: forward: 'list' object has no attribute 'get' [AttributeError] in file "HailoDetectionYolo_0335a2b4eba1d2604f54de0e5f844f64.py", function "forward", line 88
dg_postprocess_client.cpp: 1009 [DG::PostprocessClient::forward]
When running model 'yolov11n'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/anton/er.py", line 42, in <module>
    inference_result = model(processed_frame)
                       ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/anton/degirum_env/lib/python3.11/site-packages/degirum/log.py", line 59, in wrap
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "/home/anton/degirum_env/lib/python3.11/site-packages/degirum/model.py", line 233, in __call__
    return self.predict(data)
           ^^^^^^^^^^^^^^^^^^
  File "/home/anton/degirum_env/lib/python3.11/site-packages/degirum/log.py", line 59, in wrap
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "/home/anton/degirum_env/lib/python3.11/site-packages/degirum/model.py", line 224, in predict
    res = list(self._predict_impl(source))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/anton/degirum_env/lib/python3.11/site-packages/degirum/model.py", line 1206, in _predict_impl
    raise DegirumException(
degirum.exceptions.DegirumException: Failed to perform model 'yolov11n' inference: [ERROR]Operation failed
Python postprocessor: forward: 'list' object has no attribute 'get' [AttributeError] in file "HailoDetectionYolo_0335a2b4eba1d2604f54de0e5f844f64.py", function "forward", line 88
dg_postprocess_client.cpp: 1009 [DG::PostprocessClient::forward]
When running model 'yolov11n'
Sorry, maybe I wrote it badly. Here is the code:

processed_frame, scale, pad_top, pad_left = resize_with_letterbox_image(frame, target_shape)
inference_result = model(processed_frame)

and this error:
degirum.exceptions.DegirumException: Image array shape '(1, 640, 640, 3)' is not supported for 'opencv' image backend

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/anton/er.py", line 55, in <module>
    inference_result = model(processed_frame)
                       ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/anton/degirum_env/lib/python3.11/site-packages/degirum/log.py", line 59, in wrap
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "/home/anton/degirum_env/lib/python3.11/site-packages/degirum/model.py", line 233, in __call__
    return self.predict(data)
           ^^^^^^^^^^^^^^^^^^
  File "/home/anton/degirum_env/lib/python3.11/site-packages/degirum/log.py", line 59, in wrap
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "/home/anton/degirum_env/lib/python3.11/site-packages/degirum/model.py", line 224, in predict
    res = list(self._predict_impl(source))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/anton/degirum_env/lib/python3.11/site-packages/degirum/model.py", line 1206, in _predict_impl
    raise DegirumException(
degirum.exceptions.DegirumException: Failed to perform model 'yolov11n' inference: Image array shape '(1, 640, 640, 3)' is not supported for 'opencv' image backend
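The shape in the error message is the clue: the model was handed a batched array of shape (1, 640, 640, 3), while the opencv image backend expects a single (H, W, 3) image. A generic numpy sketch of the fix, dropping a leading batch dimension of size 1 before inference:

```python
import numpy as np

def ensure_hwc(image: np.ndarray) -> np.ndarray:
    """Drop a leading batch dimension of size 1, if present, to get (H, W, C)."""
    if image.ndim == 4 and image.shape[0] == 1:
        image = image[0]
    if image.ndim != 3:
        raise ValueError(f"expected an (H, W, C) image, got shape {image.shape}")
    return image
```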
Hi @Anton_Sema
If you are following this guide, you do not have to send the processed_frame yourself; you can pass the image path directly to inference.