I trained a single-class (birds) model in Ultralytics YOLO; its recall is 0.92 (at a confidence threshold of 0.01).
After compiling to HEF, recall dropped to 0.75. I verified on images that the model sometimes misses objects, especially small ones.
I also noticed that even for objects that are detected, the confidence scores are very different from those of the YOLO PyTorch model.
Is it possible to configure the compilation to apply less aggressive optimizations?
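What I had in mind was something like compiling through the DFC Python API with a custom model script instead of the default one hailomz picks up, roughly as in the sketch below. This is only a guess: I'm not sure model_optimization_flavor is the right command, whether these levels are sensible, and the paths/calibration loading are placeholders.

import numpy as np
from hailo_sdk_client import ClientRunner

# Rough sketch, not verified: compile via the DFC Python API with a model script
# that (I assume) lowers the optimization/compression aggressiveness.
runner = ClientRunner(hw_arch='hailo8')
# Start/end node names may need to be specified for YOLO, as hailomz does via the yaml.
runner.translate_onnx_model('/local/shared_with_docker/birds/bird.onnx', 'bird')

# Assumed model-script command and levels; I took the name from the model zoo docs
# but haven't confirmed the exact syntax.
runner.load_model_script('model_optimization_flavor(optimization_level=0, compression_level=0)\n')

# Placeholder calibration set: in reality this would be float32 images of shape
# (N, 640, 640, 3) loaded from birds_frames (loading code omitted).
calib_images = np.zeros((64, 640, 640, 3), dtype=np.float32)
runner.optimize(calib_images)

hef = runner.compile()
with open('bird_low_opt.hef', 'wb') as f:
    f.write(hef)

Is this the right direction, or is there a way to pass such options through hailomz directly?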
Are there ways to debug at which stage the degradation happened?
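The only check I could come up with myself is to run the same frame through the exported ONNX model and compare its raw scores with what I get from the HEF, to see whether the drop appears only after quantization/compilation or already at the ONNX stage. A rough sketch of that check (the image path, the 640x640 input size, and the output layout are assumptions based on my export settings):

import cv2
import numpy as np
import onnxruntime as ort

# Run one frame through the exported ONNX model and look at the raw class scores.
sess = ort.InferenceSession('/local/shared_with_docker/birds/bird.onnx')
input_name = sess.get_inputs()[0].name

img = cv2.imread('/local/shared_with_docker/birds/birds_frames/frame_0001.jpg')  # placeholder frame
img = cv2.resize(img, (640, 640))
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
x = np.transpose(img, (2, 0, 1))[None, ...]        # NCHW for the ONNX export

(out,) = sess.run(None, {input_name: x})           # (1, 4 + num_classes, anchors) for YOLOv8/11 exports
print('max raw class score (ONNX):', out[0, 4:, :].max())
# I would then print the top scores for the same frame from the HEF pipeline below and compare.

Is there a more systematic way, e.g. inspecting the intermediate (pre-quantization vs. quantized) model inside the toolchain?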
I haven't made any changes to the hailo_model_zoo config files in the docker.
Compile command:

hailomz compile --ckpt /local/shared_with_docker/birds/bird.onnx --calib-path /local/shared_with_docker/birds/birds_frames --yaml hailo_model_zoo/hailo_model_zoo/cfg/networks/yolov11m.yaml --classes 1 --hw-arch hailo8

Full log: https://www.dropbox.com/scl/fi/1hkkyevqg6svmsaxc1aeb/birds.log?rlkey=ze7a4obetwxdqwau43feglt3h&st=km0bbhfh&dl=0
Inference code:
import numpy as np
from hailo_platform import VDevice, HailoSchedulingAlgorithm, FormatType
# ObjectDetectionUtils comes from the Hailo object-detection example code.

params = VDevice.create_params()
params.scheduling_algorithm = HailoSchedulingAlgorithm.ROUND_ROBIN

with VDevice(params) as vdevice:
    infer_model = vdevice.create_infer_model('/home/bendyna-pi/birds/bird.hef')
    infer_model.set_batch_size(1)
    infer_model.input().set_format_type(FormatType.FLOAT32)
    infer_model.output().set_format_type(FormatType.FLOAT32)
    with infer_model.configure() as configured_infer_model:
        ...
        # img is the preprocessed frame, cast to float32 before binding
        img = img.astype(np.float32)
        bindings_list = []
        bindings = configured_infer_model.create_bindings()
        bindings.input().set_buffer(img)
        buffer2 = np.empty(infer_model.output().shape).astype(np.float32)
        bindings.output().set_buffer(buffer2)
        bindings_list.append(bindings)
        configured_infer_model.run(bindings_list, timeout_ms)  # timeout_ms is defined elsewhere in my code
        buffer = bindings.output().get_buffer()
        det_utils = ObjectDetectionUtils('birds.txt')
        detections = det_utils.extract_detections(buffer, threshold=0.0001)
        boxes = detections['detection_boxes']
        conf = detections['detection_scores']
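For comparison, this is roughly how I look at the confidences from the original PyTorch model for the same frame (simplified sketch; the .pt and image paths are placeholders):

from ultralytics import YOLO

# Run the original PyTorch checkpoint on the same frame and print its confidences,
# to compare against the scores coming out of extract_detections above.
model = YOLO('/local/shared_with_docker/birds/bird.pt')          # placeholder checkpoint path
results = model('/local/shared_with_docker/birds/birds_frames/frame_0001.jpg', conf=0.01)

for r in results:
    print('pytorch scores:', r.boxes.conf.tolist())
    print('pytorch boxes :', r.boxes.xyxy.tolist())

# HEF scores for the same frame (from the code above):
# print('hef scores    :', conf)

The boxes roughly match when the HEF does detect the bird, but the scores differ a lot.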
