Hi,
I've managed to run my model in Python thanks to version 4.18, and I get results with coherent boxes, but they are badly positioned. I've tried lots of things, but nothing corrects it, so at this point I wonder what I've done wrong.
I create my model and retrieve the input dimensions it expects:
self.hailo_inference = HailoInference(os.getenv('MODEL_PATH'))
self.height, self.width, _ = self.hailo_inference.get_input_shape()
I convert my image (cv2 => PIL) and letterbox it to a square with black margins (the transformed image looks correct):
image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
image = Image.fromarray(image)
processed_image = preprocess(image, self.width, self.height)
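For context, here is roughly what my preprocess helper does (a sketch of my own code, not a Hailo API; the exact interpolation mode is not important here):

```python
from PIL import Image

def preprocess(image, width, height):
    # Scale the image to fit inside (width, height) while keeping the
    # aspect ratio, then paste it centered on a black canvas
    # (letterbox with black margins).
    img_w, img_h = image.size
    scale = min(width / img_w, height / img_h)
    new_w, new_h = int(img_w * scale), int(img_h * scale)
    resized = image.resize((new_w, new_h), Image.BILINEAR)
    canvas = Image.new('RGB', (width, height), (0, 0, 0))
    canvas.paste(resized, ((width - new_w) // 2, (height - new_h) // 2))
    return canvas
```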
I send the image to inference:
raw_detections = self.hailo_inference.run(np.array(processed_image))
And here are the results I get:
[0.4765124 , 0.91401637, 0.52448344, 0.9630216 , 0.8998809 ]
From what I understand, the array is laid out as:
y_min, x_min, y_max, x_max, and the score
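Concretely, my conversion to YOLO-style values is essentially this (a sketch of my own helper, assuming all coordinates are normalized to [0, 1]):

```python
def to_yolo(det):
    # det = [y_min, x_min, y_max, x_max, score], coords normalized to [0, 1].
    # YOLO format uses box center plus width/height, still normalized.
    y_min, x_min, y_max, x_max, score = det
    x_center = (x_min + x_max) / 2
    y_center = (y_min + y_max) / 2
    width = x_max - x_min
    height = y_max - y_min
    return x_center, y_center, width, height, score
```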
When I try to convert these results into their “YOLO” equivalents, I get this, for example, for the first detection:
Class : 0
Score : 0.8998808860778809
x_center : 0.9385189712047577
y_center : 0.5004979223012924
width : 0.049005210399627686
height : 0.04797104001045227
And in pixels on the image it looks like this:
Image width : 1024
Image height : 1024
X min : 936
Y min : 488
X max : 986
Y max : 537
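The pixel values above come from a conversion like this (again a sketch of my own helper; it simply scales the normalized box to the 1024x1024 letterboxed image and rounds):

```python
def yolo_to_pixels(x_center, y_center, width, height, img_w, img_h):
    # Convert a normalized YOLO box (center + size) to pixel corner
    # coordinates on an img_w x img_h image.
    x_min = round((x_center - width / 2) * img_w)
    y_min = round((y_center - height / 2) * img_h)
    x_max = round((x_center + width / 2) * img_w)
    y_max = round((y_center + height / 2) * img_h)
    return x_min, y_min, x_max, y_max
```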
And here is what I should get (the result of the same inference with my original PyTorch model through Ultralytics YOLO):
Class : 0
Score : 0.95
x_center : 0.5867246627807617
y_center : 0.16733388547544126
width : 0.030229949951171876
height : 0.05325857091833044
And in pixels on the image it looks like this:
Image width : 1920
Image height : 1080
X min : 1097
Y min : 151
X max : 1155
Y max : 209
Thanks for your help!