Offset bounding box at 1024x1024 with YOLOv8s

Hello everyone, I’ve been experimenting with YOLOv8s and have trained it to detect a 50x50 cm red ball. My dataset resolution is 1024x1024, so I configured the training (and the data YAML) for 1024x1024:

results = model.train(
    data=f'{dataset_dir}/data.yaml',
    epochs=50,
    imgsz=1024,
    device="0,1",
    batch=8,
    name='Ball1024',
)
After that I exported the model to ONNX and converted it to .hef with hailomz:

hailomz compile \
    --ckpt runs/detect/Ball10243/weights/best.onnx \
    --calib-path Ball_cvat/valid/images \
    --yaml hailo_model_zoo/hailo_model_zoo/cfg/networks/yolov8s.yaml \
    --classes 1 \
    --end-node-names /model.22/cv2.0/cv2.0.2/Conv /model.22/cv3.0/cv3.0.2/Conv /model.22/cv2.1/cv2.1.2/Conv /model.22/cv3.1/cv3.1.2/Conv /model.22/cv2.2/cv2.2.2/Conv /model.22/cv3.2/cv3.2.2/Conv \
    --hw-arch hailo8

Then I ran detection.py from the hailo-ai/hailo-apps repository (hailo_apps/python/pipeline_apps/detection/detection.py).

My final goal is to detect the red ball at the higher 1024x1024 resolution. The model does detect the ball, but the bounding box is shifted down and to the right. How can I fix this? I would greatly appreciate your help!

Hi @Andrey_Inozemtsev,

One thing worth checking is the NMS postprocess section in the YAML - it’s possible some parameters there are still based on 640x640.

The yolov8s.yaml has an alls / postprocess / nms section that may contain grid sizes or feature map dimensions tied to the original resolution. For 640x640 those grids would be [80, 40, 20] (strides [8, 16, 32]), and for 1024x1024 they’d need to be [128, 64, 32]. A mismatch there could explain the “shifted down-right” behavior you’re seeing.
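To make the relationship concrete, here is a small self-contained Python sketch (not Hailo-specific; the strides are the standard YOLOv8 P3/P4/P5 values) showing the grid each resolution implies, and the kind of coordinate error you get if cell indices from a 128-cell grid are decoded as if the grid had 80 cells:

```python
# Standard YOLOv8 detection strides (assumption: the usual P3/P4/P5 heads).
STRIDES = (8, 16, 32)

def grid_sizes(imgsz: int) -> list[int]:
    """Feature-map (grid) size per stride for a square input."""
    return [imgsz // s for s in STRIDES]

print(grid_sizes(640))   # [80, 40, 20]
print(grid_sizes(1024))  # [128, 64, 32]

def cell_center_px(i: int, imgsz: int, grid: int) -> float:
    """Pixel position of cell i's center, given an assumed grid size."""
    return (i + 0.5) * imgsz / grid

# Decoding a stride-8 cell of the real 128-cell grid with a 640-era
# assumption of 80 cells stretches every coordinate by 128/80 = 1.6:
i = 100
print(cell_center_px(i, 1024, 128))  # 804.0  (correct)
print(cell_center_px(i, 1024, 80))   # 1286.4 (stretched toward bottom-right)
```

Any cell away from the top-left corner would get scaled like this, which would show up exactly as boxes drifting down and to the right.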

It might be worth double-checking if any hardcoded image_dims, grid dimensions, or output shapes in that section still reference 640. If you’d like, feel free to share your modified YAML and I can take a look.
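In the meantime, one quick way to spot leftovers is to scan the YAML text for 640-derived numbers. A minimal sketch (a plain text scan, no Hailo tooling assumed; the sample fragment is illustrative, not the actual model zoo YAML):

```python
import re

# Values that typically betray a 640x640 configuration (assumption:
# 640 input with strides 8/16/32 -> grids 80/40/20).
SUSPECT = {"640", "80", "40", "20"}

def flag_640_leftovers(yaml_text: str) -> list[str]:
    """Return lines containing a bare number tied to 640x640."""
    hits = []
    for lineno, line in enumerate(yaml_text.splitlines(), 1):
        if any(num in SUSPECT for num in re.findall(r"\b\d+\b", line)):
            hits.append(f"line {lineno}: {line.strip()}")
    return hits

# Illustrative (made-up) fragment:
sample = "image_dims: [640, 640]\nconf_threshold: 0.25\n"
print(flag_640_leftovers(sample))  # ['line 1: image_dims: [640, 640]']
```

Anything it flags is worth checking against the [128, 64, 32] grids expected for a 1024x1024 input.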

Thanks,