yolov11s: large number of false positives

Hello,
I managed to get accuracy up to the correct values after conversion, but I'm getting random false positives.
These false positives never even look similar to the target class; it's often something like a stick of wood or a bowl. Sometimes it's a long line only a few pixels wide, which is even stranger.
None of these false positives show up when I run detection on the ONNX file through Ultralytics (see the sketch below).
I have been trying everything I could find here on the forum, but with no results.
Does anyone have experience with this kind of false positive?
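
For reference, this is roughly how I run the ONNX baseline through Ultralytics for the comparison (just a sketch; the image path and confidence threshold are placeholders):

# Sketch: run the same ONNX that was passed to hailomz through Ultralytics
# on a frame where the Hailo build reports a false positive.
from ultralytics import YOLO

model = YOLO("best.onnx")
results = model.predict("frame.jpg", imgsz=640, conf=0.25)
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)  # on these frames only the real detections show up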

imgsz 640
model yolov11s
dataset size 3559
Only one label

hailomz compile yolov11s --ckpt=best.onnx  --hw-arch hailo8l --calib-path /mnt/d/Dokumenty/Web/tensorflow/yolo_v11/train/images --classes 1

alls:

normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])
change_output_activation(conv54, sigmoid)
change_output_activation(conv65, sigmoid)
change_output_activation(conv80, sigmoid)
model_optimization_config(calibration, batch_size=8, calibset_size=2028)
model_optimization_flavor(optimization_level=1, compression_level=0, batch_size=8)
quantization_param([conv54, conv65, conv80], force_range_out=[0.0, 1.0])
post_quantization_optimization(finetune, policy=enabled, learning_rate=0.000025)
nms_postprocess("../../postprocess_config/yolov11s_nms_config.json", meta_arch=yolov8, engine=cpu)
performance_param(compiler_optimization_level=max)
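
As a sanity check on the normalization1 line, I compare the ONNX output on a raw 0-255 frame against the same frame divided by 255 (rough sketch; file names are placeholders, and it assumes the usual (1, 4 + classes, anchors) output layout of the Ultralytics export, which expects inputs scaled to [0, 1]):

import numpy as np
import onnxruntime as ort
from PIL import Image

# Sketch: confirm the exported ONNX expects [0, 1] inputs, which is what
# normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0]) reproduces on-chip
# by dividing the raw 0-255 frame by 255.
sess = ort.InferenceSession("best.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]

img = Image.open("frame.jpg").convert("RGB").resize((640, 640))
x = np.asarray(img, dtype=np.float32).transpose(2, 0, 1)[None]  # NCHW, values 0-255

raw = sess.run(None, {inp.name: x})[0]
scaled = sess.run(None, {inp.name: (x / 255.0).astype(np.float32)})[0]

# Count candidate boxes above a 0.5 score in each run; only the [0, 1] run
# should line up with what Ultralytics reports on the same frame.
print("raw 0-255:", (raw[:, 4:, :] > 0.5).sum())
print("scaled   :", (scaled[:, 4:, :] > 0.5).sum())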

Versions (HailoRT needs to be 4.19 for Frigate):

(hailodfc) ~/hailodfc$ pip freeze|grep hailo
hailo-dataflow-compiler @ file:///home/tomaae/hailo/hailo_dataflow_compiler-3.30.0-py3-none-linux_x86_64.whl
-e git+https://github.com/hailo-ai/hailo_model_zoo.git@b7fa404d8f1aaea74dff35ed1c3aae75d631cacd#egg=hailo_model_zoo
hailort @ file:///home/tomaae/hailort-4.19.0-cp310-cp310-linux_x86_64.whl

Hi @Tomaae
Do you know if these false positives occur when you compile without post_quantization_optimization as well?

Hi,
Yes, they do. They occur even with just normalization, change_output_activation, and nms_postprocess.

I have been thinking about trying quantization_param(output_layer, precision_mode=a16_w16) to improve accuracy, as recommended, but I always get an error message saying the layer “could not be found in scope”. It does not accept any of the output layers shown by the profiler (output_layer1 through output_layer6).

Hi @Tomaae
We can help you dig deeper if you can share your PyTorch checkpoint, calibration data, and validation data.

Sure, I can, depending on their size of course. My dataset is 4.6 GB and the validation images are 470 MB.
Which files aside from last.pt do you need?

Hi @Tomaae
We do not need the training dataset. The PyTorch checkpoint, a small calibration set (64-256 images), and some example images where you see these false positives are all we need.

Sure, here it is: https://filebin.net/6d0ugrrfbcbm3v7e
It contains the .pt file, 100 images with label files, and a short trimmed video capturing a false positive (the very top of a bowl for a moment after it is set down). I also took a screenshot, but I'm not sure whether that will come through. I only have one Hailo device, so unfortunately I can't run anything that requires Hailo hardware on my processing machine.

Here is an image with 2 false positives (both are almost uniformly colored, carrot-shaped objects). These 2 are not as questionable, since the shape can be considered somewhat similar. They are static and always show up with 90%-95% confidence.
Uploaded via link to make sure the forum won't convert the image: https://filebin.net/0ko7gfqva80q0wg8

I'm getting more and more of them, like a tiny piece of paper or a mound of sand.
I'm really not sure what to try at this point.