Hello,
I managed to get accuracy up to correct values after conversion, but I'm getting random false positives.
These false positives never even look similar to the target class; it's often something like a stick of wood or a bowl. Sometimes it's a long line only a few pixels wide, which is even stranger.
None of these false positives show up when I run detection on the ONNX file through Ultralytics.
I have been trying everything I could find here on the forum, but with no results.
Does anyone have experience with this kind of false positive?
imgsz: 640
model: yolov11s
dataset size: 3559
labels: only one label
I have been thinking about trying quantization_param(output_layer, precision_mode=a16_w16) to improve accuracy as recommended, but I always get an error message: "could not be found in scope". It does not accept any of the output layers shown by the profiler (output_layer1 to output_layer6).
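For reference, a common cause of the "could not be found in scope" error is using the profiler's logical output names instead of the internal layer names from the parsed model. A minimal model-script sketch, assuming the network was parsed under the name `yolov11s` and that `conv57`/`conv58` stand in for your actual output layer names (both are hypothetical; take the real names from your parsed HAR or the profiler's layer-name column):

```
# quantization_param expects the parsed layer name, optionally scoped
# with the network name, e.g. yolov11s/conv57 -- not "output_layer1".
quantization_param(yolov11s/conv57, precision_mode=a16_w16)
quantization_param(yolov11s/conv58, precision_mode=a16_w16)
```

If the scoped form is still rejected, try the bare layer name (e.g. `conv57`); which form is in scope depends on how the model was parsed.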
Hi @Tomaae
We do not need the training dataset. The PyTorch checkpoint, a small set for calibration (64-256 images), and some example images where you see these false positives are all we need.
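In case it helps to package the calibration set, here is a minimal sketch of stacking 64-256 images into a single array of the model's input shape. The function names are my own, and the nearest-neighbor resize is pure NumPy only to keep the example dependency-free; in practice you would decode and resize the real images with PIL or OpenCV:

```python
# Sketch: build a calibration array from decoded images (hypothetical helpers).
import numpy as np

def resize_nearest(img: np.ndarray, size: int = 640) -> np.ndarray:
    """Nearest-neighbor resize of an HxWx3 uint8 image to size x size."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # source row index for each output row
    cols = np.arange(size) * w // size   # source column index for each output column
    return img[rows[:, None], cols[None, :]]

def build_calib_set(images, size: int = 640) -> np.ndarray:
    """Stack images into an (N, size, size, 3) uint8 array for calibration."""
    return np.stack([resize_nearest(im, size) for im in images]).astype(np.uint8)

if __name__ == "__main__":
    # Dummy images standing in for real calibration photos (64-256 recommended).
    imgs = [np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8) for _ in range(4)]
    calib = build_calib_set(imgs)
    np.save("calib_set.npy", calib)  # shape (4, 640, 640, 3)
```

The saved `.npy` file can then be passed as the calibration dataset during optimization.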
Sure, here it is: https://filebin.net/6d0ugrrfbcbm3v7e
It contains the .pt file, 100 images with label files, and a short trimmed video capturing a false positive (the very top of a bowl once it's placed down for a moment). I have also taken a screenshot, but I'm not sure if that will work. I only have one Hailo device, so unfortunately I can't run anything that requires Hailo hardware on my processing machine.
Here is an image with 2 false positives (both are almost uniform-color carrot shapes). These 2 are not as questionable, since the shape can be considered somewhat similar. They are static and always show at 90%-95% confidence.
Uploaded via link, to make sure the forum won't convert the image: https://filebin.net/0ko7gfqva80q0wg8