Emulator SDK_Native Confidence Score Exceeding 100%

Hello Hailo,

I am an AI engineer who develops and deploys deep learning models.

I installed the hailo_ai_sw_suite virtual environment, and I have been working through the tutorials inside the generated virtual env.

I trained my classification model with YOLOv5.

The annotation data for the classification model was created in Roboflow.

These are the Roboflow labeling parameters:

Preprocessing: Auto-Orient

Augmentations:
Rotation: between -15° and +15°
Shear: ±15° horizontal, ±15° vertical
Saturation: between -25% and +25%

These are the parameters used when training with YOLOv5:
(train.py parameters)
--img 224 or 640
--epochs 50
--batch 64 or 8
--pretrained true
--model ./yolov5s.pt

(export.py parameters)
--img 224, 640, or 1126 738
--batch-size 1 or 8

The images come from a custom dataset; the original image size is 1126 by 738.

I also confirmed that the YOLO model's scores look normal by running predict.py in the Python interpreter.

The confidence scores of the YOLO classification model are between 0 and 1, and there was no problem parsing the ONNX file into HAR format.
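
For reference, the ONNX-to-HAR parsing step I used looks roughly like this (a minimal sketch; the paths, model name, input node name, and shape are placeholders for my own export and may differ for other models or DFC versions):

```python
from hailo_sdk_client import ClientRunner

# Parse the ONNX exported by YOLOv5 export.py into a HAR.
# "best.onnx", the model name, and the input node name/shape are placeholders.
runner = ClientRunner(hw_arch="hailo8")
runner.translate_onnx_model(
    "best.onnx",
    "yolov5s_cls_custom",
    net_input_shapes={"images": [1, 3, 224, 224]},  # matches the --img 224 export
)
runner.save_har("yolov5s_cls_custom.har")
```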

However, I ran into a problem at the next step, and there are no error messages to point to.

I am calling the SDK_Native API from DFC_2_Model_Optimization_Tutorial to run emulator inference, and when I look at the confidence scores in the inference results,

the issue is that the confidence values exceed 100%.

Isn't the confidence score supposed to be a value between 0 and 100%?

Why does the inference result exceed 100%, sometimes going as high as 300%?
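
This is roughly how I invoke the native emulation, following DFC_2_Model_Optimization_Tutorial (a sketch; the HAR path and input batch are placeholders, and the API names may differ slightly between DFC versions):

```python
import numpy as np
from hailo_sdk_client import ClientRunner, InferenceContext

# Load the HAR produced by the parsing step (placeholder path)
runner = ClientRunner(har="yolov5s_cls_custom.har")

# Preprocessed input batch, NHWC float32 (random data here just as a stand-in)
images = np.random.rand(8, 224, 224, 3).astype(np.float32)

# Full-precision ("native") emulation
with runner.infer_context(InferenceContext.SDK_NATIVE) as ctx:
    native_res = runner.infer(ctx, images)

print(native_res.shape)  # (8, num_classes)
print(native_res.max())  # here I see values like 3.0, i.e. "300%"
```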

Here is what I have tried so far to resolve it:

  1. I modified the mean and std values of the normalization preprocessing (see the model-script sketch after this list).
  2. I duplicated a single image several times and checked the confidence scores; each copy produced the same score.
  3. I generated a new custom model after resizing the images; the inference scores still did not stay within 100%.
  4. When I set the normalization parameter to false, the scores exceeded 1000%.
  5. I modified parameters such as --img-size and --batch-size and re-ran the YOLO export.py script; the result was the same.
  6. With the ResNet_v1_18 model provided in the example, the inference results on my custom dataset stay within the 100% range
    (of course, the predictions are limited to the class range in the provided JSON file).
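
For step 1, the change was made through a DFC model script; this is roughly what I tried (a sketch assuming uint8 0-255 inputs; the command format follows the model-zoo .alls files, and `runner`/`images` are the ClientRunner and input batch from the sketch above):

```python
# Set the on-chip normalization via a model script command.
# mean=[0,0,0], std=[255,255,255] maps 0-255 pixels to the 0-1 range used in training.
alls = "normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])\n"
runner.load_model_script(alls)
runner.optimize(images)  # re-run optimization with the new normalization
runner.save_har("yolov5s_cls_custom_quantized.har")
```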

Besides this, I have repeatedly regenerated the YOLO model and converted the ONNX file into a HAR file,
but I have not found a way to bring the scores back into range.

I would appreciate any help from Hailo. Thank you.

Regards.

Hi @ekjeong, I think that something is not right in the way the logits are treated. Maybe a softmax or sigmoid is missing at the end.
Which package are you using to validate the accuracy, the Model Zoo or the DFC?
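
For example, applying a softmax on the host to the raw emulator outputs could look like this (a numpy sketch; the logits array stands in for the (N, num_classes) output of the native emulation):

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    shifted = logits - logits.max(axis=axis, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=axis, keepdims=True)

# Placeholder logits; in practice use the raw outputs of the emulator inference
logits = np.array([[3.1, 0.4, -1.2], [0.2, 2.7, 0.9]], dtype=np.float32)
probs = softmax(logits)
print(probs.sum(axis=-1))     # each row now sums to 1.0
print(probs.argmax(axis=-1))  # predicted class indices
```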

Hi @Nadav,
I'm using the DataFlow Compiler package to validate the accuracy.

Thanks for looking into this.

@Nadav

Could you suggest how to apply softmax processing using the DataFlow Compiler?

Is there a softmax function in ClientRunner?

Thanks

Thanks @Nadav

I wrote a custom softmax function and applied it to the outputs.

As a result, the class scores for each image now sum to 1 and the top class is predicted correctly.

Is there another way to apply the softmax?
I would like to be able to reference your official answer!

Thanks for your solution.