Problem With Model Optimization

I was trying to compile a custom-trained yolov8m model, but the optimization step fails with a NegativeSlopeExponentNonFixable error on layer conv83.

Could someone explain the error message in detail? In particular, what does “Desired shift is 19.0, but op has only 8 data bits” mean?

Hi,
Are you using real images in the calibration set? Are they normalized as they should be?

Yes, the calibration set consists of real images from the validation dataset.

I also found that this error does not occur if the yolov8m model is trained for fewer than 10 epochs. When trained for more than 10 epochs, the error occurs.

This is strange. The error means that there is a very big difference between the input and output of that specific layer (conv83), which would require a 19-bit shift in this case.
Are you compiling it in the model-zoo? Using the provided alls file?
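
Roughly speaking, you can picture the numbers like this (a back-of-the-envelope sketch in Python, not the actual Dataflow Compiler math; the ranges below are made up):

import math

# Hypothetical, badly unbalanced dynamic ranges for a single conv layer
in_range = 2.0
out_range = 1_000_000.0

# In 8-bit quantization, the scale gap between a layer's input and output is
# absorbed by a power-of-two shift; when the gap is huge, that shift no longer
# fits in the 8 data bits the op has.
required_shift = math.ceil(math.log2(out_range / in_range))
print(required_shift)  # 19, matching "Desired shift is 19.0, but op has only 8 data bits"

So the practical question is why one layer ends up with such an extreme range; the exception text itself points at an unnormalized calibration set, random weights, or missing batch-normalization.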

It is a custom-trained single-class detection yolov8m model. These are the alls lines I used:

I have the same issue, but when running 20 epochs; with more than that, the same error occurs. I have a dataset of 4124 training images, 1048 validation images, and 281 calibration images (but the script only takes 64).

raise NegativeSlopeExponentNonFixable(

hailo_model_optimization.acceleras.utils.acceleras_exceptions.NegativeSlopeExponentNonFixable: Quantization failed in layer yolov8n/conv63 due to unsupported required slope. Desired shift is 10.0, but op has only 8 data bits. This error raises when the data or weight range are not balanced. Mostly happens when using random calibration-set/weights, the calibration-set is not normalized properly or batch-normalization was not used during training.

Is this dataset publicly available?

May I know how many classes you have?

Sure, I have 1 class. But my images are from a synthetic dataset.

So I am guessing it is related to a single-class training issue. Due to the lack of feature diversity in a single-class dataset, the trained model may be prone to producing extreme values, and it can also overfit the training dataset more easily. Single-class training might also lead to imbalanced gradients?

Purely guessing.
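
If someone wants to check this guess, here is a rough diagnostic sketch (the file paths, the best.pt name, and the 640x640 input size are assumptions for illustration). It records per-layer activation ranges of the trained PyTorch model on a handful of calibration images; a conv layer whose range is orders of magnitude above the others would fit the “extreme values” theory:

import glob
import numpy as np
import torch
from PIL import Image
from ultralytics import YOLO

# Assumed path to the trained weights
model = YOLO("runs/detect/train/weights/best.pt").model.eval()

ranges = {}

def make_hook(name):
    def hook(_module, _inputs, output):
        if torch.is_tensor(output):
            ranges[name] = max(float(output.abs().max()), ranges.get(name, 0.0))
    return hook

# Track the output range of every conv layer
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Conv2d):
        module.register_forward_hook(make_hook(name))

# Run a few calibration images through the model (assumed folder)
for path in glob.glob("calib_images/*.jpg")[:16]:
    img = Image.open(path).convert("RGB").resize((640, 640))
    x = torch.from_numpy(np.asarray(img)).permute(2, 0, 1).float()[None] / 255.0
    with torch.no_grad():
        model(x)

# Layers with ranges far above the rest are the suspects
for name, r in sorted(ranges.items(), key=lambda kv: -kv[1])[:10]:
    print(f"{name:40s} max |activation| = {r:.1f}")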


I made some progress after a few days of research. Here are my conclusions:

  • I can compile any model to .hef if it is trained for fewer than 20 epochs.
  • I can’t compile a non-pretrained custom model with 1 class via yolo detect train data=dataset.yaml model=yolov8n.yaml.
  • I can compile a model trained for many epochs with “pretrained=False” via yolo detect train data=dataset.yaml model=yolov8n.pt epochs=300 imgsz=640 name=test1 batch=32 pretrained=False. I also changed the calibration images from synthetic to real. One of those changes helped.

But I still can’t compile a model that starts training from 0 mAP50.

The “pretrained=False” setting does not work for me. But I was training a yolov8m model.

When I train a yolov8n model, no error occurs, no matter how many epochs are used.

Please let me know if you find a solution first, and I will post here if I find mine =)

I am facing the same issue while using a yolov8n network with 2 classes. Class 1 is very easy to train and recognize (it needs just a few epochs); class 2 is rather hard and needs some more epochs. After training 10 epochs, where class 1 is already perfect and class 2 is “usable”, I tried to compile the network and hit the exact same issue. Unfortunately, it is an internal dataset.

But I trained the network with the official Ultralytics repository, not with the one forked from Hailo. I will try the fork in the next few days and see whether that makes any difference.



Please post your results from trying the Hailo fork once you get to it.

Unfortunately, it didn’t change anything. I still get the NegativeSlopeExponentNonFixable error :(

Any idea how to solve the issue? I am a little bit lost here…

Same here, mate, same here.

I am wondering what a “normalized dataset” means here. My calibration dataset is the same set of images used for validation, but it is not normalized, so the values are 0–255. The hailomz tutorial mentions that normalization of the calibration images is taken care of by the hailomz script…

Do we have to preprocess/normalize the calibration images ourselves first?
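
Not an authoritative answer, but this is how I understand it: if the alls used for compilation contains an on-chip normalization command (the stock model-zoo yolov8 alls files seem to, e.g. normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])), the calibration images should stay in the raw 0–255 range; if there is no such command, you have to scale them exactly the way the model saw them during training. A minimal sketch of building the calibration array both ways (the folder name, image size, and 64-image count are assumptions):

import glob
import numpy as np
from PIL import Image

# Assumed folder with real validation images used for calibration
paths = sorted(glob.glob("calib_images/*.jpg"))[:64]

# Stack into an (N, 640, 640, 3) float array with raw 0-255 values
imgs = np.stack([
    np.asarray(Image.open(p).convert("RGB").resize((640, 640)), dtype=np.float32)
    for p in paths
])

calib_raw = imgs                 # use this when the alls normalizes on-chip
calib_normalized = imgs / 255.0  # use this when no normalization is in the alls

np.save("calib_set.npy", calib_raw)  # save whichever variant matches your alls

A mismatch in either direction hands the quantizer a badly scaled calibration set, which is one of the causes the exception message itself lists.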