Hello, I have a custom PyTorch model exported to ONNX with opset 11 and a sigmoid output.
However, when I optimize (fine-tune) the model with the latest version of the Dataflow Compiler, I get a very large distill loss, and it is reported on the conv46_ne_activation layer instead of conv46, which is the output layer. May I know the reason for this?
You need to normalize your input images during preprocessing (for example, by dividing by 255) before running the .optimize() command,
or
enable hardware normalization by adding "normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])\n" to the alls script before running the .optimize() command. A sketch of both options is shown below.
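Here is a minimal sketch of the two options, assuming the Hailo Dataflow Compiler's ClientRunner API (the exact method names such as load_model_script and optimize, the HAR file name, and the calibration data file are assumptions and may differ between DFC versions):

```python
import numpy as np
from hailo_sdk_client import ClientRunner  # assumed import path for the DFC Python API

# Hypothetical HAR produced by an earlier translation step
runner = ClientRunner(har="model.har")

# Load calibration images (hypothetical file, raw 0-255 pixel values)
calib_data = np.load("calib_images.npy").astype(np.float32)

# --- Option 1: normalize in preprocessing ---
# Divide by 255 yourself; the calibration data then matches what the model
# saw during training. Do NOT also enable hardware normalization in this case.
calib_data /= 255.0

# --- Option 2: let the device normalize ---
# Keep calib_data in the raw 0-255 range and add the normalization command
# to the model (alls) script so the hardware performs the division.
# alls = "normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])\n"
# runner.load_model_script(alls)  # assumed API; some versions take a script file path

# Run optimization (quantization / fine-tune distillation) on the calibration set
runner.optimize(calib_data)
```

Use one option or the other, not both; normalizing twice shifts the input distribution and can itself produce a large distill loss.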