I’m trying to convert a semantic segmentation model to HEF. The conversion works if `num_classes > 1` but not if `num_classes = 1`. Here’s a simple script to reproduce:
```python
import numpy as np
import onnx
import segmentation_models_pytorch as smp
import torch
import torchvision
import timm
from hailo_sdk_client import ClientRunner
from onnxsim import simplify

# WORKS
# model = smp.FPN("resnet18", classes=2)
# model = torchvision.models.resnet18(num_classes=1)
# model = timm.create_model("resnet18", pretrained=True, num_classes=1)

# DOESN'T WORK
model = smp.FPN("resnet18", classes=1)

# Export to ONNX and simplify the graph
torch.onnx.export(model, torch.randn(1, 3, 512, 512), "fpn.onnx")
onnx_model_simplified, check = simplify("fpn.onnx")
onnx.save_model(onnx_model_simplified, "fpn_simplified.onnx")

# Translate, quantize with random calibration data, and compile for Hailo-8L
runner = ClientRunner(hw_arch="hailo8l")
runner.translate_onnx_model("fpn_simplified.onnx", "fpn")
runner.load_model_script("input_normalization1 = normalization([123.675, 116.28, 103.53], [58.395, 57.12, 57.375])\n")
runner.optimize(np.random.randint(0, 255, (128, 512, 512, 3), dtype=np.uint8))
runner.compile()
```
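For anyone reproducing this: a quick sanity check that the simplified ONNX itself is well-formed (a sketch assuming `onnxruntime` is installed; the input name is looked up rather than hard-coded):

```python
# Sanity check: load the simplified model and run one random input to
# confirm the ONNX graph itself is valid before handing it to the DFC.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("fpn_simplified.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
out = sess.run(None, {input_name: np.random.randn(1, 3, 512, 512).astype(np.float32)})
print(out[0].shape)  # expected (1, 1, 512, 512) for classes=1
```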
The error I get:
```
[error] Mapping Failed (allocation time: 1m 20s)
Compiler could not find a valid partition to contexts. Most common error is: Automri finished with too many resources on context_2 with 24/88 failures.
[error] Failed to produce compiled graph
[error] BackendAllocatorException: Compilation failed: Compiler could not find a valid partition to contexts. Most common error is: Automri finished with too many resources on context_2 with 24/88 failures.
```
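Since `classes` only changes the `out_channels` of the FPN’s final head convolution, the two exports should otherwise be identical graphs. A quick way to confirm that (a sketch using the `onnx` package; the two filenames are placeholders for the `classes=1` and `classes=2` exports, and the weight is assumed to be a graph initializer, as it is after onnxsim):

```python
# Compare the head width of the two exports: the last Conv's weight has
# shape (out_channels, in_channels, kH, kW), so out_channels should be the
# only difference between the two graphs.
import onnx

for path in ("fpn_classes1_simplified.onnx", "fpn_classes2_simplified.onnx"):
    m = onnx.load(path)
    last_conv = [n for n in m.graph.node if n.op_type == "Conv"][-1]
    w = next(i for i in m.graph.initializer if i.name == last_conv.input[1])
    print(path, "head weight shape:", list(w.dims))
```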
As I mentioned, setting `classes=2` does work. I tried this both on my laptop (Arch Linux) and in the DFC Docker container; both show the same error.
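For completeness, a possible workaround I can think of (an untested sketch, not a confirmed fix) is to wrap the model so the head emits two identical channels, since the two-class export compiles, and discard the duplicate channel on the host:

```python
# Untested workaround sketch: duplicate the single output channel so the
# exported graph matches the classes=2 shape that compiles; keep only
# channel 0 on the host after inference.
import torch
import segmentation_models_pytorch as smp

class PaddedFPN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.model = smp.FPN("resnet18", classes=1)

    def forward(self, x):
        out = self.model(x)                  # (N, 1, H, W)
        return torch.cat([out, out], dim=1)  # (N, 2, H, W)

torch.onnx.export(PaddedFPN(), torch.randn(1, 3, 512, 512), "fpn_padded.onnx")
```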