Dear Hailo Community / Support Team,
I’m trying to compile the Swin-based classifier from Hugging Face as referenced in the Hailo Model Zoo config.
To reproduce the error, see the attached zip (https://artifacts.iris-sensing.com/web/client/pubshares/Ktht5AhekvsTBxcUAwsyUn/download) and run:
$ python3 run_hf.py
$ hailo parser onnx swin-tiny-patch4-window7-224_single.onnx --hw-arch hailo8
I downloaded the source model you refer to, but the ONNX I export differs from the ONNX provided by Hailo (see attached onnx_diff.png). Because of this mismatch, the compilation fails (see attached error.log).
Could you please share the exact recipe you used to generate the “compilable” ONNX for this network? Specifically:
the Hugging Face model identifier / exact revision (commit/tag)
preprocessing / input assumptions (size, normalization, layout)
export steps (PyTorch → ONNX), including opset, dynamic axes settings, and any graph simplifications
any patches or custom modifications applied prior to export
the exact command(s) used to compile the ONNX into a .hef
If there is a script or reference export notebook used to produce the Model Zoo ONNX, sharing that would also resolve the issue.
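For context, I am currently assuming the standard ImageNet preprocessing for this checkpoint (resize to 224×224, scale to [0, 1], normalize with ImageNet mean/std, NCHW layout). A minimal numpy sketch of what I feed the exported model, in case the Model Zoo recipe differs:

```python
import numpy as np

# Assumed preprocessing constants (standard ImageNet mean/std; please correct
# me if the Model Zoo recipe for swin-tiny-patch4-window7-224 uses others).
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(img_hwc_uint8: np.ndarray) -> np.ndarray:
    """uint8 HWC image (already resized to 224x224x3) -> float32 NCHW tensor."""
    x = img_hwc_uint8.astype(np.float32) / 255.0   # scale to [0, 1]
    x = (x - IMAGENET_MEAN) / IMAGENET_STD         # per-channel normalization
    x = np.transpose(x, (2, 0, 1))                 # HWC -> CHW
    return x[None, ...]                            # add batch dim -> 1x3x224x224
```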
Thanks in advance,
Ruben
KlausK
February 6, 2026, 10:08am
We replicated the issue and have asked our R&D colleagues to look into it.
Nadav
February 8, 2026, 12:49pm
Hi @Ruben_Khachaturyan,
Please try the script below; it works for me on DFC 3.33.0. The trick is exporting with opset 15 and simplifying the ONNX:
import onnx
import torch
from onnx import external_data_helper
from onnxsim import simplify
from transformers import AutoModelForImageClassification

opset_version = 15
onnx_path = f"swin-tiny-patch4-window7-224_{opset_version}.onnx"

# ====== LOAD MODEL
model = AutoModelForImageClassification.from_pretrained(
    "microsoft/swin-tiny-patch4-window7-224"
)
model.eval()

# ====== EXPORT TO ONNX
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model,
    dummy,
    onnx_path,
    export_params=True,
    training=torch.onnx.TrainingMode.PRESERVE,
    input_names=["input"],
    output_names=["output"],
    opset_version=opset_version,
    do_constant_folding=False,
    dynamo=False,
)

# ====== SIMPLIFY ONNX
onnx_model = onnx.load(onnx_path)
# Provide input shape hints so onnxsim can reason well about shapes
simplified_model, check = simplify(
    onnx_model,
    input_shapes={"input": [1, 3, 224, 224]},
)
if not check:
    raise RuntimeError("ONNX simplification check failed")
onnx.save(simplified_model, onnx_path)

# ====== (OPTIONAL) Convert external data -> embedded
# Only needed if your export produced external tensor files
# and you want everything in one .onnx
m = onnx.load(onnx_path, load_external_data=True)
external_data_helper.convert_model_from_external_data(m)
onnx.save(m, onnx_path)

print(f"✅ Exported + simplified ONNX saved to: {onnx_path}")
Dear Nadav, it totally worked! Thank you very much!