Deployment of Dinov2 model from torch hub

Dear Hailo-Team,
I am currently looking into the deployment of the Dinov2_vits14 model provided by torch hub (torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")). Converting the .onnx file into the .har representation worked, as did the quantization of the model. However, the compilation step into the .hef file fails during execution of the following code snippet:

from hailo_sdk_client import ClientRunner

# Compile into .hef format based on the quantized .har file
onnx_model_name = "dino"
quantized_model_har_path = f"{onnx_model_name}_quantized_model.har"
runner = ClientRunner(har=quantized_model_har_path)

print("Start compiling into hef")
hef = runner.compile()

# Write the compiled HEF to its own file instead of overwriting the .har
hef_path = f"{onnx_model_name}.hef"
with open(hef_path, "wb") as f:
    f.write(hef)
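
For reference, the steps that produced the quantized .har were along the lines of the following sketch (based on the standard Dataflow Compiler Python API; the hw_arch, input resolution, and random calibration data here are placeholders rather than my exact script):

import numpy as np
from hailo_sdk_client import ClientRunner

onnx_model_name = "dino"

# Translate the ONNX export of dinov2_vits14 into a .har representation
runner = ClientRunner(hw_arch="hailo8")  # placeholder target
runner.translate_onnx_model(f"{onnx_model_name}.onnx", onnx_model_name)
runner.save_har(f"{onnx_model_name}_hailo_model.har")

# Quantize using a calibration set (random NHWC data here as a stand-in)
calib_data = np.random.randint(0, 255, (64, 224, 224, 3)).astype(np.float32)
runner.optimize(calib_data)
runner.save_har(f"{onnx_model_name}_quantized_model.har")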
The error was as follows:

[error traceback screenshot omitted]
Do you have any advice to circumvent/fix this error? Thanks in advance!

Hey @vicy.mangold,

Welcome to the Hailo Community!

Here are the key steps to resolve the issue:

  1. Verify parameter names: The error indicates that the layer name "normalization_nudge_layer_normalization" is not valid. Check the Dataflow Compiler documentation for the correct names.
  2. Use valid names: The error message itself lists the valid options, such as "normalization1", "normalization2", "conv1", or "normalization_nudge_layer_normalization1". Update your model script accordingly.
  3. Update the naming convention: If you are using an older or incorrect format, replace it with one of the valid names (e.g. "normalization1" instead of "normalization_nudge_layer_normalization").
  4. Set the optimization level: Set compiler_optimization_level to max by adding performance_param(compiler_optimization_level=max) to your model script, as in the sketch after this list.
  5. Consider the trade-offs: Setting the optimization level to max can improve inference performance, but it also increases compilation time, so choose the level based on your performance and time requirements.
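
Putting steps 2 and 4 together, here is a minimal sketch of a corrected flow (assuming the same file names as in your post; the normalization values are placeholders for whatever preprocessing your model actually needs):

import numpy as np
from hailo_sdk_client import ClientRunner

# Start from the translated .har and apply a corrected model script:
# a valid layer name ("normalization1") plus maximum compiler optimization
runner = ClientRunner(har="dino_hailo_model.har")
model_script = (
    "normalization1 = normalization([123.675, 116.28, 103.53], [58.395, 57.12, 57.375])\n"
    "performance_param(compiler_optimization_level=max)\n"
)
runner.load_model_script(model_script)

# Re-run quantization with the corrected script, then compile to .hef
calib_data = np.random.randint(0, 255, (64, 224, 224, 3)).astype(np.float32)
runner.optimize(calib_data)

hef = runner.compile()
with open("dino.hef", "wb") as f:
    f.write(hef)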

By aligning parameter names with the valid options and properly configuring the optimization level, you can resolve this issue effectively.