Has anyone already worked with quantizing SiamRPN for Hailo-8? Are there perhaps any guides available?
Hey @An_ti11,
We don't officially support SiamRPN, but here's a basic guide to get started with any custom model on Hailo-8:
Setup and Compile SiamRPN
- **Export your PyTorch model to ONNX**
  Use `torch.onnx.export()` with proper input/output names; a minimal sketch follows.
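Something along these lines usually works for the export. The `SiamRPN` import, checkpoint path, crop sizes (127 template / 255 search) and output names are placeholders for whatever your implementation actually uses:

```python
import torch

from my_tracker import SiamRPN  # placeholder: your own SiamRPN implementation

model = SiamRPN()
model.load_state_dict(torch.load("siamrpn.pth", map_location="cpu"))  # placeholder path
model.eval()

# SiamRPN takes a template (exemplar) crop and a search crop; sizes are typical, not required.
template = torch.randn(1, 3, 127, 127)
search = torch.randn(1, 3, 255, 255)

torch.onnx.export(
    model,
    (template, search),
    "siamrpn.onnx",
    input_names=["template", "search"],
    output_names=["cls", "reg"],   # classification and box-regression heads
    opset_version=11,              # adjust to an opset your DFC version supports
    do_constant_folding=True,
)
```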
- **Translate the ONNX model using the Hailo SDK**
  Use `ClientRunner().translate_onnx_model(...)` with the correct input shapes; see the sketch below.
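A parsing sketch, assuming the input/output names from the export step above. Read the real node names and shapes off your own ONNX graph (e.g. in Netron); the exact keyword set and return values of `translate_onnx_model` can differ slightly between DFC versions:

```python
from hailo_sdk_client import ClientRunner

runner = ClientRunner(hw_arch="hailo8")
hn, npz = runner.translate_onnx_model(
    "siamrpn.onnx",
    "siamrpn",
    start_node_names=["template", "search"],
    end_node_names=["cls", "reg"],
    net_input_shapes={"template": [1, 3, 127, 127], "search": [1, 3, 255, 255]},
)
runner.save_har("siamrpn_parsed.har")  # keep the parsed model for the next steps
```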
- **Apply custom quantization parameters**
  Use `quantization_param(...)` to set the precision per layer, and set `quantization_groups` and `force_range_out` as needed; an example model script follows.
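These commands go into a model script that you load through the runner. The layer names below are made up, so take the real ones from the parsed HAR, and which parameters are available (`precision_mode`, `quantization_groups`, `force_range_out`) depends on your DFC version:

```python
# Model-script commands applied before optimization; layer names are placeholders.
alls = """
quantization_param(siamrpn/conv41, precision_mode=a16_w16)
quantization_param(siamrpn/conv45, quantization_groups=4)
quantization_param(siamrpn/conv50, force_range_out=[0.0, 1.0])
"""
runner.load_model_script(alls)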
- **Run model calibration**
  Use representative data and call `model_optimization_config(...)`; see the sketch below.
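A calibration sketch for the two-input case: the DFC should accept a dict keyed by the parsed input-layer names (the keys, sizes and config values below are placeholders, and Hailo expects NHWC data). The random arrays only illustrate shape and dtype; use a few hundred real, preprocessed tracking frames:

```python
import numpy as np

# Keys must match the parsed input layer names from the HAR.
calib_data = {
    "siamrpn/input_layer1": np.random.rand(64, 127, 127, 3).astype(np.float32),
    "siamrpn/input_layer2": np.random.rand(64, 255, 255, 3).astype(np.float32),
}

# model_optimization_config(...) goes into the same model script as the
# quantization_param commands above, e.g.:
#   model_optimization_config(calibration, batch_size=8, calibset_size=64)
runner.optimize(calib_data)
runner.save_har("siamrpn_quantized.har")
```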
- **Run model compilation**
  Use `ClientRunner().compile()`; a short example follows.
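Compilation itself is short: `compile()` returns the serialized HEF, which you write to disk and load with HailoRT on the device:

```python
hef = runner.compile()
with open("siamrpn.hef", "wb") as f:
    f.write(hef)
```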
Tips for SiamRPN
- Ensure cross-correlation (group conv) layers export correctly to ONNX; a quick check is sketched after this list.
- Watch for reshape/transpose issues – they often break shape inference.
- Quantization is tricky in dense and upsampling layers – tune carefully.
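For that first tip, a cheap way to catch a broken export before touching the Hailo toolchain is to validate the graph and compare ONNX Runtime output against PyTorch. `model`, the input sizes and the output names carry over from the export sketch above:

```python
import numpy as np
import onnx
import onnxruntime as ort
import torch

# Structural check of the exported graph (catches invalid group convs, bad reshapes, etc.).
onnx.checker.check_model(onnx.load("siamrpn.onnx"))

template = torch.randn(1, 3, 127, 127)
search = torch.randn(1, 3, 255, 255)
with torch.no_grad():
    torch_cls, torch_reg = model(template, search)  # `model` from the export step

sess = ort.InferenceSession("siamrpn.onnx")
ort_cls, ort_reg = sess.run(None, {"template": template.numpy(), "search": search.numpy()})

# Differences should be near float32 tolerance; anything large points at an export problem.
print("cls max abs diff:", np.abs(torch_cls.numpy() - ort_cls).max())
print("reg max abs diff:", np.abs(torch_reg.numpy() - ort_reg).max())
```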