Hello,
I have an AI Kit attached to a CM5, which is an arm64 architecture. How can I get it working with a custom model I created? As far as I can see, the binaries are listed for x86 and not for arm64.
Welcome to the Hailo Community!
To convert a model from ONNX/TFLite into the Hailo Executable Format (HEF), you need an x86-based machine (the "Model Build Computer") running Ubuntu 20.04 or 22.04 with plenty of RAM. For certain optimizations, a high-performance NVIDIA GPU is recommended.
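Roughly, the build-machine side looks like the sketch below, using the Dataflow Compiler's Python API (hailo_sdk_client). The model file name, target architecture, input shape, and calibration data are placeholders you would replace with your own:

```python
# Minimal sketch: ONNX -> HEF on the x86 Ubuntu build machine (not on the Pi/CM5).
import numpy as np
from hailo_sdk_client import ClientRunner

# Assumptions: "my_model.onnx" is your exported model; use "hailo8l" if your
# module is the Hailo-8L (13 TOPS AI Kit) rather than the Hailo-8.
runner = ClientRunner(hw_arch="hailo8")
runner.translate_onnx_model("my_model.onnx", "my_model")

# Placeholder calibration set: a small batch of representative inputs (NHWC).
calib_data = np.random.randint(0, 255, (64, 224, 224, 3), dtype=np.uint8)
runner.optimize(calib_data)          # quantization / optimization step

hef = runner.compile()               # produces the HEF binary
with open("my_model.hef", "wb") as f:
    f.write(hef)
```

Note that a .pt (PyTorch) model first has to be exported to ONNX (e.g. with torch.onnx.export) before this step.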
Inference can run on a wide range of hosts, including a Raspberry Pi 5 / Compute Module 5 with an AI HAT+ or Hailo-8 M.2 module.
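On the Pi / Compute Module side, inference with the resulting HEF can then be driven from the HailoRT Python API. A rough sketch, assuming HailoRT 4.x and a placeholder input shape:

```python
# Minimal sketch: running a compiled HEF on the Raspberry Pi / CM5 host.
import numpy as np
from hailo_platform import (HEF, VDevice, ConfigureParams, HailoStreamInterface,
                            InputVStreamParams, OutputVStreamParams, InferVStreams)

hef = HEF("my_model.hef")
with VDevice() as target:
    params = ConfigureParams.create_from_hef(hef, interface=HailoStreamInterface.PCIe)
    network_group = target.configure(hef, params)[0]
    in_params = InputVStreamParams.make(network_group)
    out_params = OutputVStreamParams.make(network_group)

    # Placeholder frame; replace with a real preprocessed image of your model's input size.
    input_name = hef.get_input_vstream_infos()[0].name
    frame = np.zeros((1, 224, 224, 3), dtype=np.uint8)

    with InferVStreams(network_group, in_params, out_params) as pipeline:
        with network_group.activate():
            results = pipeline.infer({input_name: frame})

    print({name: out.shape for name, out in results.items()})
```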
I have custom models for my AI jobs; I have the .pt files and need to use the Hailo AI HAT with the Compute Module. How do I make this happen for now?
I do not have 16 GB of RAM, nor do I have an x86-based machine; I only have a Mac M1 Pro. Can you please help me figure out how to make this work?