I was testing my Hailo AI Kit on a Raspberry Pi 5. I was able to verify the setup with `hailortcli run model_name.hef`, but as soon as I try to run more than one model I get an error that no physical devices are available. I can work around this with the `run2` subcommand of `hailortcli`, but not with the Python Hailo API.
Also, I could not find any documentation for the Python API showing how to run multiple models on the Hailo chip. This works with `run2` for testing, but for deployment I need a proper setup. Is there something I am missing in the docs?
To run multiple models on the Hailo-8, use the model scheduler. Create a separate HailoAsyncInference instance for each HEF file, and let the scheduler arbitrate access to the device.
By instantiating this class once per HEF and enabling the HailoSchedulingAlgorithm of your choice, you can effectively run two or more models on the chip concurrently: the scheduler time-slices the single physical device between them, so no model sees a "no physical devices available" error.
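The scheduler setup can be sketched roughly as follows with the `hailo_platform` (pyhailort) package. This is a minimal sketch, not a full pipeline: the HEF filenames are placeholders, and the binding/inference details are elided since they depend on your models' input and output shapes. `HailoAsyncInference` itself is the wrapper class from Hailo's application code examples; underneath it uses a `VDevice` configured like this:

```python
# Minimal sketch: one VDevice with the model scheduler enabled,
# shared by several models. Assumes hailo_platform (pyhailort) is
# installed and two compiled HEFs exist (names are placeholders).
from hailo_platform import VDevice, HailoSchedulingAlgorithm

params = VDevice.create_params()
# Enable the scheduler so multiple models can share one physical device.
params.scheduling_algorithm = HailoSchedulingAlgorithm.ROUND_ROBIN

with VDevice(params) as vdevice:
    # Each HEF gets its own infer model; the scheduler time-slices
    # the chip between them, so both can be active at once.
    model_a = vdevice.create_infer_model("model_a.hef")
    model_b = vdevice.create_infer_model("model_b.hef")
    # ... configure each model, create bindings, and run async
    # inference; the scheduler handles device arbitration ...
```

The key point is that all models must be created from the *same* `VDevice` with a scheduling algorithm set; opening a second `VDevice` without the scheduler is what triggers the "no physical devices available" error.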