Multi-Model Inference on Hailo-8

In the Hailo tutorials there are the following two notebooks:
“HRT_2_Inference_Tutorial_Multi_Process_Service.ipynb”
“HRT_4_Async_Inference_Multiple_Models_Tutorial.ipynb”

What is the difference between these two tutorials?

Another question: let’s say I’m trying to synchronously perform object detection with yolov10 and depth estimation with sc_depth_v2 on one Hailo-8 chip. What general procedure should I follow?

Hey @ShingHin.Hung,

The HRT_2 tutorial demonstrates multi-process handling with a single model, while the HRT_4 tutorial focuses on running multiple models asynchronously.
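Roughly, the two setups differ in how the VDevice is created. A minimal sketch of the two configurations (parameter names as in recent HailoRT Python releases; no HEF is loaded here, and the multi-process service must be running on the host for the first variant):

```python
from hailo_platform import VDevice, HailoSchedulingAlgorithm

# HRT_2 style: several processes share one chip through the HailoRT
# multi-process service. The service requires the scheduler to be enabled.
mp_params = VDevice.create_params()
mp_params.multi_process_service = True
mp_params.scheduling_algorithm = HailoSchedulingAlgorithm.ROUND_ROBIN
mp_params.group_id = "SHARED"  # processes with the same group id share the device

# HRT_4 style: one process owns the device, and the model scheduler
# time-shares the chip between several configured models.
sched_params = VDevice.create_params()
sched_params.scheduling_algorithm = HailoSchedulingAlgorithm.ROUND_ROBIN
```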

If you want to use two models on the same chip and input, you can refer to the scheduler example: Scheduler Example.
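For the yolov10 + sc_depth_v2 case, the general procedure is: compile each model to its own HEF, create a single VDevice with the scheduler enabled, configure one infer model per HEF, and submit async inference jobs; the scheduler switches the chip between the models automatically. Here is a rough sketch along the lines of the HRT_4 tutorial (the HEF file names are placeholders, and uint8 buffer formats are assumed):

```python
import numpy as np
from hailo_platform import VDevice, HailoSchedulingAlgorithm

TIMEOUT_MS = 10000

# One VDevice with the model scheduler enabled; both models share the chip.
params = VDevice.create_params()
params.scheduling_algorithm = HailoSchedulingAlgorithm.ROUND_ROBIN

with VDevice(params) as vdevice:
    # Placeholder HEF paths -- use the HEFs compiled for your networks.
    detection = vdevice.create_infer_model("yolov10.hef")
    depth = vdevice.create_infer_model("sc_depth_v2.hef")

    with detection.configure() as det_cfg, depth.configure() as depth_cfg:
        jobs = []
        for infer_model, configured in ((detection, det_cfg), (depth, depth_cfg)):
            bindings = configured.create_bindings()
            # Dummy buffers; in a real pipeline these hold the preprocessed
            # frame and the raw network outputs (uint8 formats assumed).
            bindings.input().set_buffer(
                np.empty(infer_model.input().shape, dtype=np.uint8))
            for output in infer_model.outputs:
                bindings.output(output.name).set_buffer(
                    np.empty(output.shape, dtype=np.uint8))
            configured.wait_for_async_ready(timeout_ms=TIMEOUT_MS)
            # Non-blocking: the scheduler interleaves the two models on the chip.
            jobs.append(configured.run_async([bindings]))

        for job in jobs:
            job.wait(TIMEOUT_MS)
```

Each model still needs its usual host-side pre- and post-processing (e.g. NMS decoding for the detector); the scheduler only time-shares the chip between the two networks.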

I am following the tutorial and it gives the following error:

Could you please explain the error?

I suggest implementing multi-model usage with the C and C++ APIs provided by HailoRT.

In my experience with the HailoRT Python API on HRT 4.14 and earlier versions, I have run into a few unexplained issues.

I am trying to run “HRT_4_Async_Inference_Multiple_Models_Tutorial.ipynb”.

The tutorial mentions that you need to “Run the notebook inside the Python virtual environment: source hailo_virtualenv/bin/activate”.

I would like to know if I could also run the code in a Docker container.

@ShingHin.Hung, are you using Pi OS or Ubuntu? I am not able to import hailo_platform in the tutorial script.

I am using Ubuntu and have never had that problem. Make sure you have chosen the correct Python interpreter.
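If it helps, a quick way to check from inside the notebook which interpreter is actually in use (plain Python, nothing Hailo-specific assumed beyond the import):

```python
import sys
print(sys.executable)  # should point into hailo_virtualenv/bin/

import hailo_platform
print(hailo_platform.__file__)  # confirms the HailoRT bindings resolve in this interpreter
```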