Best approach to run inference with 2 HEFs

Hello, I want to run inference using two HEFs.

I do not want to run them simultaneously; I want to run them in sequence.

HEF1 → always runs first, and only once
HEF2 → runs multiple times (it could be 3, 4, 10, or 15 times), depending on the value it returns

The important thing is that I always have to wait for the result of each run before starting the next one, so:

HEF1 → HEF2 (iteration 1) → HEF2 (iteration 2) → … etc. → done
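
In pseudo-Python, the control flow I have in mind is roughly the following (the three functions below are just placeholders for the real blocking inference calls and for the stop condition, not real API calls):

```python
# Pseudo-code sketch of the flow; all three functions are placeholders.

def run_hef1(data):
    return data                   # placeholder: blocking inference with HEF1

def run_hef2(previous_output):
    return previous_output - 1    # placeholder: blocking inference with HEF2

def needs_another_pass(output):
    return output > 0             # placeholder: decision based on HEF2's returned value

output = run_hef1(3)                  # HEF1: exactly once, result awaited
while needs_another_pass(output):     # number of iterations depends on the returned value
    output = run_hef2(output)         # each run starts only after the previous result is back
```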

What’s the best way to do this? Which APIs do you suggest? I guess the async API isn’t very useful in this case, is it?

Do you have any examples of this kind of pipeline?

Thank you in advance

Hi @user155
We developed PySDK, a wrapper over HailoRT, to make this type of pipeline easy to code. You can see an example of a flow similar to the one you outlined here: Face Detection + Gender Classification: Pipelining two models on Hailo devices
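
To give an idea of the overall shape, here is a minimal sketch of such a loop with PySDK. The zoo URL, token, model names, input, and the `keep_going` stop condition are all placeholders you would replace with your own:

```python
import degirum as dg

# Placeholders -- replace with your own zoo URL, token, and model names
zoo = dg.connect(dg.LOCAL, "<model-zoo-url>", "<token>")
hef1_model = zoo.load_model("<hef1-model-name>")
hef2_model = zoo.load_model("<hef2-model-name>")

frame = "input.jpg"  # predict calls also accept numpy arrays / PIL images

def keep_going(results):
    """Placeholder: derive the stop condition from HEF2's output (a list of result dicts)."""
    return False

# HEF1: a plain (synchronous) call blocks until its result is ready
result1 = hef1_model(frame)

# HEF2: each call starts only after the previous call has returned its result
result2 = hef2_model(frame)
while keep_going(result2.results):
    result2 = hef2_model(frame)
```

Since each predict call is synchronous, the next run never starts before the previous result is back, which matches the ordering you described.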