Poor Performance | Hailo-8L Slower Than iGPU

The model most likely does not fit into a single Hailo-8L and is therefore divided into multiple contexts that are switched at runtime. Run the following command to confirm:

hailortcli parse-hef model.hef

With multi-context models you can increase throughput at the cost of latency by running batches of images, which reduces the context-switching overhead. For example (a small loop to sweep these batch sizes is sketched after the commands):

hailortcli run model.hef --batch-size 2
hailortcli run model.hef --batch-size 4
hailortcli run model.hef --batch-size 8
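
If you want to compare the batch sizes side by side, here is a minimal shell sketch. It only reuses the same hailortcli run command and model.hef file from above in a loop; adjust the list of batch sizes to whatever you want to test.

# Sweep the batch sizes above and keep the full hailortcli output,
# so the reported FPS and latency numbers can be compared per batch size.
for b in 1 2 4 8; do
    echo "=== batch size $b ==="
    hailortcli run model.hef --batch-size "$b"
done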

Let me know what results you get.

This post might be an interesting read as well.

Hailo Community - My model runs slower than expected