I'd like to share with the community how I successfully ran multiple inference pipelines on multiple Hailo-8 chips attached to an NVIDIA Jetson Orin Nano.
With an external power supply, an ordinary Orin Nano can reach up to 144 TOPS: the Orin's own 40 TOPS plus 26 TOPS from each of four Hailo-8 chips. I used a Jetson Orin Nano 8GB (40 TOPS before going “super”) from Lanner. This device exposes a PCIe x4 link, whereas the Raspberry Pi 5 offers only a single PCIe x1 lane. (Imagine a toll station with 100 cars arriving: which configuration moves traffic faster, 4 lanes or 1?) Using an M.2-to-PCIe adapter, I connected a Falcon H8C card carrying 4 Hailo-8 chips to the Orin.
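The toll-station analogy can be put in numbers. A quick back-of-the-envelope calculation, assuming the commonly cited link generations (Gen3 on the Orin Nano's M.2 slot, Gen2 on the Raspberry Pi 5 connector); the per-lane raw rates and encoding overheads come from the PCIe specifications:

```python
# Rough PCIe link-bandwidth comparison.
# Assumed generations: Gen3 (Orin Nano M.2), Gen2 (Raspberry Pi 5).
# Per-lane raw rates: Gen2 = 5 GT/s with 8b/10b encoding,
#                     Gen3 = 8 GT/s with 128b/130b encoding.

def lane_gbps(gt_per_s: float, payload_fraction: float) -> float:
    """Usable GB/s for one lane after encoding overhead."""
    return gt_per_s * payload_fraction / 8  # divide by 8 bits per byte

gen2_x1 = 1 * lane_gbps(5, 8 / 10)     # Raspberry Pi 5: ~0.5 GB/s
gen3_x4 = 4 * lane_gbps(8, 128 / 130)  # Orin Nano:      ~3.94 GB/s

print(f"Gen2 x1: {gen2_x1:.2f} GB/s, Gen3 x4: {gen3_x4:.2f} GB/s "
      f"({gen3_x4 / gen2_x1:.1f}x)")
```

So the x4 link moves roughly 8x the data per second, which matters when several camera streams have to reach several accelerator chips at once.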
I only had 2 USB cameras on my desk, so I also used some YouTube videos of the Taipei metro as input. You can see that the 2 YOLOv8 models used in the inference pipelines for object detection are running on 2 separate Hailo chips.
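The overall structure is one independent pipeline per chip: each worker process opens its own accelerator and runs its own model, so the streams never contend for a device. A minimal sketch of that layout is below; `open_device` and `detect` are hypothetical stand-ins for what a HailoRT-based pipeline would do (open a Hailo device by ID, load a compiled YOLOv8 model, run inference), not real API calls:

```python
# One inference pipeline per accelerator chip, each in its own process.
import multiprocessing as mp

def open_device(device_id: int):
    # Placeholder: on real hardware this would open one Hailo chip by ID
    # and load the compiled YOLOv8 model onto it.
    return {"id": device_id}

def detect(device, frame):
    # Placeholder for a YOLOv8 forward pass on that chip.
    return f"chip{device['id']}: boxes for {frame}"

def pipeline(device_id: int, frames, out: mp.Queue):
    dev = open_device(device_id)     # one chip per worker, no sharing
    for frame in frames:
        out.put(detect(dev, frame))  # results collected on a shared queue

if __name__ == "__main__":
    out = mp.Queue()
    # Two sources (e.g. a USB camera and a video file), two chips.
    sources = {0: ["cam0-f0", "cam0-f1"], 1: ["vid1-f0", "vid1-f1"]}
    workers = [mp.Process(target=pipeline, args=(i, frames, out))
               for i, frames in sources.items()]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    while not out.empty():
        print(out.get())
```

Keeping each chip in its own process also means a crash or stall in one pipeline does not take down the others.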
This leaves the GPU on the Jetson Orin largely idle, ready for other tasks.
If you need more compute and inference capacity, there are Orin modules with 100 TOPS and PCIe cards carrying 6 Hailo chips.