Hello!
I have two AI accelerators, a Hailo-8 (26 TOPS) and a Hailo-8L (13 TOPS). I decided to compare them on my Rockchip RK3588S device over 1 PCIe lane.
I tested with YOLOv8n, which was trained and compiled with the Hailo Model Zoo. Command to compile the ONNX to a HEF:
hailomz compile --ckpt yolov8n_4classes_hailo.onnx --calib-path …/out_images --yaml yolov8.yaml
There was no mention of an accelerator version (Hailo-8 or Hailo-8L). The model runs on both devices, and the hailortcli run command shows the same FPS for both:
hailortcli run output:
Running streaming inference (models/angel/yolov8n_4classes_hailo.hef):
Transform data: true
Type: auto
Quantized: true
Network yolov8n/yolov8n: 100% | 5379 | FPS: 179.26 | ETA: 00:00:00
Is it OK that the FPS is the same on both devices (13 and 26 TOPS)? Or did I make a mistake during the export to HEF? What is the difference in model inference between the Hailo-8 and the Hailo-8L?
It sounds like the model was compiled for the Hailo-8L and then run on both devices, which would explain why you're seeing the same FPS on the Hailo-8 and the Hailo-8L: a HEF built for the Hailo-8L can run on the Hailo-8, but it only uses the smaller device's resources. Each model should be compiled specifically for its target architecture, as the Hailo-8 (26 TOPS) and Hailo-8L (13 TOPS) differ significantly in compute power. Additionally, performance can be limited by the PCIe configuration: if you're using only one PCIe lane, throughput may be capped by that single lane's bandwidth rather than by the accelerator.
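To check this, you can inspect which architecture the existing HEF was compiled for and then recompile explicitly for each target. A sketch of the commands, assuming the `hailortcli parse-hef` subcommand and the Model Zoo's `--hw-arch` flag (verify both against your installed HailoRT / hailo_model_zoo versions, as options can differ between releases):

```shell
# Inspect the existing HEF; the output should state the target
# architecture it was compiled for (e.g. HAILO8 or HAILO8L)
hailortcli parse-hef models/angel/yolov8n_4classes_hailo.hef

# Recompile the same ONNX explicitly for each architecture,
# reusing the paths from your original command
hailomz compile --ckpt yolov8n_4classes_hailo.onnx \
    --calib-path …/out_images \
    --yaml yolov8.yaml \
    --hw-arch hailo8

hailomz compile --ckpt yolov8n_4classes_hailo.onnx \
    --calib-path …/out_images \
    --yaml yolov8.yaml \
    --hw-arch hailo8l
```

Running the Hailo-8-specific HEF on the Hailo-8 should then show a higher FPS than the Hailo-8L build, provided the PCIe link is not the bottleneck.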