Pipeline Support for HailoRT 4.15 (October 2023)

Hi Guys,

I am currently using a Jupyter Notebook Python environment on my Ubuntu Core snap-based embedded system, with the HailoRT PCIe driver 4.15, HailoRT 4.15, and the matching Python wheel.

I found an older compiled HEF model (yolov8n) online that was trained with a Dataflow Compiler version compatible with my runtime, and it works with the parse-hef and benchmark commands:

root@ctrlX-CORE:/var/snap/python-ai-toolkit/common/solutions/activeConfiguration/jupyterlab# hailortcli parse-hef /home/rexroot/yolov8ntest.hef
Architecture HEF was compiled for: HAILO8
Network group name: yolov8n, Single Context
Network name: yolov8n/yolov8n
VStream infos:
Input yolov8n/input_layer1 UINT8, NHWC(640x640x3)
Output yolov8n/conv41 UINT8, FCR(80x80x64)
Output yolov8n/conv42 UINT8, FCR(80x80x80)
Output yolov8n/conv52 UINT8, FCR(40x40x64)
Output yolov8n/conv53 UINT8, FCR(40x40x80)
Output yolov8n/conv62 UINT8, NHWC(20x20x64)
Output yolov8n/conv63 UINT8, FCR(20x20x80)
root@ctrlX-CORE:/var/snap/python-ai-toolkit/common/solutions/activeConfiguration/jupyterlab#

root@ctrlX-CORE:/var/snap/python-ai-toolkit/common/solutions/activeConfiguration/jupyterlab# hailortcli benchmark /home/rexroot/yolov8ntest.hef
Starting Measurements…
Measuring FPS in hw_only mode
Network yolov8n/yolov8n: 100% | 3050 | FPS: 203.30 | ETA: 00:00:00
Measuring FPS and Power in streaming mode
[HailoRT] [warning] Using the overcurrent protection dvm for power measurement will disable the overcurrent protection.
If only taking one measurement, the protection will resume automatically.
If doing continuous measurement, to enable overcurrent protection again you have to stop the power measurement on this dvm.
Network yolov8n/yolov8n: 100% | 3053 | FPS: 203.48 | ETA: 00:00:00
Measuring HW Latency
Network yolov8n/yolov8n: 100% | 1629 | HW Latency: 6.46 ms | ETA: 00:00:00

=======
Summary
=======
FPS (hw_only) = 203.307
(streaming) = 203.489
Latency (hw) = 6.45791 ms
Device 0000:01:00.0:
Power in streaming mode (average) = 1.38702 W
(max) = 1.39247 W
root@ctrlX-CORE:/var/snap/python-ai-toolkit/common/solutions/activeConfiguration/jupyterlab#

root@ctrlX-CORE:/var/snap/python-ai-toolkit/common/solutions/activeConfiguration/jupyterlab# hailortcli run /home/rexroot/yolov8ntest.hef
Running streaming inference (/home/rexroot/yolov8ntest.hef):
Transform data: true
Type: auto
Quantized: true
Network yolov8n/yolov8n: 100% | 1017 | FPS: 203.30 | ETA: 00:00:00

Inference result:
Network group: yolov8n
Frames count: 1017
FPS: 203.32
Send Rate: 1998.67 Mbit/s
Recv Rate: 1967.44 Mbit/s

So I assume my runtime environment and model are set up correctly, but I keep running into errors when trying to create a pipeline that uses a webcam. The GitHub repositories are hard to navigate because of the different versions, different postprocessing, and the different options across the Model Zoo, TAPPAS, and app-framework repos. I need help finding the right source for my specific runtime. I use this for the input frames:

import cv2
from flask import Flask, Response

# ---------- Camera Setup ----------
cam = cv2.VideoCapture("/dev/video0", cv2.CAP_V4L2)

and I'm using Flask as a web server to act as a sink for the output video, which should be overlaid with the bounding boxes.

Do you know where exactly I can find the relevant code examples for inference using the old hailo_platform module and for postprocessing the yolov8n model?

That would be really helpful.

Hey @arinjay.agrawal,

For your version I would recommend using Hailo-Application-Code-Examples/runtime/hailo-8/python/object_detection at main · hailo-ai/Hailo-Application-Code-Examples · GitHub, but you will need to adapt the Python API calls to fit the version you are using.
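For reference, here is a sketch of the overall shape of the 4.x-era hailo_platform API together with a generic YOLOv8 DFL box decoder for one output scale. Treat both as sketches to check against your installed wheel and the example repo, not drop-in code: `infer_frame` and `decode_boxes` are names I made up, and the hailo_platform import is guarded so the decoder can be read and tested without the wheel installed.

```python
import numpy as np

try:
    from hailo_platform import (HEF, VDevice, ConfigureParams,
                                HailoStreamInterface, InferVStreams,
                                InputVStreamParams, OutputVStreamParams,
                                FormatType)

    def infer_frame(hef_path, frame):
        """Run one NHWC uint8 frame through the HEF; returns raw output maps."""
        hef = HEF(hef_path)
        with VDevice() as target:
            params = ConfigureParams.create_from_hef(
                hef, interface=HailoStreamInterface.PCIe)
            network_group = target.configure(hef, params)[0]
            in_params = InputVStreamParams.make(
                network_group, format_type=FormatType.UINT8)
            out_params = OutputVStreamParams.make(
                network_group, format_type=FormatType.FLOAT32)
            input_name = hef.get_input_vstream_infos()[0].name
            with InferVStreams(network_group, in_params, out_params) as pipeline:
                with network_group.activate(network_group.create_params()):
                    # infer() expects a leading batch dimension
                    return pipeline.infer({input_name: frame[None, ...]})
except ImportError:
    pass  # hailo_platform wheel not installed; decode_boxes below still works

def decode_boxes(reg, stride=8, reg_max=16):
    """Decode one DFL regression map of shape (H, W, 4*reg_max) to xyxy boxes.

    This matches the 64-channel conv outputs in the parse-hef listing above:
    4 box sides x reg_max distribution bins per grid cell.
    """
    h, w, _ = reg.shape
    reg = reg.reshape(h, w, 4, reg_max).astype(np.float32)
    # softmax over the reg_max bins, then take the expected distance
    e = np.exp(reg - reg.max(axis=-1, keepdims=True))
    dist = (e / e.sum(axis=-1, keepdims=True)) @ np.arange(reg_max, dtype=np.float32)
    cx = (np.arange(w) + 0.5)[None, :]
    cy = (np.arange(h) + 0.5)[:, None]
    x1 = (cx - dist[..., 0]) * stride
    y1 = (cy - dist[..., 1]) * stride
    x2 = (cx + dist[..., 2]) * stride
    y2 = (cy + dist[..., 3]) * stride
    return np.stack([x1, y1, x2, y2], axis=-1)
```

The matching 80-channel outputs are per-class logits (sigmoid for scores), and the strides for the 80x80 / 40x40 / 20x20 scales are 8 / 16 / 32. You would still apply score thresholding and NMS across all three scales afterwards.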