HEF model FPS on different devices

Hi everyone,

I have a question. I've been running tests with my HEF model on my Raspberry Pi 5 and on my Ubuntu desktop (Dell Precision 3660 workstation), both with a Hailo-8L NPU module, using a camera and the following inference code:

import degirum as dg
import degirum_tools
import degirum_tools.streams as dgstreams

import os

# Limit the number of threads used for parallel operations
#os.environ["OMP_NUM_THREADS"] = "2"
#os.environ["OPENBLAS_NUM_THREADS"] = "2"
#os.environ["MKL_NUM_THREADS"] = "2"
#os.environ["VECLIB_MAXIMUM_THREADS"] = "2"
#os.environ["NUMEXPR_NUM_THREADS"] = "2"


inference_host_address = "@local"

# choose zoo_url

#desktop
zoo_url = "/home/gabriel/Desktop/rasp/hailo-rpi5-examples/resources/best_22-01_01_i640.json"

#raspberry
#zoo_url = "/home/pi/Desktop/hailo-rpi5-examples/resources/best_22-01_01_i640.json"

# set token
#token = degirum_tools.get_token()
token = ''  # leave empty for local inference

#webcams
source1 = 0 # Webcam index
source2 = 2 # Webcam index

#videos desktop 
source3 = "/home/gabriel/Desktop/rasp/hailo-rpi5-examples/resources/11_29_2024_11_50_00_cut1.avi"  # Video file
source4 = "/home/gabriel/Desktop/rasp/hailo-rpi5-examples/resources/11_29_2024_11_50_00_cut2.avi"  # Video file
source5 = "/home/gabriel/Desktop/rasp/hailo-rpi5-examples/resources/11_29_2024_11_50_00.avi"  # Video file

#videos rasp
#source3 = "/home/pi/Desktop/hailo-rpi5-examples/resources/11_29_2024_11_50_00_cut1.avi"  # Video file
#source4 = "/home/pi/Desktop/hailo-rpi5-examples/resources/11_29_2024_11_50_00_cut2.avi"  # Video file
#source5 = "/home/pi/Desktop/hailo-rpi5-examples/resources/11_29_2024_11_50_00.avi"  # Video file

# Define the configurations for video file and webcam
configurations = [
    {
        "model_name": "best_22-01_01_i640",
        "source":source5,
        "display_name": "Video/Cam-1",
    },
    #{
    #    "model_name": "best_22-01_01_i640",
    #    "source": source2,
    #    "display_name": "Video/Cam-2",
    #},
]

# load models
models = [
    dg.load_model(cfg["model_name"], inference_host_address, zoo_url, token)
    for cfg in configurations
]

# define gizmos
sources = [dgstreams.VideoSourceGizmo(cfg["source"]) for cfg in configurations]
detectors = [dgstreams.AiSimpleGizmo(model) for model in models]
display = dgstreams.VideoDisplayGizmo(
    [cfg["display_name"] for cfg in configurations], show_ai_overlay=True, show_fps=True
)

# create pipeline
pipeline = (
    (source >> detector for source, detector in zip(sources, detectors)),
    (detector >> display[di] for di, detector in enumerate(detectors)),
)

# start composition
dgstreams.Composition(*pipeline).start()

My HEF model works well on both devices, but I noticed that the FPS metric never exceeds 30 FPS.

So my question is: does anyone know why the FPS does not go above 30, or have a guess? Is it an NPU limitation, something from the model conversion, a hardware limitation (Raspberry Pi and desktop), or something else entirely?

Thank you!

Best regards

Hi @gabriel.freire
If you are using a camera, you are limited by the FPS of the camera.
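
As a quick sanity check, independent of the model, you can measure the rate at which frames actually arrive from the camera with OpenCV (a rough sketch, using the same webcam index as in your script):

import time
import cv2

cap = cv2.VideoCapture(0)  # same webcam index as source1 in your script
reported_fps = cap.get(cv2.CAP_PROP_FPS)  # what the driver reports

# time how long it takes to actually pull a fixed number of frames
n_frames, count = 120, 0
start = time.time()
while count < n_frames:
    ok, _ = cap.read()
    if not ok:
        break
    count += 1
elapsed = time.time() - start
cap.release()

print(f"Reported FPS: {reported_fps:.1f}")
print(f"Measured FPS: {count / elapsed:.1f}")

If the measured value is already around 30, the camera (or the pixel format it negotiated) is the bottleneck before the model is even involved.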

Hi @shashi
Thank you for your answer. I tried setting my camera to 60 FPS by reducing the resolution. I used the following to configure it:

(venv) [gabriel@gabriel-Precision-3660 hailo-rpi5-examples]$ v4l2-ctl --list-devices
EMEET SmartCam S600: EMEET Smar (usb-0000:00:14.0-1):
	/dev/video0
	/dev/video1
	/dev/media0

(venv) [gabriel@gabriel-Precision-3660 hailo-rpi5-examples]$ v4l2-ctl --device=/dev/video0 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Type: Video Capture

	[0]: 'MJPG' (Motion-JPEG, compressed)
		Size: Discrete 3840x2160
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 2560x1440
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1920x1080
			Interval: Discrete 0.017s (60.000 fps)
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1280x960
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1280x720
			Interval: Discrete 0.017s (60.000 fps)
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1024x576
			Interval: Discrete 0.017s (60.000 fps)
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 960x720
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 800x600
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 640x480
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 640x360
			Interval: Discrete 0.017s (60.000 fps)
			Interval: Discrete 0.033s (30.000 fps)
	[1]: 'YUYV' (YUYV 4:2:2)
		Size: Discrete 640x480
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 640x360
			Interval: Discrete 0.033s (30.000 fps)
(venv) [gabriel@gabriel-Precision-3660 hailo-rpi5-examples]$ v4l2-ctl --device=/dev/video0 --set-fmt-video=width=1920,height=1080,pixelformat=MJPG
(venv) [gabriel@gabriel-Precision-3660 hailo-rpi5-examples]$ v4l2-ctl --device=/dev/video0 --set-parm=60
Frame rate set to 60.000 fps
(venv) [gabriel@gabriel-Precision-3660 hailo-rpi5-examples]$ v4l2-ctl --device=/dev/video0 --get-fmt-video
Format Video Capture:
	Width/Height      : 1920/1080
	Pixel Format      : 'MJPG' (Motion-JPEG)
	Field             : None
	Bytes per Line    : 0
	Size Image        : 2073600
	Colorspace        : sRGB
	Transfer Function : Rec. 709
	YCbCr/HSV Encoding: ITU-R 601
	Quantization      : Default (maps to Full Range)
	Flags             : 

But when I run the code I still only get 30 FPS, as you can see in the image:

So, I understand that the camera FPS can be the limiter, and I agree with that, but in my situation, as you can see, that isn't the case.

About this, do you have another guess? As I asked in my first question, could it be an NPU limitation, something from the model conversion (when I converted my model), a hardware limitation, or something else?
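
One more thing I plan to rule out (just a guess on my part): maybe the settings applied with v4l2-ctl are renegotiated when the script reopens /dev/video0. So I will also try requesting MJPG at 1920x1080 and 60 FPS directly through OpenCV and print what the driver actually accepts:

import cv2

cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
# request MJPG first; the YUYV modes of this camera only go up to 30 FPS
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
cap.set(cv2.CAP_PROP_FPS, 60)

print("Negotiated FPS :", cap.get(cv2.CAP_PROP_FPS))
print("Negotiated size:", int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), "x",
      int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
cap.release()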

Thank you for your attention!

Best regards

Hi @gabriel.freire
If you can share your HEF file, we can benchmark it and provide you with a script that shows the max FPS the model can achieve.
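
In the meantime, here is a rough sketch of a standalone measurement you can run yourself: feed the model dummy frames so that no camera or video decoding is involved (I am assuming 640x640 input from the model name; adjust to your actual input size):

import time
import numpy as np
import degirum as dg

zoo_url = "/home/gabriel/Desktop/rasp/hailo-rpi5-examples/resources/best_22-01_01_i640.json"
model = dg.load_model("best_22-01_01_i640", "@local", zoo_url, "")

# dummy frames: removes camera/video-file FPS from the equation
frames = [np.zeros((640, 640, 3), dtype=np.uint8)] * 300

start = time.time()
for _ in model.predict_batch(frames):  # pipelined inference over the batch
    pass
elapsed = time.time() - start

print(f"Raw model throughput: {len(frames) / elapsed:.1f} FPS")

If this number is well above 30, the limit is coming from the video source or the display loop rather than from the HEF/NPU.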

Hi @shashi

Thanks for your support. I created a repository for my model; here is the link with the model, the config files, and my inference code:

https://github.com/freiregc/hef_model.git

If you need something else please tell me, thank you!

Best regards

Hi @gabriel.freire
Thanks. We will analyze it and keep you posted.