Hi @AbnerDC
Can you please clarify what you mean by use OPENCV instead?
Thanks in advance for your answer.
Sure. In a previous project I used cv2 to draw the bounding box and put the age and gender at the top of the face box. I want to achieve the same behavior using DeGirum.
Hi @AbnerDC
Our PySDK also uses OpenCV to draw the image overlay. The inference result object has all the information you need to write your own overlay code. Maybe I am not fully understanding your use case; can you please explain a bit more?
We are looking to identify facial emotion like deepface does, where it returns an emotion such as angry, fear, neutral, sad, happy, or surprise. Is the best method to try to convert the deepface model to the Hailo executable format, or would you recommend another path?
Hi @user116
Welcome to the Hailo community. In our experience, the emotion model from deepface does not have very high accuracy. Did you check that model on your use case and see if its accuracy is sufficient? We are working on training a better model for this use case.
Hello Shashi. I have been using that model without AI acceleration. Our project is for arts and entertainment, so accuracy is less important than in many other use cases. However, I would be happy with the ability to detect smiles with relative accuracy instead of general emotions. I'm still researching the facial landmarks, but I'm not aware of any easy way to detect smiles with the Hailo models. Do you have any recommendations for smile detection?
Hi @user116
Understood. We will let you know as soon as we have this model ready in our model zoo. Will be glad to get your feedback on its usability for your application.
@user116 note that the CLIP network we already have can support a wide range of classifications. From my experience it can definitely recognize emotions. It is not very accurate, but if it's for artistic purposes it might even open new options for you.
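The zero-shot idea behind this can be sketched in plain NumPy: CLIP embeds the image and one text prompt per candidate emotion (e.g. "A photo of a happy person") into the same space, and classification is just cosine similarity plus argmax. The embeddings below are synthetic placeholders, not real CLIP outputs.

```python
import numpy as np

def classify_zero_shot(image_emb, text_embs, labels):
    """Pick the label whose text embedding has the highest cosine
    similarity to the image embedding (CLIP-style zero-shot classification)."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img  # cosine similarity per candidate label
    return labels[int(np.argmax(sims))]

labels = ["angry", "happy", "neutral", "sad"]
rng = np.random.default_rng(0)
text_embs = rng.normal(size=(4, 512))
# Synthetic "image" embedding close to the "happy" text embedding
image_emb = text_embs[1] + 0.01 * rng.normal(size=512)
print(classify_zero_shot(image_emb, text_embs, labels))  # → happy
```

Swapping the label list (e.g. "person smiling" / "person not smiling") is all it takes to repurpose the same pipeline for smile detection.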
Thanks Giladn. I installed the CLIP project as directed by the Git readme, and the demo opens up the UI controls. I used those to create a .json file with the text "A photo of a: person smiling". I'm running on a Raspberry Pi 5 8GB, so I used --disable-runtime-prompts and specified the --json-path; however, I get a segmentation fault upon running clip_application.py. Is the Raspberry Pi 5 the problem? Any suggestions?
Hi @user116
We now have a YOLO-based emotion classification model. Please see for example: hailo_examples/examples/010_emotion_recognition.ipynb at main · DeGirum/hailo_examples
Many thanks for making this guide!
I get the following error when running the code in Stage 3: Extracting Embeddings:
DegirumException: Failed to perform model 'degirum/models_hailort/arcface_mobilefacenet--112x112_quant_hailort_hailo8l_1' inference: Unable to open connection to cloud server hub.degirum.com: One or more namespaces failed to connect
Hi @Simon_Ho
Are you trying to run the inference locally or using our cloud?
I am using the DG cloud with the DG token set in my env file (via nano).
Hi @Simon_Ho
Can you please run the code snippet below and let me know the full error message?
Please paste your actual token into the code instead of reading it from the environment, so we can eliminate that as a source of error.
import degirum as dg

# Load the face recognition model
face_rec_model = dg.load_model(
    model_name="arcface_mobilefacenet--112x112_quant_hailort_hailo8l_1",
    inference_host_address="@cloud",
    zoo_url="degirum/models_hailort",
    token="<your_token_here>",
)
@shashi I have been looking to test face detection on a Pi 5 with an AI HAT+ (Hailo-8) and a Pi Camera using the DeGirum PySDK. While it runs OK and I see the preview window showing the detected face, it does not appear to be using the NPU. I was wondering if you have any advice, or if I am misreading the situation. Thank you in advance for any help you can provide.
Here are the details:
Program output:
(venv_pi_demo) conorroche@picam1:~/pi-demo $ python degirum_simple.py
[0:04:08.369897242] [1983] INFO Camera camera_manager.cpp:326 libcamera v0.5.0+59-d83ff0a4
[0:04:08.376901719] [2010] INFO RPI pisp.cpp:720 libpisp version v1.2.1 981977ff21f3 29-04-2025 (14:13:50)
[0:04:08.386291847] [2010] INFO RPI pisp.cpp:1179 Registered camera /base/axi/pcie@1000120000/rp1/i2c@88000/ov5647@36 to CFE device /dev/media3 and ISP device /dev/media0 using PiSP variant BCM2712_D0
[0:04:08.389187776] [1983] INFO Camera camera.cpp:1205 configuring streams: (0) 640x480-RGB888 (1) 640x480-GBRG_PISP_COMP1
[0:04:08.389298297] [2010] INFO RPI pisp.cpp:1483 Sensor: /base/axi/pcie@1000120000/rp1/i2c@88000/ov5647@36 - Selected sensor format: 640x480-SGBRG10_1X10 - Selected CFE format: 640x480-PC1g
The CPU spikes to ~120%:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1983 conorro+ 20 0 3407552 340944 173504 S 119.9 4.1 0:29.42 python
774 root 20 0 1106032 24160 18128 S 30.9 0.3 0:06.77 hailort_service
2073 conorro+ 20 0 324496 63808 34400 S 24.6 0.8 0:05.13 python3
2067 conorro+ 20 0 324496 64720 34864 S 21.9 0.8 0:05.01 python3
and hailortcli monitor shows:
Monitor did not retrieve any files. This occurs when there is no application currently running.
If this is not the case, verify that environment variable 'HAILO_MONITOR' is set to 1.
I have set the HAILO_MONITOR env var in the terminal i am running the python script from.
(venv_pi_demo) conorroche@picam1:~/pi-demo $ echo $HAILO_MONITOR
1
(venv_pi_demo) conorroche@picam1:~/pi-demo $ python degirum_simple.py
If I run the model directly:
(venv_pi_demo) conorroche@picam1:~/pi-demo $ hailortcli run .models/scrfd_10g--640x640_quant_hailort_hailo8_1/scrfd_10g--640x640_quant_hailort_hailo8_1.hef -t 30
Running streaming inference (.models/scrfd_10g--640x640_quant_hailort_hailo8_1/scrfd_10g--640x640_quant_hailort_hailo8_1.hef):
Transform data: true
Type: auto
Quantized: true
Network scrfd_10g/scrfd_10g: 23% | 2127 | FPS: 303.54 | ETA: 00:00:2
Then the CPU is fine:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
32645 conorro+ 20 0 2143712 36160 26496 S 47.8 0.4 0:11.92 hailortcli
And I can see it is using the NPU as expected:
conorroche@picam1:~ $ hailortcli monitor
Device ID Utilization (%) Architecture
----------------------------------------------------------------------------------------------------------------------------------------------------------------
0001:01:00.0 100.0 HAILO8
Model Utilization (%) FPS PID
----------------------------------------------------------------------------------------------------------------------------------------------------------------
scrfd_10g 100.0 303.6 32689
Model Stream Direction Frames Queue
Avg Max Min Capacity
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
scrfd_10g scrfd_10g/input_layer1 H2D 0.50 1 0 4
scrfd_10g scrfd_10g/conv58 D2H 0.50 1 0 4
scrfd_10g scrfd_10g/conv56 D2H 0.50 1 0 4
scrfd_10g scrfd_10g/conv51 D2H 0.50 1 0 4
scrfd_10g scrfd_10g/conv50 D2H 0.50 1 0 4
scrfd_10g scrfd_10g/conv57 D2H 0.50 1 0 4
Degirum Sys Info:
(venv_pi_demo) conorroche@picam1:~/pi-demo $ degirum sys-info
Devices:
HAILORT/HAILO8:
- '@Index': 0
Board Name: Hailo-8
Device Architecture: HAILO8
Firmware Version: 4.20.0
ID: '0001:01:00.0'
Part Number: ''
Product Name: ''
Serial Number: ''
N2X/CPU:
- '@Index': 0
TFLITE/CPU:
- '@Index': 0
- '@Index': 1
Software Version: 0.16.2
The Python code I am running is as follows:
import degirum as dg
import degirum_tools
from picamera2 import Picamera2
import matplotlib.pyplot as plt
import numpy as np
import cv2

# Define a frame generator: a function that yields frames from the Picamera2
def frame_generator():
    picam2 = Picamera2()
    # Configure the camera (optional: set the resolution or other settings)
    picam2.configure(picam2.create_preview_configuration({'format': 'RGB888'}))
    # Start the camera
    picam2.start()
    try:
        while True:
            # Capture a frame as a numpy array
            frame = picam2.capture_array()
            # Yield the frame
            yield frame
    finally:
        picam2.stop()  # Stop the camera when the generator is closed

# Specify the model name
face_det_model_name = "scrfd_10g--640x640_quant_hailort_hailo8_1"

# Specify the inference host address
#inference_host_address = "@cloud"  # Use "@cloud" for cloud inference
inference_host_address = "@local"  # Use "@local" for local inference

# Specify the zoo_url
#zoo_url = "degirum/models_hailort"
zoo_url = ".models"  # For local model files

token = ''  # Leave empty for local inference

# Load the face detection model
face_det_model = dg.load_model(
    model_name=face_det_model_name,
    inference_host_address=inference_host_address,
    zoo_url=zoo_url,
    token=token,
    overlay_color=(0, 255, 0)  # Green color for bounding boxes
)

# Process the video stream by the AI model using model.predict_batch():
for result in face_det_model.predict_batch(frame_generator()):
    # Display the frame with AI annotations in a window named 'AI Inference'
    cv2.imshow("AI Inference", result.image_overlay)
    # Process GUI events and break the loop if the 'q' key was pressed
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

# Destroy any remaining OpenCV windows after the loop finishes
cv2.destroyAllWindows()
Hi @Conor_Roche
Welcome to the Hailo community. If the program is running and you can see the detected faces, the model is definitely running on the Hailo NPU. The model is a .hef file that can only run on Hailo devices, not on the CPU. The CPU usage spike comes from the other CPU-intensive tasks in the application code (fetching input from the camera, displaying the output image), which are not present when you run hailortcli. Our team will look into why NPU usage does not show up while running the app, and we will keep you posted. But we are 100% confident that the model runs on the Hailo; it is just a setup issue.
Hello, the only problem here is that the monitor isn't getting any information from the application.
Normally, you need to set the monitor variable in the process of the running application.
(excerpt from HailoRT User Guide)
- In the appās process, set environment variable: HAILO_MONITOR=1
- Run the inference application
However, since your output shows that your multi-process service is running:
(excerpt from HailoRT User Guide)
Setting environment variables when working with HailoRT service
The environment variables for HailoRT service are defined in the file /etc/default/hailort_service:
To change an environment variable value, do the following:
- Change the desired environment variable in /etc/default/hailort_service
- Reload systemd unit files by running: sudo systemctl daemon-reload
- Enable and start service: sudo systemctl enable --now hailort.service
Try this and let me know if it helped.
@Stephan_Sokolov @shashi Thank you for clarifying and for your assistance. Yes, it now shows as expected in the hailortcli monitor after setting HAILO_MONITOR=1 in /etc/default/hailort_service:
[Service]
HAILORT_LOGGER_PATH="/var/log/hailo"
HAILO_MONITOR=1
HAILO_TRACE=0
HAILO_TRACE_TIME_IN_SECONDS_BOUNDED_DUMP=0
HAILO_TRACE_SIZE_IN_KB_BOUNDED_DUMP=0
HAILO_TRACE_PATH=""
Hi @Simon_Ho
Checking to see if you still have issues loading the arcface model.
The face recognition model provided is 'arcface_mobilefacenet', but this isn't the version I want. Is there any chance you could provide 'arcface_r50'?