A Comprehensive Guide to Building a Face Recognition System

Hi @user83
Can you please clarify what you mean by "use OpenCV instead"?

Thanks for your quick answer.
Sure. In a previous project I used cv2 to draw the bounding box and put the age and gender labels at the top of the face box. I want to achieve the same behavior using DeGirum.

Hi @user83
Our PySDK also uses OpenCV to draw the image overlay. The inference result object has all the information you need to write your own overlay code. Maybe I am not fully understanding your use case. Can you please explain a bit more?


We are looking to identify facial emotion like deepface does, where it returns an emotion such as angry, fear, neutral, sad, happy, or surprise. Is the best method to try to convert the deepface model to the Hailo executable format, or would you recommend another path?

Hi @user116
Welcome to the Hailo community. In our experience, the emotion model from deepface does not have very high accuracy. Did you check that model on your use case and see if its accuracy is sufficient? We are working on training a better model for this use case.

Hello Shashi. I have been using that model without the AI acceleration. Our project is for arts and entertainment, so accuracy is less important than in many other use cases. However, I would be happy with the ability to detect smiles with relative accuracy instead of general emotions. I'm still researching the facial landmarks, but I'm not aware of any easy way to detect smiles with the Hailo models. Do you have any recommendations for smile detection?

Hi @user116
Understood. We will let you know as soon as we have this model ready in our model zoo. Will be glad to get your feedback on its usability for your application.


@user116 note that the CLIP network we already have can support a wide range of classifications. In my experience it can definitely recognize emotions. It is not very accurate, but if it's for artistic purposes it might even open new options for you.

Thanks Giladn. I installed the CLIP project as directed by the Git readme, and the demo opens up the UI controls. I used those to create a .json file with the text "A photo of a: person smiling". I'm running on a Raspberry Pi 5 8GB, so I used --disable-runtime-prompts and specified the --json-path; however, I get a segmentation fault when running clip_application.py. Is the Raspberry Pi 5 the problem? Any suggestions?

[Screenshot from 2025-03-13 showing the segmentation fault]

Hi @user116
We now have a YOLO-based emotion classification model. Please see for example: hailo_examples/examples/010_emotion_recognition.ipynb at main · DeGirum/hailo_examples
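If it helps, this is roughly how you would pull the top emotion out of a classification result. The `results` list of dicts with `label`/`score` keys below is mocked to mirror the shape of a PySDK classification result, but treat the exact field names as an assumption and check them against the notebook:

```python
# Hypothetical classification output, shaped like a PySDK
# inference result's .results list (field names are an assumption).
results = [
    {"label": "happy", "score": 0.81},
    {"label": "neutral", "score": 0.12},
    {"label": "sad", "score": 0.07},
]

# Pick the highest-scoring emotion.
top = max(results, key=lambda r: r["score"])
print(top["label"])  # -> happy
```

For your smile-detection use case, you could simply threshold on the "happy" score instead of taking the argmax.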