Hi @kimi_Jhong
Welcome to the Hailo community. arcface_r50 is available in the Hailo model zoo: hailo_model_zoo/docs/public_models/HAILO8/HAILO8_face_recognition.rst at master · hailo-ai/hailo_model_zoo. You can simply replace the model and the code will work.
Hi shashi
I am looking for a face recognition model that provides "arcface_r50". It is used with degirum and degirum_tools, not the one in the Hailo model zoo.
Hi Shashi,
I put the actual token in the code (replaced <your_token_here> with a token from https://hub.degirum.com/tokens), but I still get the same error:
# Face recognition model name
face_rec_model_name = "arcface_mobilefacenet--112x112_quant_hailort_hailo8l_1"

# Load the face recognition model
face_rec_model = dg.load_model(
    model_name=face_rec_model_name,
    #inference_host_address=inference_host_address,
    #zoo_url=zoo_url,
    #token=token
    inference_host_address='@cloud',
    zoo_url='degirum/models_hailort',
    token='<your_token_here>',
)
DegirumException: Failed to perform model 'degirum/models_hailort/arcface_mobilefacenet--112x112_quant_hailort_hailo8l_1' inference: Unable to open connection to cloud server hub.degirum.com: One or more namespaces failed to connect
Hi @Simon_Ho
Thanks for the details. We will investigate. Is this happening only for this model?
Thanks for your response. I can load the model and run the code, for example with face_det_model_name = "scrfd_10g--640x640_quant_hailort_hailo8l_1" in stage 1.
The connection error occurs when I enter stage 3 and load arcface_mobilefacenet--112x112_quant_hailort_hailo8l_1.
For the time being, I can run the code on the rpi5 using a local model.
I would like to run it on a video stream from the rpi camera on the rpi5. How can I do that?
Hi @Simon_Ho
See hailo_examples/examples/016_custom_video_source.ipynb at main · DeGirum/hailo_examples for an illustration on how to run inference on rpi camera.
Hi shashi
The DeGirum model zoo does not provide "arcface_r50"; it currently only supports "arcface_mobilefacenet--112x112_quant_hailort_hailo8l_1". Is there any chance you could provide "arcface_r50"? I tried replacing it, but the result differs from "WebFace600K_pfc_model.onnx".
Hi @kimi_Jhong
We will add the model and keep you posted.
Hi @shashi
I can get the face detection example working fine (scrfd_10g--640x640_quant_hailort_hailo8_1), but as soon as I use the face recognition model (arcface_mobilefacenet--112x112_quant_hailort_hailo8_1), the hailort service crashes/hangs after detecting faces, once a few calls to face_rec_model.predict_batch are made. I reboot, verify the hailort service has started, run the application, and the same thing happens. I have tried switching the code to non-batch mode with a sleep between calls (thinking the rate of requests might be too high), but the same thing happens after several seconds.
I was wondering if you had any suggestions.
I have the following setup:
Device: Raspberry Pi: 5 8GB
Python: 3.11.2
OS: Debian GNU/Linux 12 (bookworm)
HailoRT-CLI version 4.20.0
Firmware Version: 4.20.0
Device Architecture: HAILO8
I am running the models locally.
degirum_facerec.py script output:
conorroche@picam1:~/demo $ python degirum_facerec.py
2025-05-20 15:28:03,328 MainThread - INFO - Local inference with local zoo from '.models' dir
2025-05-20 15:28:03,755 MainThread - INFO - Local inference with local zoo from '.models' dir
[0:11:32.825316978] [1979] INFO Camera camera_manager.cpp:326 libcamera v0.5.0+59-d83ff0a4
[0:11:32.832215318] [2009] INFO RPI pisp.cpp:720 libpisp version v1.2.1 981977ff21f3 29-04-2025 (14:13:50)
[0:11:32.841280321] [2009] INFO RPI pisp.cpp:1179 Registered camera /base/axi/pcie@1000120000/rp1/i2c@88000/ov5647@36 to CFE device /dev/media0 and ISP device /dev/media1 using PiSP variant BCM2712_D0
2025-05-20 15:28:03,836 MainThread - INFO - Initialization successful.
2025-05-20 15:28:03,837 MainThread - INFO - Camera now open.
2025-05-20 15:28:03,840 MainThread - INFO - Camera configuration has been adjusted!
[0:11:32.846184516] [1979] INFO Camera camera.cpp:1205 configuring streams: (0) 640x480-RGB888 (1) 640x480-GBRG_PISP_COMP1
[0:11:32.846300867] [2009] INFO RPI pisp.cpp:1483 Sensor: /base/axi/pcie@1000120000/rp1/i2c@88000/ov5647@36 - Selected sensor format: 640x480-SGBRG10_1X10 - Selected CFE format: 640x480-PC1g
2025-05-20 15:28:03,841 MainThread - INFO - Configuration successful!
2025-05-20 15:28:03,916 MainThread - INFO - Camera started
[HailoRT] [critical] Executing pipeline terminate failed with status HAILO_RPC_FAILED(77)
2025-05-20 15:32:02,430 Thread-2 (thread_func) - INFO - Camera stopped
degirum.exceptions.DegirumException: [ERROR]Timeout detected
Timeout waiting for inference completion
dg_core_runtime.cpp: 89 [DG::CoreRuntimeAsync::wait]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/conorroche/demo/degirum_facerec.py", line 123, in <module>
for face, face_embedding in zip(result.results, face_rec_model.predict_batch(aligned_faces)):
File "/home/conorroche/demo/venv_intercept_alerter/lib/python3.11/site-packages/degirum/model.py", line 290, in predict_batch
for res in self._predict_impl(source):
File "/home/conorroche/demo/venv_intercept_alerter/lib/python3.11/site-packages/degirum/model.py", line 1230, in _predict_impl
raise DegirumException(msg) from saved_exception
degirum.exceptions.DegirumException: Failed to perform model 'arcface_mobilefacenet--112x112_quant_hailort_hailo8_1' inference: [ERROR]Timeout detected
Timeout waiting for inference completion
dg_core_runtime.cpp: 89 [DG::CoreRuntimeAsync::wait]
2025-05-20 15:32:02,832 MainThread - INFO - Camera closed successfully.
[HailoRT] [critical] Executing pipeline terminate failed with status HAILO_RPC_FAILED(77)
[HailoRT] [critical] Failed to finish_listener_thread in VDevice
[HailoRT] [critical] ConfiguredNetworkGroup_release failed with status: HAILO_RPC_FAILED(77)
Here is the output from hailort.log for degirum_facerec.py:
[2025-05-20 15:28:03.482] [1979] [HailoRT] [info] [device.cpp:49] [Device] OS Version: Linux 6.12.25+rpt-rpi-2712 #1 SMP PREEMPT Debian 1:6.12.25-1+rpt1 (2025-04-30) aarch64
[2025-05-20 15:28:03.483] [1979] [HailoRT] [info] [control.cpp:108] [control__parse_identify_results] firmware_version is: 4.20.0
[2025-05-20 15:28:03.484] [1979] [HailoRT] [info] [control.cpp:108] [control__parse_identify_results] firmware_version is: 4.20.0
[2025-05-20 15:28:03.704] [1979] [HailoRT] [info] [device.cpp:49] [Device] OS Version: Linux 6.12.25+rpt-rpi-2712 #1 SMP PREEMPT Debian 1:6.12.25-1+rpt1 (2025-04-30) aarch64
[2025-05-20 15:28:03.706] [1979] [HailoRT] [info] [control.cpp:108] [control__parse_identify_results] firmware_version is: 4.20.0
[2025-05-20 15:28:03.706] [1979] [HailoRT] [info] [control.cpp:108] [control__parse_identify_results] firmware_version is: 4.20.0
[2025-05-20 15:28:03.769] [1979] [HailoRT] [info] [device.cpp:49] [Device] OS Version: Linux 6.12.25+rpt-rpi-2712 #1 SMP PREEMPT Debian 1:6.12.25-1+rpt1 (2025-04-30) aarch64
[2025-05-20 15:28:03.770] [1979] [HailoRT] [info] [control.cpp:108] [control__parse_identify_results] firmware_version is: 4.20.0
[2025-05-20 15:28:03.771] [1979] [HailoRT] [info] [control.cpp:108] [control__parse_identify_results] firmware_version is: 4.20.0
[2025-05-20 15:28:04.278] [1990] [HailoRT] [info] [device.cpp:49] [Device] OS Version: Linux 6.12.25+rpt-rpi-2712 #1 SMP PREEMPT Debian 1:6.12.25-1+rpt1 (2025-04-30) aarch64
[2025-05-20 15:28:04.280] [1990] [HailoRT] [info] [control.cpp:108] [control__parse_identify_results] firmware_version is: 4.20.0
[2025-05-20 15:28:04.281] [1990] [HailoRT] [info] [control.cpp:108] [control__parse_identify_results] firmware_version is: 4.20.0
[2025-05-20 15:28:04.314] [1990] [HailoRT] [info] [vdevice.cpp:523] [create] Creating vdevice with params: device_count: 1, scheduling_algorithm: ROUND_ROBIN, multi_process_service: true
[2025-05-20 15:28:05.376] [1990] [HailoRT] [info] [device.cpp:49] [Device] OS Version: Linux 6.12.25+rpt-rpi-2712 #1 SMP PREEMPT Debian 1:6.12.25-1+rpt1 (2025-04-30) aarch64
[2025-05-20 15:28:05.377] [1990] [HailoRT] [info] [control.cpp:108] [control__parse_identify_results] firmware_version is: 4.20.0
[2025-05-20 15:28:05.407] [1990] [HailoRT] [info] [hef.cpp:1929] [get_network_group_and_network_name] No name was given. Addressing all networks of default network_group: scrfd_10g
[2025-05-20 15:28:05.407] [1990] [HailoRT] [info] [hef.cpp:1929] [get_network_group_and_network_name] No name was given. Addressing all networks of default network_group: scrfd_10g
[2025-05-20 15:28:05.467] [1990] [HailoRT] [info] [infer_model.cpp:436] [configure] Configuring network group 'scrfd_10g' with params: batch size: 0, power mode: ULTRA_PERFORMANCE, latency: NONE
[2025-05-20 15:28:05.470] [1990] [HailoRT] [info] [multi_io_elements.cpp:756] [create] Created (AsyncHwEl)
[2025-05-20 15:28:05.471] [1990] [HailoRT] [info] [queue_elements.cpp:450] [create] Created (EntryPushQEl0scrfd_10g/input_layer1 | timeout: 10s)
[2025-05-20 15:28:05.471] [1990] [HailoRT] [info] [edge_elements.cpp:187] [create] Created (LastAsyncEl6AsyncHwEl)
[2025-05-20 15:28:05.472] [1990] [HailoRT] [info] [queue_elements.cpp:450] [create] Created (PushQEl7AsyncHwEl | timeout: 10s)
[2025-05-20 15:28:05.472] [1990] [HailoRT] [info] [filter_elements.cpp:375] [create] Created (PostInferEl7AsyncHwEl | Reorder - src_order: NHCW, src_shape: (20, 20, 8), dst_order: NHWC, dst_shape: (20, 20, 8))
[2025-05-20 15:28:05.472] [1990] [HailoRT] [info] [edge_elements.cpp:187] [create] Created (LastAsyncEl0PostInferEl7AsyncHwEl)
[2025-05-20 15:28:05.472] [1990] [HailoRT] [info] [edge_elements.cpp:187] [create] Created (LastAsyncEl3AsyncHwEl)
[2025-05-20 15:28:05.473] [1990] [HailoRT] [info] [queue_elements.cpp:450] [create] Created (PushQEl4AsyncHwEl | timeout: 10s)
[2025-05-20 15:28:05.473] [1990] [HailoRT] [info] [filter_elements.cpp:375] [create] Created (PostInferEl4AsyncHwEl | Reorder - src_order: FCR, src_shape: (80, 80, 24), dst_order: FCR, dst_shape: (80, 80, 20))
[2025-05-20 15:28:05.473] [1990] [HailoRT] [info] [edge_elements.cpp:187] [create] Created (LastAsyncEl0PostInferEl4AsyncHwEl)
[2025-05-20 15:28:05.473] [1990] [HailoRT] [info] [edge_elements.cpp:187] [create] Created (LastAsyncEl2AsyncHwEl)
[2025-05-20 15:28:05.473] [1990] [HailoRT] [info] [edge_elements.cpp:187] [create] Created (LastAsyncEl1AsyncHwEl)
[2025-05-20 15:28:05.474] [1990] [HailoRT] [info] [queue_elements.cpp:450] [create] Created (PushQEl8AsyncHwEl | timeout: 10s)
[2025-05-20 15:28:05.474] [1990] [HailoRT] [info] [filter_elements.cpp:375] [create] Created (PostInferEl8AsyncHwEl | Reorder - src_order: FCR, src_shape: (40, 40, 24), dst_order: FCR, dst_shape: (40, 40, 20))
[2025-05-20 15:28:05.474] [1990] [HailoRT] [info] [edge_elements.cpp:187] [create] Created (LastAsyncEl0PostInferEl8AsyncHwEl)
[2025-05-20 15:28:05.474] [1990] [HailoRT] [info] [queue_elements.cpp:450] [create] Created (PushQEl5AsyncHwEl | timeout: 10s)
[2025-05-20 15:28:05.474] [1990] [HailoRT] [info] [filter_elements.cpp:375] [create] Created (PostInferEl5AsyncHwEl | Reorder - src_order: NHCW, src_shape: (20, 20, 2), dst_order: NHWC, dst_shape: (20, 20, 2))
[2025-05-20 15:28:05.474] [1990] [HailoRT] [info] [edge_elements.cpp:187] [create] Created (LastAsyncEl0PostInferEl5AsyncHwEl)
[2025-05-20 15:28:05.475] [1990] [HailoRT] [info] [queue_elements.cpp:450] [create] Created (PushQEl0AsyncHwEl | timeout: 10s)
[2025-05-20 15:28:05.475] [1990] [HailoRT] [info] [filter_elements.cpp:375] [create] Created (PostInferEl0AsyncHwEl | Reorder - src_order: FCR, src_shape: (20, 20, 24), dst_order: FCR, dst_shape: (20, 20, 20))
[2025-05-20 15:28:05.475] [1990] [HailoRT] [info] [edge_elements.cpp:187] [create] Created (LastAsyncEl0PostInferEl0AsyncHwEl)
[2025-05-20 15:28:05.475] [1990] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] EntryPushQEl0scrfd_10g/input_layer1 | inputs: user | outputs: AsyncHwEl(running in thread_id: 2051)
[2025-05-20 15:28:05.475] [1990] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] AsyncHwEl | inputs: EntryPushQEl0scrfd_10g/input_layer1[0] | outputs: PushQEl0AsyncHwEl LastAsyncEl1AsyncHwEl LastAsyncEl2AsyncHwEl LastAsyncEl3AsyncHwEl PushQEl4AsyncHwEl PushQEl5AsyncHwEl LastAsyncEl6AsyncHwEl PushQEl7AsyncHwEl PushQEl8AsyncHwEl
[2025-05-20 15:28:05.475] [1990] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] PushQEl0AsyncHwEl | inputs: AsyncHwEl[0] | outputs: PostInferEl0AsyncHwEl(running in thread_id: 2056)
[2025-05-20 15:28:05.475] [1990] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] PostInferEl0AsyncHwEl | inputs: PushQEl0AsyncHwEl[0] | outputs: LastAsyncEl0PostInferEl0AsyncHwEl
[2025-05-20 15:28:05.475] [1990] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] LastAsyncEl0PostInferEl0AsyncHwEl | inputs: PostInferEl0AsyncHwEl[0] | outputs: user
[2025-05-20 15:28:05.475] [1990] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] LastAsyncEl1AsyncHwEl | inputs: AsyncHwEl[0] | outputs: user
[2025-05-20 15:28:05.475] [1990] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] LastAsyncEl2AsyncHwEl | inputs: AsyncHwEl[0] | outputs: user
[2025-05-20 15:28:05.475] [1990] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] LastAsyncEl3AsyncHwEl | inputs: AsyncHwEl[0] | outputs: user
[2025-05-20 15:28:05.475] [1990] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] PushQEl4AsyncHwEl | inputs: AsyncHwEl[0] | outputs: PostInferEl4AsyncHwEl(running in thread_id: 2053)
[2025-05-20 15:28:05.475] [1990] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] PostInferEl4AsyncHwEl | inputs: PushQEl4AsyncHwEl[0] | outputs: LastAsyncEl0PostInferEl4AsyncHwEl
[2025-05-20 15:28:05.475] [1990] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] LastAsyncEl0PostInferEl4AsyncHwEl | inputs: PostInferEl4AsyncHwEl[0] | outputs: user
[2025-05-20 15:28:05.475] [1990] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] PushQEl5AsyncHwEl | inputs: AsyncHwEl[0] | outputs: PostInferEl5AsyncHwEl(running in thread_id: 2055)
[2025-05-20 15:28:05.475] [1990] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] PostInferEl5AsyncHwEl | inputs: PushQEl5AsyncHwEl[0] | outputs: LastAsyncEl0PostInferEl5AsyncHwEl
[2025-05-20 15:28:05.475] [1990] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] LastAsyncEl0PostInferEl5AsyncHwEl | inputs: PostInferEl5AsyncHwEl[0] | outputs: user
[2025-05-20 15:28:05.475] [1990] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] LastAsyncEl6AsyncHwEl | inputs: AsyncHwEl[0] | outputs: user
[2025-05-20 15:28:05.475] [1990] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] PushQEl7AsyncHwEl | inputs: AsyncHwEl[0] | outputs: PostInferEl7AsyncHwEl(running in thread_id: 2052)
[2025-05-20 15:28:05.475] [1990] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] PostInferEl7AsyncHwEl | inputs: PushQEl7AsyncHwEl[0] | outputs: LastAsyncEl0PostInferEl7AsyncHwEl
[2025-05-20 15:28:05.475] [1990] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] LastAsyncEl0PostInferEl7AsyncHwEl | inputs: PostInferEl7AsyncHwEl[0] | outputs: user
[2025-05-20 15:28:05.475] [1990] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] PushQEl8AsyncHwEl | inputs: AsyncHwEl[0] | outputs: PostInferEl8AsyncHwEl(running in thread_id: 2054)
[2025-05-20 15:28:05.475] [1990] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] PostInferEl8AsyncHwEl | inputs: PushQEl8AsyncHwEl[0] | outputs: LastAsyncEl0PostInferEl8AsyncHwEl
[2025-05-20 15:28:05.475] [1990] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] LastAsyncEl0PostInferEl8AsyncHwEl | inputs: PostInferEl8AsyncHwEl[0] | outputs: user
[2025-05-20 15:28:05.475] [1990] [HailoRT] [info] [hef.cpp:1929] [get_network_group_and_network_name] No name was given. Addressing all networks of default network_group: scrfd_10g
[2025-05-20 15:28:05.475] [1990] [HailoRT] [info] [hef.cpp:1929] [get_network_group_and_network_name] No name was given. Addressing all networks of default network_group: scrfd_10g
[2025-05-20 15:28:39.068] [1990] [HailoRT] [info] [device.cpp:49] [Device] OS Version: Linux 6.12.25+rpt-rpi-2712 #1 SMP PREEMPT Debian 1:6.12.25-1+rpt1 (2025-04-30) aarch64
[2025-05-20 15:28:39.069] [1990] [HailoRT] [info] [control.cpp:108] [control__parse_identify_results] firmware_version is: 4.20.0
[2025-05-20 15:28:39.070] [1990] [HailoRT] [info] [control.cpp:108] [control__parse_identify_results] firmware_version is: 4.20.0
[2025-05-20 15:28:39.082] [1990] [HailoRT] [info] [vdevice.cpp:523] [create] Creating vdevice with params: device_count: 1, scheduling_algorithm: ROUND_ROBIN, multi_process_service: true
[2025-05-20 15:28:40.084] [1990] [HailoRT] [info] [device.cpp:49] [Device] OS Version: Linux 6.12.25+rpt-rpi-2712 #1 SMP PREEMPT Debian 1:6.12.25-1+rpt1 (2025-04-30) aarch64
[2025-05-20 15:28:40.085] [1990] [HailoRT] [info] [control.cpp:108] [control__parse_identify_results] firmware_version is: 4.20.0
[2025-05-20 15:28:40.097] [1990] [HailoRT] [info] [hef.cpp:1929] [get_network_group_and_network_name] No name was given. Addressing all networks of default network_group: arcface_mobilefacenet
[2025-05-20 15:28:40.097] [1990] [HailoRT] [info] [hef.cpp:1929] [get_network_group_and_network_name] No name was given. Addressing all networks of default network_group: arcface_mobilefacenet
**[2025-05-20 15:28:40.122] [1990] [HailoRT] [info] [infer_model.cpp:436] [configure] Configuring network group 'arcface_mobilefacenet' with params: batch size: 0, power mode: ULTRA_PERFORMANCE, latency: NONE**
**[2025-05-20 15:28:40.123] [1990] [HailoRT] [info] [multi_io_elements.cpp:756] [create] Created (AsyncHwEl)**
**[2025-05-20 15:28:40.123] [1990] [HailoRT] [info] [queue_elements.cpp:450] [create] Created (EntryPushQEl0arcface_mobilefacenet/input_layer1 | timeout: 10s)**
**[2025-05-20 15:28:40.124] [1990] [HailoRT] [info] [edge_elements.cpp:187] [create] Created (LastAsyncEl0AsyncHwEl)**
**[2025-05-20 15:28:40.124] [1990] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] EntryPushQEl0arcface_mobilefacenet/input_layer1 | inputs: user | outputs: AsyncHwEl(running in thread_id: 3271)**
**[2025-05-20 15:28:40.124] [1990] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] AsyncHwEl | inputs: EntryPushQEl0arcface_mobilefacenet/input_layer1[0] | outputs: LastAsyncEl0AsyncHwEl**
**[2025-05-20 15:28:40.124] [1990] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] LastAsyncEl0AsyncHwEl | inputs: AsyncHwEl[0] | outputs: user**
**[2025-05-20 15:28:40.124] [1990] [HailoRT] [info] [hef.cpp:1929] [get_network_group_and_network_name] No name was given. Addressing all networks of default network_group: arcface_mobilefacenet**
**[2025-05-20 15:28:40.124] [1990] [HailoRT] [info] [hef.cpp:1929] [get_network_group_and_network_name] No name was given. Addressing all networks of default network_group: arcface_mobilefacenet**
**[2025-05-20 15:28:52.908] [2051] [HailoRT] [error] [hailort_rpc_client.cpp:1552]** [ConfiguredNetworkGroup_infer_async] CHECK_GRPC_STATUS failed with error code: 4.
[2025-05-20 15:28:52.908] [2051] [HailoRT] [warning] [hailort_rpc_client.cpp:1552] [ConfiguredNetworkGroup_infer_async] Make sure HailoRT service is enabled and active!
[2025-05-20 15:28:52.908] [2051] [HailoRT] [error] [network_group_client.cpp:697] [infer_async] CHECK_SUCCESS failed with status=HAILO_RPC_FAILED(77)
[2025-05-20 15:28:52.908] [2051] [HailoRT] [error] [pipeline_internal.cpp:26] [handle_non_recoverable_async_error] Non-recoverable Async Infer Pipeline error. status error code: HAILO_RPC_FAILED(77)
[2025-05-20 15:28:52.908] [2051] [HailoRT] [error] [async_infer_runner.cpp:88] [shutdown] Shutting down the pipeline with status HAILO_RPC_FAILED(77)
Here is the code for degirum_facerec.py:
import degirum as dg
import os
import numpy as np
import cv2
import time
import logging
from picamera2 import Picamera2

logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s %(threadName)s - %(levelname)s - %(message)s')

# Define a frame generator: a function that yields frames from the Picamera2
def frame_generator():
    picam2 = Picamera2()
    # Configure the camera (optional: set the resolution or other settings)
    picam2.configure(picam2.create_preview_configuration({'format': 'RGB888'}))
    # Start the camera
    picam2.start()
    try:
        while True:
            # Capture a frame as a numpy array
            frame = picam2.capture_array()
            # Yield the frame
            yield frame
    finally:
        picam2.stop()  # Stop the camera when the generator is closed

def align_and_crop(img, landmarks, image_size=112):
    """
    Align and crop the face from the image based on the given landmarks.

    Args:
        img (np.ndarray): The full image (not the cropped bounding box). This image will be transformed.
        landmarks (List[np.ndarray]): List of 5 keypoints (landmarks) as (x, y) coordinates. These keypoints typically include the eyes, nose, and mouth.
        image_size (int, optional): The size to which the image should be resized. Defaults to 112. It is typically either 112 or 128 for face recognition models.

    Returns:
        Tuple[np.ndarray, np.ndarray]: The aligned face image and the transformation matrix.
    """
    # Define the reference keypoints used in ArcFace model, based on a typical facial landmark set.
    _arcface_ref_kps = np.array(
        [
            [38.2946, 51.6963],  # Left eye
            [73.5318, 51.5014],  # Right eye
            [56.0252, 71.7366],  # Nose
            [41.5493, 92.3655],  # Left mouth corner
            [70.7299, 92.2041],  # Right mouth corner
        ],
        dtype=np.float32,
    )

    # Ensure the input landmarks have exactly 5 points (as expected for face alignment)
    assert len(landmarks) == 5

    # Validate that image_size is divisible by either 112 or 128 (common image sizes for face recognition models)
    assert image_size % 112 == 0 or image_size % 128 == 0

    # Adjust the scaling factor (ratio) based on the desired image size (112 or 128)
    if image_size % 112 == 0:
        ratio = float(image_size) / 112.0
        diff_x = 0  # No horizontal shift for 112 scaling
    else:
        ratio = float(image_size) / 128.0
        diff_x = 8.0 * ratio  # Horizontal shift for 128 scaling

    # Apply the scaling and shifting to the reference keypoints
    dst = _arcface_ref_kps * ratio
    dst[:, 0] += diff_x  # Apply the horizontal shift

    # Estimate the similarity transformation matrix to align the landmarks with the reference keypoints
    M, inliers = cv2.estimateAffinePartial2D(np.array(landmarks), dst, ransacReprojThreshold=1000)
    assert np.all(inliers == True)

    # Apply the affine transformation to the input image to align the face
    aligned_img = cv2.warpAffine(img, M, (image_size, image_size), borderValue=0.0)
    return aligned_img, M

# Specify the model names
face_det_model_name = "scrfd_10g--640x640_quant_hailort_hailo8_1"
face_rec_model_name = "arcface_mobilefacenet--112x112_quant_hailort_hailo8_1"

# Specify the inference host address
#inference_host_address = "@cloud"  # Use "@cloud" for cloud inference
inference_host_address = "@local"  # Use "@local" for local inference

# Specify the zoo_url
#zoo_url = "degirum/models_hailort"
zoo_url = ".models"  # For local model files

token = ''  # Leave empty for local inference

# Load the face detection model
face_det_model = dg.load_model(
    model_name=face_det_model_name,
    inference_host_address=inference_host_address,
    zoo_url=zoo_url,
    token=token,
    overlay_color=(0, 255, 0)  # Green color for bounding boxes
)

face_rec_model = dg.load_model(
    model_name=face_rec_model_name,
    inference_host_address=inference_host_address,
    zoo_url=zoo_url,
    token=token,
    overlay_color=(0, 255, 0)  # Green color for bounding boxes
)

for result in face_det_model.predict_batch(frame_generator()):
    aligned_faces = []
    if result.results:
        for face in result.results:
            landmarks = [landmark["landmark"] for landmark in face["landmarks"]]
            aligned_face, _ = align_and_crop(result.image, landmarks)  # Align and crop face
            aligned_faces.append(aligned_face)
        for face, face_embedding in zip(result.results, face_rec_model.predict_batch(aligned_faces)):
            embedding = face_embedding.results[0]["data"][0]  # Extract embedding
    cv2.imshow("AI Inference", result.image_overlay)
    # Process GUI events and break the loop if 'q' key was pressed
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

# Destroy any remaining OpenCV windows after the loop finishes
#cv2.destroyAllWindows()
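The embedding extracted in the loop (face_embedding.results[0]["data"][0]) would normally be matched against stored embeddings of known faces. As a minimal sketch (not part of the original script; the helper name and the toy vectors are illustrative, and real ArcFace embeddings are typically 512-dimensional), cosine similarity is the usual metric for comparing such embeddings:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    a = np.asarray(a, dtype=np.float32).ravel()
    b = np.asarray(b, dtype=np.float32).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors for illustration only
same = cosine_similarity([1.0, 0.0, 1.0], [1.0, 0.0, 1.0])  # identical vectors -> 1.0
diff = cosine_similarity([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])  # orthogonal vectors -> 0.0
print(same, diff)
```

A match threshold (for example, accepting similarities above roughly 0.3 to 0.5) depends on the model and dataset and should be tuned empirically; the values here are assumptions, not DeGirum recommendations.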
Here is the code for degirum_facerec2.py (same as above, except it makes separate calls to the face recognition model with a sleep between them, instead of predict_batch):
num_embeddings = 0
for result in face_det_model.predict_batch(frame_generator()):
    if result.results:
        for face in result.results:
            landmarks = [landmark["landmark"] for landmark in face["landmarks"]]
            aligned_face, _ = align_and_crop(result.image, landmarks)  # Align and crop face
            face_embedding = face_rec_model(aligned_face).results[0]["data"][0]
            time.sleep(0.1)
            logging.info('generated embedding %d', num_embeddings)
            num_embeddings = num_embeddings + 1
    cv2.imshow("AI Inference", result.image_overlay)
Hi @Conor_Roche
We will take a look into this. Can you let us know the output of degirum sys-info?
@Stephan_Sokolov Let us see if we can replicate this behavior.
Here is the output from degirum sys-info:
Devices:
HAILORT/HAILO8:
- '@Index': 0
Board Name: Hailo-8
Device Architecture: HAILO8
Firmware Version: 4.20.0
ID: '0001:01:00.0'
Part Number: ''
Product Name: ''
Serial Number: ''
N2X/CPU:
- '@Index': 0
TFLITE/CPU:
- '@Index': 0
- '@Index': 1
Software Version: 0.16.2
Hi @Conor_Roche
Can you downgrade pysdk to 0.16.1 and try? We noticed that some of our recent changes in 0.16.2 show some stability issues with multi-process service but only on raspberry pi systems. When we find a fix and release new version, you can again go to the latest version.
Hi @shashi , I downgraded as suggested and it looks to be running fine now, thank you again for your prompt assistance.
I got a missing libcamera module error when I ran "from picamera2 import Picamera2" in degirum_env.
It is the same after installing python3-libcamera:
ModuleNotFoundError: No module named 'libcamera'
Hi @Simon_Ho
How did you install libcamera? Using pip or apt?
sudo apt install -y python3-libcamera
@Simon_Ho
I do not know much about how libcamera behaves when it is installed system-wide but needs to be accessed from a virtual environment (I am assuming degirum_env refers to a virtual environment). Since libcamera cannot be installed via pip, you need to find a way to use the system package. I asked ChatGPT and it suggested creating a virtual environment that uses the system packages, as below:
python3 -m venv degirum_env --system-site-packages
I have not tested this. I will check internally if my team can help and keep you posted.
Thanks very much, Shashi!
By the way, it was fine here when I ran hailo-ai/hailo-rpi5-examples, e.g. python basic_pipelines/detection.py --input rpi.
Hi @Simon_Ho,
When you install the Python package using sudo apt install -y python3-libcamera, it is installed into the system-wide Python packages directory. If you run your scripts in a virtual environment, the system packages are not available there unless you create the environment the way @shashi suggests: python3 -m venv degirum_env --system-site-packages.
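As a quick way to check, from inside the activated virtual environment, whether libcamera is visible to that interpreter (a generic Python check, not DeGirum-specific; the printed messages are illustrative):

```python
import importlib.util

# find_spec returns a module spec if the package is importable in this
# interpreter, or None if it is not on the module search path (e.g. a venv
# created without --system-site-packages cannot see apt-installed packages).
spec = importlib.util.find_spec("libcamera")
if spec is None:
    print("libcamera NOT importable; recreate the venv with --system-site-packages")
else:
    print("libcamera found at", spec.origin)
```

If this prints the "NOT importable" message inside the venv but libcamera imports fine in the system Python, recreating the venv with --system-site-packages should resolve it.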
As for your comment that you can run hailo-ai/hailo-rpi5-examples: if you look at the detection.py code, you will find that it uses the Hailo GStreamerDetectionApp class from the hailo_apps_infra package, which handles the camera via the GStreamer infrastructure rather than the picamera2 package. Since the hailo_apps_infra package is installed into your venv, it imports OK.