I downgraded to kernel 6.12.20 following these steps and it worked
Hi @AbnerDC , our team is investigating the problem. Will keep you posted
@AbnerDC , can you share with us the output of the following command on the system which fails with an error:
sudo dmesg | grep hailo
Sure:
uname -a
Linux raspberry 6.12.20+rpt-rpi-2712 #1 SMP PREEMPT Debian 1:6.12.20-1+rpt1~bpo12+1 (2025-03-19) aarch64 GNU/Linux
sudo dmesg | grep hailo
[ 3.127022] hailo: Init module. driver version 4.20.0
[ 3.127107] hailo 0001:01:00.0: Probing on: 1e60:2864…
[ 3.127111] hailo 0001:01:00.0: Probing: Allocate memory for device extension, 13184
[ 3.127127] hailo 0001:01:00.0: enabling device (0000 → 0002)
[ 3.127131] hailo 0001:01:00.0: Probing: Device enabled
[ 3.127144] hailo 0001:01:00.0: Probing: mapped bar 0 - 000000006654a3ba 16384
[ 3.127148] hailo 0001:01:00.0: Probing: mapped bar 2 - 00000000fac5f077 4096
[ 3.127151] hailo 0001:01:00.0: Probing: mapped bar 4 - 00000000d01388fa 16384
[ 3.127154] hailo 0001:01:00.0: Probing: Force setting max_desc_page_size to 4096 (recommended value is 16384)
[ 3.127161] hailo 0001:01:00.0: Probing: Enabled 64 bit dma
[ 3.127163] hailo 0001:01:00.0: Probing: Using userspace allocated vdma buffers
[ 3.127166] hailo 0001:01:00.0: Disabling ASPM L0s
[ 3.127169] hailo 0001:01:00.0: Successfully disabled ASPM L0s
[ 3.127248] hailo 0001:01:00.0: Writing file hailo/hailo8_fw.bin
[ 3.215433] hailo 0001:01:00.0: File hailo/hailo8_fw.bin written successfully
[ 3.215440] hailo 0001:01:00.0: Writing file hailo/hailo8_board_cfg.bin
[ 3.215465] Failed to write file hailo/hailo8_board_cfg.bin
[ 3.215467] hailo 0001:01:00.0: File hailo/hailo8_board_cfg.bin written successfully
[ 3.215468] hailo 0001:01:00.0: Writing file hailo/hailo8_fw_cfg.bin
[ 3.215476] Failed to write file hailo/hailo8_fw_cfg.bin
[ 3.215478] hailo 0001:01:00.0: File hailo/hailo8_fw_cfg.bin written successfully
[ 3.304654] hailo 0001:01:00.0: NNC Firmware loaded successfully
[ 3.304660] hailo 0001:01:00.0: FW loaded, took 177 ms
[ 3.316804] hailo 0001:01:00.0: Probing: Added board 1e60-2864, /dev/hailo0
Executing on device: 0001:01:00.0
Identifying board
Control Protocol Version: 2
Firmware Version: 4.20.0 (release,app,extended context switch buffer)
Logger Version: 0
Board Name: Hailo-8
Device Architecture: HAILO8L
Serial Number: HLDDLBB244602799
Part Number: HM21LB1C2LAE
Product Name: HAILO-8L AI ACC M.2 B+M KEY MODULE EXT TMP
With the latest R.PI image, I followed the install instructions on GitHub and have some feedback to help get the Jupyter examples working OK.
- Error 1 - 014_person_reid
- I had to execute the following command to install “scikit-learn”:
pip install -U scikit-learn
- Error 2 - General error message
- qt.qpa.plugin: Could not find the Qt platform plugin “wayland”
- Error 3 - Overall problem
- Raspberry Pi hangs after executing examples within the environment. The only way to fix it was to reboot
Hopefully this is enough info?
Hi @AbnerDC
We confirmed that downgrading to kernel 6.12.20 indeed fixes the problem. We tracked this error down to scatter-gather list creation in the kernel driver, which intermittently fails on 6.12.25 for reasons we don't yet understand. So the workaround is to switch to kernel 6.12.20 until Hailo or the RPi team fixes it.
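For anyone unsure whether they are running the affected kernel, here is a small sketch; `check_kernel` is just a hypothetical helper name, and "affected" here means only the 6.12.25 version reported in this thread:

```shell
# Flag the kernel version this thread reports as problematic (6.12.25).
check_kernel() {
  case "$1" in
    6.12.25*) echo "affected" ;;
    *) echo "ok" ;;
  esac
}
check_kernel "$(uname -r)"
```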
Hi @Curlywurly
Thanks for bringing these to our notice.
Error 1: We will fix the requirements.txt so that scikit-learn is included.
Error 2: Is the qt.qpa.plugin message a warning, or an error that crashes the program?
Error 3: Can you check your kernel version? Lately, people have reported that kernel 6.12.25 causes crashes. Please let us know if your issue is unrelated.
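Regarding Error 2, a common workaround for the missing “wayland” Qt platform plugin (an assumption on my part, not something confirmed in this thread) is to force Qt onto the xcb backend before launching the program:

```shell
# Assumption: the app can run under X11/XWayland; selecting xcb
# sidesteps the lookup of the missing "wayland" Qt platform plugin.
export QT_QPA_PLATFORM=xcb
```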
Hello! I want to know if the models you are using (SCRFD and the other one) are available in the .hef format.
Hi @constanza_bo
Welcome to the Hailo community. Yes, the models are available in HEF format. You can find them in hailo-ai/hailo_model_zoo, which includes pre-trained models and a full building and evaluation environment.
Hi
I want face detection and recognition pipeline in c++.
Where can I get it from ?
Hi @diz
Welcome to the Hailo community. Using PySDK, we do not have a ready-to-use C++ pipeline for this use case. We have a C++ SDK, but a lot of code unrelated to the ML models (cropping, database searching, etc.) would need to be implemented in C++. Maybe others here have implemented the whole logic in C++.
@shashi
Thank you for the reply Shashi.
If anyone can help me with the setup of a C++ pipeline for face detection and recognition using TAPPAS, please let me know.
Great guide, thank you. But I've gotten stuck with this error while working through the steps:
degirum.exceptions.DegirumException: Failed to perform model 'arcface_mobilefacenet--112x112_quant_hailort_hailo8l_1' inference: [ERROR]Incorrect value of parameter
Device type HAILORT/HAILO8L is not supported by the system
This is the code I have so far:
import os
import degirum as dg
import degirum_tools
import matplotlib.pyplot
import cv2
import numpy

BASE_DIRECTORY = '/home/pi/hailo/hailo_examples/test'

def SaveOverlays(images, title="Images", figsize=(15, 5), outfilename='overlays.jpg'):
    """
    Display a list of images in a single row using Matplotlib.

    Parameters:
    - images (list): List of images (NumPy arrays) to display.
    - title (str): Title for the plot.
    - figsize (tuple): Size of the figure.
    """
    num_images = len(images)
    fig, axes = matplotlib.pyplot.subplots(1, num_images, figsize=figsize)
    if num_images == 1:
        axes = [axes]  # Make it iterable for a single image
    for ax, image in zip(axes, images):
        image_rgb = image[:, :, ::-1]  # Convert BGR to RGB
        ax.imshow(image_rgb)
        ax.axis('off')
    fig.suptitle(title, fontsize=16)
    matplotlib.pyplot.tight_layout()
    fig.savefig(os.path.join(BASE_DIRECTORY, outfilename))

def align_and_crop(img, landmarks, image_size=112):
    """
    Align and crop the face from the image based on the given landmarks.

    Args:
        img (numpy.ndarray): The full image (not the cropped bounding box). This image will be transformed.
        landmarks (List[numpy.ndarray]): List of 5 keypoints (landmarks) as (x, y) coordinates. These keypoints typically include the eyes, nose, and mouth.
        image_size (int, optional): The size to which the image should be resized. Defaults to 112. It is typically either 112 or 128 for face recognition models.

    Returns:
        Tuple[numpy.ndarray, numpy.ndarray]: The aligned face image and the transformation matrix.
    """
    # Define the reference keypoints used in the ArcFace model, based on a typical facial landmark set.
    _arcface_ref_kps = numpy.array(
        [
            [38.2946, 51.6963],  # Left eye
            [73.5318, 51.5014],  # Right eye
            [56.0252, 71.7366],  # Nose
            [41.5493, 92.3655],  # Left mouth corner
            [70.7299, 92.2041],  # Right mouth corner
        ],
        dtype=numpy.float32,
    )
    # Ensure the input landmarks have exactly 5 points (as expected for face alignment)
    assert len(landmarks) == 5
    # Validate that image_size is divisible by either 112 or 128 (common image sizes for face recognition models)
    assert image_size % 112 == 0 or image_size % 128 == 0
    # Adjust the scaling factor (ratio) based on the desired image size (112 or 128)
    if image_size % 112 == 0:
        ratio = float(image_size) / 112.0
        diff_x = 0  # No horizontal shift for 112 scaling
    else:
        ratio = float(image_size) / 128.0
        diff_x = 8.0 * ratio  # Horizontal shift for 128 scaling
    # Apply the scaling and shifting to the reference keypoints
    dst = _arcface_ref_kps * ratio
    dst[:, 0] += diff_x  # Apply the horizontal shift
    # Estimate the similarity transformation matrix to align the landmarks with the reference keypoints
    M, inliers = cv2.estimateAffinePartial2D(numpy.array(landmarks), dst, ransacReprojThreshold=1000)
    assert numpy.all(inliers == True)
    # Apply the affine transformation to the input image to align the face
    aligned_img = cv2.warpAffine(img, M, (image_size, image_size), borderValue=0.0)
    return aligned_img, M

def main():
    # Specify the model name
    # face_det_model_name = 'scrfd_10g--640x640_quant_hailort_hailo8l_1'
    # face_det_model_name = 'scrfd_2.5g--640x640_quant_hailort_hailo8l_1'
    # face_det_model_name = 'scrfd_500m--640x640_quant_hailort_hailo8l_1'
    face_det_model_name = 'yolov8n_relu6_widerface_kpts--640x640_quant_hailort_hailo8_1'
    # face_det_model_name = 'retinaface_mobilenet--736x1280_quant_hailort_hailo8l_1'
    inference_host_address = '@local'  # Use '@local' for local inference
    zoo_url = '/home/pi/hailo/hailo_examples/models'  # For local model files
    image_source = '/home/pi/hailo/hailo_examples/assets/Friends1.jpg'
    token = ''  # Leave empty for local inference
    # Load the face detection model
    face_det_model = dg.load_model(
        model_name=face_det_model_name,
        inference_host_address=inference_host_address,
        zoo_url=zoo_url,
        token=token,
        overlay_color=(0, 255, 0)  # Green color for bounding boxes
    )
    # Run the inference
    detected_faces = face_det_model(image_source)
    # List to store aligned faces
    aligned_faces = []
    # Process each detection result
    for n, face in enumerate(detected_faces.results):
        # Extract landmarks and align the face
        landmarks = [landmark["landmark"] for landmark in face["landmarks"]]
        aligned_face, _ = align_and_crop(detected_faces.image, landmarks)  # Align and crop face
        cv2.imwrite(os.path.join(BASE_DIRECTORY, str(n) + '.jpg'), aligned_face)
        aligned_faces.append(aligned_face)
    # Display aligned faces
    SaveOverlays(aligned_faces, title="Aligned Faces", figsize=(10, 5))
    print(detected_faces)
    # Face recognition model name
    face_rec_model_name = "arcface_mobilefacenet--112x112_quant_hailort_hailo8l_1"
    # Load the face recognition model
    face_rec_model = dg.load_model(
        model_name=face_rec_model_name,
        inference_host_address=inference_host_address,
        zoo_url=zoo_url,
        token=token
    )
    # Process each detected face
    for face in detected_faces.results:
        # Extract landmarks and align the face
        landmarks = [landmark["landmark"] for landmark in face["landmarks"]]
        aligned_face, _ = align_and_crop(detected_faces.image, landmarks)  # Align and crop face
        face_embedding = face_rec_model(aligned_face).results[0]["data"][0]

if __name__ == "__main__":
    main()
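As a side note for readers following along: the face_embedding vectors extracted at the end of the script are typically compared with cosine similarity to decide whether two faces match. That step is not part of the snippet above; this is just a minimal sketch of the idea:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Identical embeddings score 1.0; orthogonal ones score 0.0.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```

A similarity threshold (often somewhere around 0.3 to 0.6 for ArcFace-style embeddings, tuned per application) then decides "same person" vs "different person".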
The basic rpicam pipelines work (e.g. hailo-rpi5-examples/basic_pipelines/depth.py and hailo-rpi5-examples/basic_pipelines/detection.py), and the card seems to be installed correctly.
(venv_hailo_rpi5_examples) pi@raspberry:~/hailo/hailo_examples $ hailortcli fw-control identify
Executing on device: 0001:01:00.0
Identifying board
Control Protocol Version: 2
Firmware Version: 4.20.0 (release,app,extended context switch buffer)
Logger Version: 0
Board Name: Hailo-8
Device Architecture: HAILO8
Serial Number: <N/A>
Part Number: <N/A>
Product Name: <N/A>
(venv_hailo_rpi5_examples) pi@raspberry:~/hailo/hailo_examples $ pip list
Package Version
------------------------- ---------------
anyio 4.9.0
apprise 1.9.3
argcomplete 2.0.0
argon2-cffi 25.1.0
argon2-cffi-bindings 21.2.0
arrow 1.3.0
asttokens 3.0.0
async-lru 2.0.5
attrs 25.3.0
babel 2.17.0
beautifulsoup4 4.13.4
bidict 0.23.1
bleach 6.2.0
certifi 2022.9.24
cffi 1.17.1
chardet 5.1.0
charset-normalizer 3.0.1
click 8.2.1
colorzero 2.0
comm 0.2.2
contextlib2 0.6.0.post1
contourpy 1.3.2
cycler 0.12.1
debugpy 1.8.14
decorator 5.2.1
defusedxml 0.7.1
degirum 0.17.0
degirum_cli 0.2.0
degirum_tools 0.18.0
distlib 0.3.6
distro 1.8.0
executing 2.2.0
fastjsonschema 2.21.1
ffmpeg-python 0.2.0
ffmpegcv 0.3.18
filelock 3.9.0
fonttools 4.58.4
fqdn 1.5.1
future 0.18.2
gpiozero 2.0.1
h11 0.16.0
hailo-apps-infra 0.2.0
hailort 4.20.0
httpcore 1.0.9
httpx 0.28.1
idna 3.3
ipykernel 6.29.5
ipython 9.3.0
ipython_pygments_lexers 1.1.1
isoduration 20.11.0
jedi 0.19.2
Jinja2 3.1.6
json5 0.12.0
jsonpointer 3.0.0
jsonschema 4.24.0
jsonschema-specifications 2025.4.1
jupyter_client 8.6.3
jupyter_core 5.8.1
jupyter-events 0.12.0
jupyter-lsp 2.2.5
jupyter_server 2.16.0
jupyter_server_terminals 0.5.3
jupyterlab 4.4.3
jupyterlab_pygments 0.3.0
jupyterlab_server 2.27.3
kiwisolver 1.4.8
lgpio 0.2.2.0
Mako 1.2.4.dev0
Markdown 3.4.1
MarkupSafe 2.1.2
matplotlib 3.10.3
matplotlib-inline 0.1.7
meson 1.5.1
mistune 3.1.3
msgpack 1.1.1
msgpack-numpy 0.4.7.1
nbclient 0.10.2
nbconvert 7.16.6
nbformat 5.10.4
nest_asyncio 1.6.0
netaddr 0.8.0
netifaces 0.11.0
notebook 7.4.3
notebook_shim 0.2.4
numpy 1.26.4
oauthlib 3.3.1
opencv-python 4.11.0.86
overrides 7.7.0
packaging 25.0
pafy 0.5.5
pandocfilters 1.5.1
parso 0.8.4
pexpect 4.9.0
pigpio 1.78
pillow 10.4.0
pip 23.0.1
platformdirs 2.6.0
prometheus_client 0.22.1
prompt_toolkit 3.0.51
psutil 7.0.0
ptyprocess 0.7.0
pure_eval 0.2.3
pycocotools 2.0.10
pycparser 2.22
pycryptodomex 3.11.0
Pygments 2.14.0
PyGObject 3.42.2
pyparsing 3.2.3
pyseccomp 0.1.2
python-apt 2.6.0
python-dateutil 2.9.0.post0
python-dotenv 1.1.1
python-engineio 4.12.2
python-json-logger 3.3.0
python-socketio 5.13.0
PyYAML 6.0
pyzmq 27.0.0
referencing 0.36.2
requests 2.32.4
requests-oauthlib 2.0.0
rfc3339-validator 0.1.4
rfc3986-validator 0.1.1
rpds-py 0.25.1
rpi-lgpio 0.6
scipy 1.16.0
Send2Trash 1.8.3
setproctitle 1.3.6
setuptools 66.1.1
simple-websocket 1.1.0
six 1.16.0
smbus2 0.4.2
sniffio 1.3.1
soupsieve 2.7
spidev 3.5
ssh-import-id 5.10
stack-data 0.6.3
terminado 0.18.1
tinycss2 1.4.0
toml 0.10.2
tornado 6.5.1
traitlets 5.14.3
types-python-dateutil 2.9.0.20250516
types-PyYAML 6.0.12.20250516
types-requests 2.32.4.20250611
typing_extensions 4.14.0
uri-template 1.3.0
urllib3 2.5.0
virtualenv 20.17.1+ds
wcwidth 0.2.13
webcolors 24.11.1
webencodings 0.5.1
websocket-client 1.8.0
wheel 0.38.4
wsproto 1.2.0
youtube-dl 2020.12.2
(venv_hailo_rpi5_examples) pi@raspberry:~/hailo/hailo_examples
hi @shashi ,
I have implemented face recognition in C++. I used arcface_mobilefacenet and r50 from the Hailo Model Zoo. Both models output 512 embeddings, but when I print the embeddings, only 128 are non-zero while the rest are zero:
1.89963e+28 6.33485e+36 1.2598e+33 1.23294e+36 3.38448e+32 -4.83178e-35 3.43621e+23 3.22473e+35 1.95305e+31 -4.48485e-35 -2.93948e-30 2.26368e+34 2.02423e+31 nan 3.17049e+38 8.33898e+34 1.99853e+37 -1.33781e-26 -1.14261e-38 -1.42467e-29 2.11708e+34 -3.16148e-36 3.2769e+35 5.09002e+30 -1.71593e-37 -1.23054e-29 2.99723e+32 -8.29845e-28 1.95733e+25 3.64652e+35 5.70901e+36 8.40387e+34 7.96497e+34 -1.11656e-35 1.23062e+27 9.44639e+34 1.34333e+33 -2.93256e-36 5.42115e+36 -1.04192e-38 2.11701e+34 -1.14355e-38 2.29892e+28 -1.36356e-38 -4.7204e-38 1.95939e+34 -1.86238e-37 8.19221e+37 5.29315e+33 -1.94388e-28 -8.57934e-34 4.76721e+27 -3.07554e-33 3.10082e+29 3.66395e+32 4.59433e+27 3.16385e+32 2.11751e+34 3.0647e+26 -1.35302e-32 5.79306e+36 3.28337e+32 -2.85046e-36 1.25039e+30 -1.94625e-37 2.05627e+34 3.46163e+32 -8.39335e-34 -1.87557e-31 3.33368e+32 3.1473e+35 1.28024e+33 -4.54589e-38 1.9958e+19 8.19129e+25 -2.90636e-33 1.26513e+33 -7.45079e-37 -6.82917e-37 8.65085e+31 -1.06053e-38 -1.65635e-37 -4.50707e-38 -7.41922e-37 3.12419e+32 2.21863e+37 8.45481e+37 -7.47902e-34 -4.58349e-38 8.52035e+31 -3.05089e-30 3.33722e+35 -2.07197e-31 -1.88424e-34 2.04892e+34 3.53541e+32 4.72418e+30 2.43457e+37 1.00015e+38 8.12433e+37 -7.66839e-37 2.08474e+34 1.41464e+33 -2.9215e-36 -7.90783e-37 -2.87021e-33 -7.47924e-37 2.04008e+34 -1.28392e-35 -2.96769e-36 7.70623e+34 3.30284e+38 -5.713e-26 3.00389e+23 -1.92206e-31 5.04326e+24 -4.34363e-38 5.91847e+36 3.56205e+32 3.11211e+32 -2.92053e-36 -2.24139e-31 1.23961e+33 3.30281e+35 3.16089e+35 -1.16849e-35 8.80185e+37 5.51444e+27 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
This is my output of embeddings.
hailo_status run_face_recognition_async(AsyncModelInfer &rec_model)
{
    int i = 0;
    while (true)
    {
        AlignedFaceItem item;
        if (aligned_faces_queue->size() == 0)
        {
            std::this_thread::sleep_for(std::chrono::seconds(1));
            break;
        }
        if (!aligned_faces_queue->pop(item))
        {
            cout << "is it breaking >>>>>>. " << endl;
            break;
        }
        if (item.aligned_face.empty())
        {
            std::cerr << "Skipping empty aligned face!" << std::endl;
            continue;
        }
        cv::Mat normalized;
        item.aligned_face.convertTo(item.aligned_face, CV_32F, 1.0 / 128.0, -1.0);
        rec_model.infer(std::make_shared<cv::Mat>(item.aligned_face),
                        [item](const hailort::AsyncInferCompletionInfo &info,
                               const auto &output_data_and_infos)
                        {
                            auto &output = output_data_and_infos[0];
                            float *embedding = reinterpret_cast<float *>(output.first);
                            // Example: print first 5 dims
                            std::cout << "Embedding: ";
                            // int count = 0;
                            for (int i = 0; i < 128; ++i)
                            {
                                // if(embedding[i] != 0)count++;
                                std::cout << embedding[i] << " ";
                            }
                            std::cout << std::endl;
                        });
        i++;
    }
    return HAILO_SUCCESS;
}
This is my inference code.
Could you also let me know why I am getting such large values?
Thanks in advance.
Hi @David_Wertheimer
when you check the model JSON file, do you see this?
"DEVICE": [
    {
        "DeviceType": "HAILO8L",
        "RuntimeAgent": "HAILORT",
        "ThreadPackSize": 6,
        "SupportedDeviceTypes": "HAILORT/HAILO8L, HAILORT/HAILO8"
    }
]
It’s possible that you simply need to change the DeviceType field from HAILO8L to HAILO8.
Thank you - indeed, the problem was just a typo in the model_name that I was using.
face_det_model_name = 'yolov8n_relu6_widerface_kpts--640x640_quant_hailort_hailo8_1' was missing the l; it should have been face_det_model_name = 'yolov8n_relu6_widerface_kpts--640x640_quant_hailort_hailo8l_1'. That resolved my issue.
Hi @diz
As mentioned earlier, we cannot really help with your C++ code, as our working implementation is a Python package (built on top of a C++ runtime). That being said, here is my best guess on why you are getting such values: you are reinterpreting uint8 values as float32 values. You should instead use the zero point and scale values to convert the uint8 outputs to float values using the formula: float = scale * (quant - zero_point). Hope this helps.
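The formula can be sketched like this; the scale and zero_point values below are purely illustrative, not the model's actual quantization parameters (those come from the output's quantization info at runtime):

```python
import numpy as np

def dequantize(quant: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Convert raw uint8 model outputs to float32: float = scale * (quant - zero_point)."""
    return scale * (quant.astype(np.float32) - zero_point)

# Illustrative parameters only.
q = np.array([0, 128, 255], dtype=np.uint8)
emb = dequantize(q, scale=0.02, zero_point=128)
print(emb)  # approximately [-2.56, 0.0, 2.54]
```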
I installed the R.Pi OS from scratch today with the latest “Bookworm”, and it all worked OK - no OS hanging.
N.B. This was for a R.PI 5 (8 GB) AI Kit.
N.B. uname -r gives 6.12.34+rpt-rpi-2712
Thanks for the article. Referring to it, I tried building face recognition from a video stream, but my video seems to have a color-correction problem: it has a blue tint. I'm using an RPi Camera Module 3. When I run 'rpicam-hello -t 1000000', I see no issues; it works fine. The face recognition itself is also doing fine. My idea is to run it like a CCTV. Any help? Thank you in advance.
Here is the code for the reference.
def simple_pi_frame_generator():
    """Camera frame generator"""
    if not PICAMERA_AVAILABLE:
        raise RuntimeError("Picamera2 not available")
    time.sleep(0.5)
    global _global_camera_ref
    picam2 = Picamera2()
    _global_camera_ref = picam2
    try:
        config = picam2.create_preview_configuration({
            'format': 'RGB888',
            'size': (1920, 1080)
        })
        picam2.configure(config)
        controls = {
            "AwbEnable": True,
            "AwbMode": 1,
            "ColourGains": (1.0, 1.0),
            "Brightness": 0.0,
            "Contrast": 1.0,
            "Saturation": 1.0,
        }
        picam2.start()
        picam2.set_controls(controls)
        time.sleep(3.0)
        logging.info("✅ Pi camera started")
        while True:
            frame = picam2.capture_array()
            frame_bgr = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
            yield frame_bgr
    except Exception as e:
        logging.error(f"Error in frame generator: {e}")
        raise
    finally:
        try:
            picam2.stop()
            logging.info("📷 Pi camera stopped")
            _global_camera_ref = None
        except:
            pass
Hi @Freddy_M
Welcome to the Hailo community. Can you share the code snippet where you display the output? Your code does convert RGB to BGR, but it is hard to say whether that is the issue until we know how you are visualizing the frames.
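To illustrate why channel order matters (a toy example, not a diagnosis of this specific setup): reversing the channels of a red pixel turns it blue, which is exactly how one extra or missing RGB/BGR conversion somewhere in a pipeline produces an overall blue tint:

```python
import numpy as np

# A single "red" pixel in RGB order; reversing the last axis is what an
# RGB<->BGR conversion does.
pixel_rgb = np.array([200, 30, 30], dtype=np.uint8)
pixel_reversed = pixel_rgb[::-1]
print(pixel_rgb.tolist(), pixel_reversed.tolist())  # [200, 30, 30] [30, 30, 200]
```

So a good debugging step is to count the conversions between capture and display: each frame should be converted exactly as many times as the consumer expects, no more and no fewer.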