Hi everyone
I have a desktop with Ubuntu 22.04.5 LTS, and I'm now having a problem running my inference code with my YOLO model. On my Raspberry Pi I've used the following code:
import os

# Limit the number of threads for parallel operations
# (set before importing the libraries that read these variables)
os.environ["OMP_NUM_THREADS"] = "2"
os.environ["OPENBLAS_NUM_THREADS"] = "2"
os.environ["MKL_NUM_THREADS"] = "2"
os.environ["VECLIB_MAXIMUM_THREADS"] = "2"
os.environ["NUMEXPR_NUM_THREADS"] = "2"

import degirum as dg
import degirum_tools
import degirum_tools.streams as dgstreams

inference_host_address = "@local"

# choose zoo_url
# desktop
zoo_url = "/home/gabriel/Desktop/rasp/hailo-rpi5-examples/resources/best_22-01_01_i640.json"
# raspberry
# zoo_url = "/home/pi/Desktop/hailo-rpi5-examples/resources/best_22-01_01_i640.json"

# set token
# token = degirum_tools.get_token()
token = ""  # leave empty for local inference

# webcams
source1 = 0  # webcam index
source2 = 2  # webcam index

# videos, desktop
source3 = "/home/gabriel/Desktop/rasp/hailo-rpi5-examples/resources/11_29_2024_11_50_00_cut1.avi"  # video file
source4 = "/home/gabriel/Desktop/rasp/hailo-rpi5-examples/resources/11_29_2024_11_50_00_cut2.avi"  # video file
source5 = "/home/gabriel/Desktop/rasp/hailo-rpi5-examples/resources/11_29_2024_11_50_00.avi"  # video file

# videos, raspberry
# source3 = "/home/pi/Desktop/hailo-rpi5-examples/resources/11_29_2024_11_50_00_cut1.avi"
# source4 = "/home/pi/Desktop/hailo-rpi5-examples/resources/11_29_2024_11_50_00_cut2.avi"
# source5 = "/home/pi/Desktop/hailo-rpi5-examples/resources/11_29_2024_11_50_00.avi"

# define the configurations for video file and webcam
configurations = [
    {
        "model_name": "best_22-01_01_i640",
        "source": source5,
        "display_name": "Video/Cam-1",
    },
    # {
    #     "model_name": "best_22-01_01_i640",
    #     "source": source2,
    #     "display_name": "Video/Cam-2",
    # },
]

# load models
models = [
    dg.load_model(cfg["model_name"], inference_host_address, zoo_url, token)
    for cfg in configurations
]

# define gizmos
sources = [dgstreams.VideoSourceGizmo(cfg["source"]) for cfg in configurations]
detectors = [dgstreams.AiSimpleGizmo(model) for model in models]
display = dgstreams.VideoDisplayGizmo(
    [cfg["display_name"] for cfg in configurations], show_ai_overlay=True, show_fps=True
)

# create pipeline
pipeline = (
    (source >> detector for source, detector in zip(sources, detectors)),
    (detector >> display[di] for di, detector in enumerate(detectors)),
)

# start composition
dgstreams.Composition(*pipeline).start()
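A note on the thread caps at the top of the script: OMP_NUM_THREADS and the other variables are read once, at import time, by OpenMP/BLAS-backed libraries, so they only take effect if they are exported before those libraries are imported. A minimal sketch of the ordering:

```python
import os

# Set the caps first: OpenMP/BLAS backends read these variables when the
# backing library is first loaded, not on every call.
os.environ["OMP_NUM_THREADS"] = "2"
os.environ["OPENBLAS_NUM_THREADS"] = "2"

# Only now import the numeric/vision libraries that honor the caps
# (numpy, cv2, degirum, ...).
print(os.environ["OMP_NUM_THREADS"])  # → 2
```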
With the Raspberry Pi setup this works very well; I used the hailo-rpi5-examples env and added the DeGirum lib. But when I try it on my desktop (with the respective paths), it does not work. I created a new env on my desktop with the hailort lib, and it has these requirements:
(venv) gabriel@gabriel-Precision-3660:~/Desktop/rasp/hailo-rpi5-examples$ pip freeze
absl-py==2.1.0
annotated-types==0.4.0
anyio==4.8.0
apprise==1.9.2
argcomplete==3.5.3
argon2-cffi==23.1.0
argon2-cffi-bindings==21.2.0
arrow==1.3.0
asttokens==3.0.0
astunparse==1.6.3
async-lru==2.0.4
attrs==25.1.0
babel==2.17.0
beautifulsoup4==4.13.3
bidict==0.23.1
bleach==6.2.0
cachetools==5.5.2
certifi==2025.1.31
cffi==1.17.1
charset-normalizer==3.4.1
click==8.1.8
cloudpickle==3.1.1
coloredlogs==15.0.1
comm==0.2.2
contextlib2==21.6.0
contourpy==1.3.1
cycler==0.12.1
debugpy==1.8.12
decorator==5.1.1
defusedxml==0.7.1
degirum==0.15.0
degirum_tools==0.16.4
disjoint_set==0.8.0
dm-tree==0.1.9
exceptiongroup==1.2.2
executing==2.2.0
fastjsonschema==2.21.1
ffmpegcv==0.3.16
flatbuffers==23.5.26
fonttools==4.56.0
fqdn==1.5.1
future==1.0.0
gast==0.4.0
google-auth==2.38.0
google-auth-oauthlib==1.0.0
google-pasta==0.2.0
grpcio==1.70.0
h11==0.14.0
h5py==3.13.0
hailo-dataflow-compiler @ file:///home/gabriel/Downloads/hailo_dataflow_compiler-3.30.0-py3-none-linux_x86_64.whl
hailort @ file:///home/gabriel/Downloads/hailort-4.20.0-cp310-cp310-linux_x86_64.whl
httpcore==1.0.7
httpx==0.28.1
humanfriendly==10.0
idna==3.10
importlib_metadata==8.6.1
ipykernel==6.29.5
ipython==8.32.0
ipywidgets==8.1.5
isoduration==20.11.0
jax==0.4.13
jaxlib==0.4.13
jedi==0.19.2
Jinja2==3.1.5
json5==0.10.0
jsonpointer==3.0.0
jsonref==1.1.0
jsonschema==4.23.0
jsonschema-specifications==2024.10.1
jupyter==1.1.1
jupyter-console==6.6.3
jupyter-events==0.12.0
jupyter-lsp==2.2.5
jupyter_client==8.6.3
jupyter_core==5.7.2
jupyter_server==2.15.0
jupyter_server_terminals==0.5.3
jupyterlab==4.3.5
jupyterlab_pygments==0.3.0
jupyterlab_server==2.27.3
jupyterlab_widgets==3.0.13
keras==2.12.0
kiwisolver==1.4.8
libclang==18.1.1
loguru==0.6.0
Mako==1.2.4
Markdown==3.7
markdown-it-py==3.0.0
MarkupSafe==3.0.2
matplotlib==3.10.0
matplotlib-inline==0.1.7
mdurl==0.1.2
mistune==3.1.2
ml_dtypes==0.5.1
mpmath==1.3.0
msgpack==1.1.0
msgpack-numpy==0.4.7.1
nbclient==0.10.2
nbconvert==7.16.6
nbformat==5.10.4
nest-asyncio==1.6.0
netaddr==1.3.0
netifaces==0.11.0
networkx==2.8.8
notebook==7.3.2
notebook_shim==0.2.4
numpy==1.23.3
oauthlib==3.2.2
onnx==1.16.0
onnx-tf==1.10.0
onnxruntime==1.18.0
onnxsim==0.4.36
opencv-python==4.10.0.84
opt_einsum==3.4.0
overrides==7.7.0
packaging==24.2
pafy==0.5.5
pandas==2.2.3
pandocfilters==1.5.1
parso==0.8.4
pbr==6.1.1
pexpect==4.9.0
Pillow==9.4.0
platformdirs==4.3.6
prettytable==3.5.0
prometheus_client==0.21.1
prompt_toolkit==3.0.50
protobuf==3.20.3
psutil==7.0.0
ptyprocess==0.7.0
pure_eval==0.2.3
pwlf==2.5.0
py==1.11.0
py-cpuinfo==9.0.0
pyasn1==0.6.1
pyasn1_modules==0.4.1
pycocotools==2.0.8
pycparser==2.22
pydantic==2.0.2
pydantic_core==2.1.2
Pygments==2.19.1
pygraphviz==1.14
pyparsing==2.4.7
pyseccomp==0.1.2
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
python-engineio==4.11.2
python-json-logger==3.2.1
python-socketio==5.12.1
pytz==2025.1
PyYAML==6.0.2
pyzmq==26.2.1
referencing==0.36.2
requests==2.32.3
requests-oauthlib==2.0.0
rfc3339-validator==0.1.4
rfc3986-validator==0.1.1
rich==13.9.4
rpds-py==0.23.1
rsa==4.9
safetensors==0.5.2
scipy==1.13.1
Send2Trash==1.8.3
setproctitle==1.3.4
simple-websocket==1.1.0
six==1.17.0
sniffio==1.3.1
soupsieve==2.6
stack-data==0.6.3
sympy==1.13.3
tabulate==0.9.0
tensorboard==2.12.3
tensorboard-data-server==0.7.2
tensorflow==2.12.0
tensorflow-addons==0.23.0
tensorflow-estimator==2.12.0
tensorflow-io-gcs-filesystem==0.37.1
tensorflow-probability==0.20.1
termcolor==2.5.0
terminado==0.18.1
testresources==2.0.1
tflite==2.10.0
tinycss2==1.4.0
tomli==2.2.1
tornado==6.4.2
tqdm==4.67.1
traitlets==5.14.3
typeguard==2.13.3
types-python-dateutil==2.9.0.20241206
types-PyYAML==6.0.12.20241230
types-requests==2.32.0.20241016
typing_extensions==4.12.2
tzdata==2025.1
uri-template==1.3.0
urllib3==2.3.0
verboselogs==1.7
wcwidth==0.2.13
webcolors==24.11.1
webencodings==0.5.1
websocket-client==1.8.0
Werkzeug==3.1.3
widgetsnbextension==4.0.13
wrapt==1.14.1
wsproto==1.2.0
youtube-dl==2020.12.2
zipp==3.21.0
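Since a version mismatch between the degirum wheel and the locally installed hailort runtime is a plausible cause of native crashes, it may help to print the versions the interpreter actually resolves. This is just a diagnostic sketch; the package names are taken from the pip freeze output above:

```python
from importlib.metadata import version, PackageNotFoundError

# Print the versions Python actually resolves for the runtime-critical
# packages; names match the pip freeze output above.
for pkg in ("degirum", "degirum_tools", "hailort", "numpy", "opencv-python"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "NOT INSTALLED")
```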
My computer recognizes the Hailo device via the DeGirum command:
(venv) gabriel@gabriel-Precision-3660:~/Desktop/rasp/hailo-rpi5-examples$ degirum sys-info
Devices:
HAILORT/HAILO8L:
- '@Index': 0
Board Name: Hailo-8
Device Architecture: HAILO8L
Firmware Version: 4.20.0
ID: '0000:02:00.0'
Part Number: HM21LB1C2LAE
Product Name: HAILO-8L AI ACC M.2 B+M KEY MODULE EXT TMP
Serial Number: "HLDDLBB242501477\x10HM21LB1C2LAE"
But when I try to run my code, I receive this:
(venv) gabriel@gabriel-Precision-3660:~/Desktop/rasp/hailo-rpi5-examples$ python3 basic_pipelines/two_cameras_inference.py
free(): invalid size
Aborted (core dumped)
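To get more context than the bare glibc abort, I can enable Python's faulthandler at the top of the script, so the crash also dumps the Python stack that was active when the native code aborted (a small sketch, not specific to DeGirum):

```python
import faulthandler
import sys

# Dump the Python traceback of every thread if a fatal signal
# (SIGSEGV, SIGABRT, SIGBUS, ...) occurs later in the process;
# glibc's "free(): invalid size" abort raises SIGABRT.
faulthandler.enable(file=sys.stderr, all_threads=True)
print(faulthandler.is_enabled())  # → True

# ...the rest of the inference script would follow here...
```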
I tried searching for help online and via chatbots (ChatGPT and DeepSeek), but nothing suggested has worked. Can someone help me?
Thank you!
Best regards.