(venv_hailo_rpi5_examples) itaimizlish@raspberrypi:~/hailo-rpi5-examples $ python basic_pipelines/instance_segmentation.py --input rpi --hef /home/itaimizlish/Downloads/yolo_seg_kcg--640x640_quant_hailort_multidevice_1/yolo_seg_kcg--640x640_quant_hailort_multidevice_1.hef
Auto-detected Hailo architecture: hailo8l
Traceback (most recent call last):
File "/home/itaimizlish/hailo-rpi5-examples/basic_pipelines/instance_segmentation.py", line 135, in <module>
app = GStreamerInstanceSegmentationApp(app_callback, user_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/itaimizlish/hailo-rpi5-examples/venv_hailo_rpi5_examples/lib/python3.11/site-packages/hailo_apps_infra/instance_segmentation_pipeline.py", line 74, in __init__
raise ValueError("HEF version not supported, you will need to provide a config file")
ValueError: HEF version not supported, you will need to provide a config file
I believe it has something to do with the YOLO version. I ran the required checks with hailortcli and the HEF seems to be fine.
Hi @Itai_Mizlish
The yolov8n_seg model needs a postprocessor to convert raw tensors to bboxes and segmentation masks. If you use DeGirum PySDK, we already integrated the postprocessor. You can see hailo_examples/examples/002_yolov8.ipynb at main · DeGirum/hailo_examples for an example.
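A minimal sketch of that flow (the model name, zoo path, and token below are placeholders, not values from the thread; see the notebook for the exact names):

```python
import degirum as dg

# Placeholder values -- substitute your own model name, zoo, and token.
model = dg.load_model(
    model_name="yolov8n_seg",          # hypothetical name; use the exact name from your zoo
    inference_host_address="@cloud",   # or "@local" / "host:port"
    zoo_url="degirum/hailo",           # assumed public zoo; point at your own zoo if needed
    token="<your_token>",
)

# The integrated postprocessor converts the raw tensors, so the result
# already contains bboxes and segmentation masks.
result = model("path/to/image.jpg")
print(result)                   # detections with labels, scores, and masks
overlay = result.image_overlay  # annotated image you can display or save
```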
I'm still pretty confused about how to integrate it into my hailo-rpi5-examples project on my device…
@Itai_Mizlish
What is your project? Run yolov8n_seg on a video? Or something else?
Running it on the RPi camera in real time using hailo-rpi5-examples, under basic_pipelines/instance_segmentation.py.
The current command is:
python basic_pipelines/instance_segmentation.py --input rpi --hef /home/itaimizlish/Downloads/yolo_seg_kcg--640x640_quant_hailort_multidevice_1/yolo_seg_kcg--640x640_quant_hailort_multidevice_1.hef
If you want to use the pipelines from Hailo, you need to add the postprocessors yourself. If you use PySDK (which is separate from hailo-rpi5-examples), the postprocessor is already included. You can see hailo_examples/examples/016_custom_video_source.ipynb at main · DeGirum/hailo_examples for reference code on how to run on the RPi camera.
import cv2
import degirum as dg
import numpy as np
from picamera2 import Picamera2

your_model_name = "yolo_seg_kcg--640x640_quant_hailort_multidevice_1"
your_host_address = "@cloud"  # Can be dg.CLOUD, host:port, or dg.LOCAL
your_model_zoo = "itaimizlish@gmail.com/KCG"
your_token = "$$$$$$$$$$$$$$$$$$$$$$$$$$$"

# Load the model
model = dg.load_model(
    model_name=your_model_name,
    inference_host_address=your_host_address,
    zoo_url=your_model_zoo,
    token=your_token
    # optional parameters, such as overlay_show_probabilities=True
)

def picamera2_frame_generator():
    picam2 = Picamera2()
    # Create a configuration dictionary manually
    config = picam2.create_preview_configuration(main={"format": "BGR888", "size": (640, 480)})
    picam2.configure(config)
    picam2.start()
    try:
        while True:
            frame = picam2.capture_array()
            yield frame
    finally:
        picam2.stop()

# Run inference and display
for result in model.predict_batch(picamera2_frame_generator()):
    cv2.imshow("AI Inference PiCamera2", result.image_overlay)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cv2.destroyAllWindows()
OKAY, Progress!
So this script kinda works… how could I get the model not to be dependent on the cloud?
import cv2
import degirum as dg
import numpy as np
from picamera2 import Picamera2

your_model_name = "yolo_seg_kcg--640x640_quant_hailort_multidevice_1"
your_host_address = "@local"  # Can be dg.CLOUD, host:port, or dg.LOCAL
your_model_zoo = "itaimizlish@gmail.com/KCG"
your_token = "$$$$$$$$$$$$$$$$$$$$$"

# Load the model
model = dg.load_model(
    model_name=your_model_name,
    inference_host_address=your_host_address,
    zoo_url=your_model_zoo,
    token=your_token,
    device_type=["HAILORT/HAILO8L"]
    # optional parameters, such as overlay_show_probabilities=True
)

def picamera2_frame_generator():
    picam2 = Picamera2()
    # Create a configuration dictionary manually
    config = picam2.create_preview_configuration(main={"format": "BGR888", "size": (640, 480)})
    picam2.configure(config)
    picam2.start()
    try:
        while True:
            frame = picam2.capture_array()
            yield frame
    finally:
        picam2.stop()

# Run inference and display
for result in model.predict_batch(picamera2_frame_generator()):
    cv2.imshow("AI Inference PiCamera2", result.image_overlay)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cv2.destroyAllWindows()
I modified it a bit and it works slightly faster, but I want it to work offline if possible… Thanks!
@Itai_Mizlish
You can download the model folder from our AI Hub and switch the zoo from cloud to local. See hailo_examples/examples/001_quick_start.ipynb at main · DeGirum/hailo_examples for example usage, and the User Guide 1 Hailo World: Running Your First Inference on a Hailo Device Using DeGirum PySDK for an explanation.
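Assuming the downloaded zoo keeps the usual layout of one folder per model (verify against the guide above), it looks roughly like the sketch below, and zoo_url simply points at the root directory:

```python
import os

# Assumed layout of a locally downloaded model zoo (paths here match the thread, not a requirement):
#
#   /home/itaimizlish/KCG/
#   └── yolo_seg_kcg--640x640_quant_hailort_multidevice_1/
#       ├── yolo_seg_kcg--640x640_quant_hailort_multidevice_1.json   # model parameters
#       ├── yolo_seg_kcg--640x640_quant_hailort_multidevice_1.hef    # compiled Hailo model
#       └── labels file, if the model provides one
#
# zoo_url points at the zoo root and model_name selects the subfolder:
zoo_root = "/home/itaimizlish/KCG"
model_dir = "yolo_seg_kcg--640x640_quant_hailort_multidevice_1"
print(os.listdir(os.path.join(zoo_root, model_dir)))  # quick sanity check of the folder contents
```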
import cv2
import degirum as dg
import numpy as np
from picamera2 import Picamera2

your_model_name = "yolo_seg_kcg--640x640_quant_hailort_multidevice_1"
your_host_address = "@local"  # Can be dg.CLOUD, host:port, or dg.LOCAL
your_model_zoo = "/home/itaimizlish/KCG"

# Load the model
model = dg.load_model(
    model_name=your_model_name,
    inference_host_address=your_host_address,
    zoo_url=your_model_zoo
)

def picamera2_frame_generator():
    picam2 = Picamera2()
    # Create a configuration dictionary manually
    config = picam2.create_preview_configuration(main={"format": "BGR888", "size": (640, 480)})
    picam2.configure(config)
    picam2.start()
    try:
        while True:
            frame = picam2.capture_array()
            yield frame
    finally:
        picam2.stop()

# Run inference and display
for result in model.predict_batch(picamera2_frame_generator()):
    cv2.imshow("AI Inference PiCamera2", result.image_overlay)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cv2.destroyAllWindows()
I ran this, which seems to fit what you sent, and after starting the camera I received:
[HailoRT] [critical] Executing pipeline terminate failed with status HAILO_RPC_FAILED(77)
Ok, this works!
Nvm… apparently you need to add the token (as an empty string).
I'm not sure that was the source of the issue, but it seems to have fixed it for me.
(Also, for future reference: rebooting could help if you end up in the same place.)
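For reference, a sketch of the adjusted call (same local zoo path as in the script above; passing the empty token is what seemed to clear the HAILO_RPC_FAILED(77) error here, though I can't say for certain it was the cause):

```python
import degirum as dg

model = dg.load_model(
    model_name="yolo_seg_kcg--640x640_quant_hailort_multidevice_1",
    inference_host_address="@local",
    zoo_url="/home/itaimizlish/KCG",  # local model zoo folder downloaded from the AI Hub
    token="",                         # empty token; omitting it entirely appeared to trigger the failure above
)
```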
BIG THANKS, @shashi