Hello Hailo community,
Our team has developed a vision system using OpenCV and YOLOv5 for detection and tracking. We’re now exploring how to migrate this system to the Hailo NPU on the Raspberry Pi 5, and we’re looking for guidance on the best approach.
Current system:
- Using YOLOv5 models for object detection (particularly interested in barcode detection)
- OpenCV for image processing and visualization
- Custom tracking algorithms for persistent object identification
- Python-based application running on standard CPU
What we’ve tried:
- We’ve installed the Hailo SDK on our Raspberry Pi 5
- Ran some basic examples from the hailo-rpi5-examples repository
- Looked at the GStreamer-based pipeline architecture
Specific challenges:
- Understanding how to replace our YOLOv5 inference with Hailo-accelerated inference while maintaining our custom post-processing
- Accessing raw frame data with OpenCV in the Hailo callback system
- Determining the origin and usage of the callback parameters (e.g., `app_callback(pad, info, user_data)`)
- Finding the best way to integrate our existing OpenCV-based tracking and visualization code
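To make the frame-access challenge concrete, here is roughly what we are trying to do inside the callback. The helper name `buffer_to_bgr` is our own; the commented-out lines are our understanding of the standard GStreamer Python API, and we’re not sure this is the intended pattern for Hailo pipelines:

```python
import numpy as np

def buffer_to_bgr(data: bytes, width: int, height: int) -> np.ndarray:
    """Reinterpret raw RGB bytes from a mapped Gst.Buffer as an OpenCV-style
    BGR frame. In the real callback, `data` would come from
    buffer.map(Gst.MapFlags.READ) on the buffer from info.get_buffer()."""
    rgb = np.frombuffer(data, dtype=np.uint8).reshape((height, width, 3))
    return rgb[:, :, ::-1].copy()  # RGB -> BGR so cv2 routines see it correctly

def app_callback(pad, info, user_data):
    # Gst-specific lines commented out so this sketch runs standalone:
    # buffer = info.get_buffer()
    # ok, map_info = buffer.map(Gst.MapFlags.READ)
    # frame = buffer_to_bgr(map_info.data, width, height)
    # ... run our OpenCV tracking/visualization on `frame` ...
    # buffer.unmap(map_info)
    # return Gst.PadProbeReturn.OK
    pass
```

In particular we’re unsure whether mapping the buffer like this is safe (copy vs. zero-copy) and where width/height should come from (the pad caps?).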
Questions:
- Is there documentation or examples specifically for migrating YOLOv5-based systems to Hailo?
- What’s the recommended approach for integrating custom OpenCV processing in the Hailo callback function?
- Are there specific optimizations we should consider when moving from a pure CPU implementation to Hailo-accelerated inference?
- How can we preserve our custom tracking logic while benefiting from Hailo’s acceleration?
- What’s the learning path you’d recommend for someone familiar with OpenCV/YOLOv5 but new to Hailo?
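On the tracking question, our hope is that the tracker can stay backend-agnostic: as long as Hailo’s post-processing can be reduced to the same box arrays YOLOv5 gives us, the update step shouldn’t care which backend produced them. A minimal sketch of what we mean (greatly simplified from our actual code; all names are ours):

```python
def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

class IouTracker:
    """Greedy IoU matcher: detections arrive as lists of [x1, y1, x2, y2]
    boxes, whether a CPU YOLOv5 run or a Hailo callback produced them."""

    def __init__(self, iou_thresh=0.3):
        self.iou_thresh = iou_thresh
        self.next_id = 0
        self.tracks = {}  # track id -> last box

    def update(self, boxes):
        matched = {}
        unused = dict(self.tracks)
        for box in boxes:
            # Match each detection to the best unconsumed existing track.
            best_id, best_iou = None, self.iou_thresh
            for tid, tbox in unused.items():
                score = iou(box, tbox)
                if score > best_iou:
                    best_id, best_iou = tid, score
            if best_id is None:
                best_id = self.next_id  # no match: start a new track
                self.next_id += 1
            else:
                unused.pop(best_id)
            matched[best_id] = box
        self.tracks = matched
        return matched
```

If the Hailo callback can hand us detections in this form, we could in principle keep this layer untouched, but we’d like to confirm that’s the recommended integration point.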
Any guidance, examples, or recommended resources would be greatly appreciated. We’re particularly interested in a step-by-step integration path rather than completely rewriting our application.
Thank you!