Hi,
I’m looking for C/C++ examples of object detection with JPEG image input. The examples in “hailo-platform” fail with a segmentation fault on the Hailo8L.
I’m trying to do the exact same thing, and not winning so far.
What model are you trying to use? I’m trying YOLOv8s.
When I get the output, the shape is (80, 100, 0), which is weird. The byte size of the output is 160320 bytes, which is also a strange number. The output format is FLOAT32, and the order is “HAILO NMS”, but I don’t know what this means. The demo code assumes that the output is UINT8, but it’s obviously FLOAT32. I’ll post more here, and a standalone working example, if I get it right.
I confirm yolov8s for Hailo8L, but I have not done any further tests.
Hi,
We’ve made some optimizations for the Raspberry Pi 5 that haven’t been merged into main yet. You can check the rpicam-apps post-processing stages, which should be preinstalled with the Pi image:
rpicam-apps/post_processing_stages/hailo/hailo_yolo_inference.cpp at 06cc09aeab912867e6d0e3f59a548aa137458a68 · raspberrypi/rpicam-apps (github.com)
Generally speaking, the recommendation is to use the Hailo post-processing node as part of the compiled HEF, and this is the reason for the “weird” shape that you see.
Here’s some sample code to process the output when the NMS is part of the HEF:
#include <iostream>
#include <string>
#include <vector>
#include "hailo/hailort.h"   // float32_t
// common::coco_eighty comes from the Hailo example's common labels header

/*
 * The data is sorted by class index.
 * For each class, first comes the number of boxes in that class, then the boxes
 * one after the other; each box contains y_min, x_min, y_max, x_max and score.
 */
void print_boxes_coord_per_class(const std::vector<float32_t> &data, float32_t thr = 0.35f)
{
    size_t index = 0;
    for (size_t class_idx = 0; class_idx < 80; class_idx++) {
        auto num_of_class_boxes = static_cast<size_t>(data.at(index++));
        for (size_t box_idx = 0; box_idx < num_of_class_boxes; box_idx++) {
            auto y_min      = data.at(index++);
            auto x_min      = data.at(index++);
            auto y_max      = data.at(index++);
            auto x_max      = data.at(index++);
            auto confidence = data.at(index++);
            if (confidence >= thr) {
                // the label map is 1-based, hence the +1
                std::string label = common::coco_eighty[class_idx + 1];
                std::cout << "-I- Class [" << label << "] box [" << box_idx << "] conf: " << confidence << ": ";
                std::cout << "(" << x_min << ", " << y_min << ", " << x_max << ", " << y_max << ")" << std::endl;
            }
        }
    }
}
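For context, here is a minimal usage sketch showing how the function above would typically be fed (read_and_print is a hypothetical helper name, and it assumes an OutputVStream configured for FLOAT32 output as in the HailoRT vstream examples):

#include "hailo/hailort.hpp"
#include <vector>

// Hypothetical helper: read one FLOAT32 NMS frame from an output vstream and print it.
void read_and_print(hailort::OutputVStream &output_vstream)
{
    // 80 classes, and for each class one count plus up to 100 boxes of 5 floats:
    // 80 * (1 + 100 * 5) = 40080 floats = 160320 bytes, the "strange" size above.
    std::vector<float32_t> output(80 * (1 + 100 * 5));
    auto status = output_vstream.read(hailort::MemoryView(output.data(),
                                                          output.size() * sizeof(float32_t)));
    if (HAILO_SUCCESS == status) {
        print_boxes_coord_per_class(output);
    }
}

That per-class layout is also where the numbers you reported come from: the (80, 100, 0) shape reflects 80 classes with up to 100 boxes each, and the buffer size works out to exactly 160320 bytes.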
The Hailo-Application-Code-Examples repo provided a good starting point for me. At least I managed to get inference on a single image running using Python.
As a working example for the RPi5 with a JPEG image as input, go to
and compile as per the instructions, but download the HEF model for Hailo8L from
I downloaded the compiled version of yolov5m_wo_spp:
yolov5m_wo_spp_60p.hef
Run as per the instructions; as input you can specify a JPEG file. Example:
./build/x86_64/vstream_yolov5_yolov7_example_cpp -hef=yolov5m_wo_spp_60p.hef -input=zidane.jpg -arch=yolov5
Thanks so much!
This makes 100% sense of the outputs I’m seeing.
I’m going to put together a standalone C++ demo for the RPi5 + Hailo8L, because I think the SDK/example code could really do with simpler examples.
I’ve created a minimal standalone demo that reads just a single image file, runs inference on it, and prints the results to the console:
(sorry, I’m not allowed to include an actual HTTP link in my post)
I find it much easier to start from a minimal example like this.
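In case it helps anyone reproducing this, the preprocessing boils down to decoding the JPEG and resizing it to the network’s input resolution. Here is a minimal sketch using OpenCV (load_and_preprocess is an illustrative name), assuming a 640x640 RGB UINT8 input, which is what the Model Zoo yolov8s HEF expects; adjust the size to whatever your HEF’s input vstream reports:

#include <opencv2/opencv.hpp>
#include <cstdint>
#include <stdexcept>
#include <string>
#include <vector>

// Decode a JPEG and return a contiguous RGB buffer matching the network input size.
std::vector<uint8_t> load_and_preprocess(const std::string &path, int width = 640, int height = 640)
{
    cv::Mat bgr = cv::imread(path, cv::IMREAD_COLOR);   // OpenCV decodes to BGR
    if (bgr.empty()) {
        throw std::runtime_error("failed to read " + path);
    }
    cv::Mat resized;
    cv::resize(bgr, resized, cv::Size(width, height));  // stretch to the input resolution
    cv::Mat rgb;
    cv::cvtColor(resized, rgb, cv::COLOR_BGR2RGB);      // the YOLO HEFs take RGB
    // copy into a flat buffer that can be written to the input vstream
    return std::vector<uint8_t>(rgb.data, rgb.data + rgb.total() * rgb.elemSize());
}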
Hi, congratulations on how minimal the demo is; with a resize of the image it works perfectly.
Hey @rogojin,
Nice work! Thanks for contributing to the community.
Can’t wait to see what awesome projects you come up with next.
Regards
Hey,
I’ve successfully tested the ‘general_detection_inference’ example with yolov8s.hef, available at this GitHub link (Hailo-Application-Code-Examples/runtime/cpp/detection/general_detection_inference at main · hailo-ai/Hailo-Application-Code-Examples · GitHub).
However, I’m encountering issues with other .hef files downloaded from the Hailo Model Zoo, such as Yolov8m.hef or Yolov8L.hef.
The error message is as follows:
./vstream_detection_example_cpp -hef=yolov8n.hef -input=test-image-640x640.jpg
[HailoRT] [error] CHECK failed - HEF file length does not match
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_INVALID_HEF(26)
[HailoRT] [error] Failed parsing HEF file
[HailoRT] [error] Failed creating HEF
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_INVALID_HEF(26)
Failed to configure network group yolov8n.hef
Could someone please help me understand what might be going wrong?
Thank you!
Hey @brad.pai,
I noticed that you’re having an issue with your model. Can you please let me know which versions of the DFC (Data Flow Compiler) and HailoRT you are using?
The problem you’re experiencing is likely a compatibility issue: the model was compiled with an older version of the DFC than the newer HailoRT 4.18 runtime expects. To resolve this, make sure the DFC version used to compile the model and the HailoRT version used to deploy it are compatible.
For example, if your model was compiled using DFC version 3.27, you should be using HailoRT version 4.17 or earlier. Conversely, if your model was compiled using DFC version 3.28 or later, you should be using HailoRT version 4.18 or later.
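If you want to isolate the problem, you can check whether your installed HailoRT can parse a given HEF at all with a few lines of C++. This is just a minimal sketch, not part of the examples; it only exercises Hef::create, which is the call that fails in your log:

#include "hailo/hailort.hpp"
#include <iostream>
#include <string>

int main(int argc, char **argv)
{
    const std::string hef_path = (argc > 1) ? argv[1] : "yolov8n.hef";
    // Hef::create() fails with HAILO_INVALID_HEF when the HEF was built with an
    // incompatible DFC version, which matches the error shown above.
    auto hef = hailort::Hef::create(hef_path);
    if (!hef.has_value()) {
        std::cerr << "Failed to parse " << hef_path << " (status " << hef.status() << ")" << std::endl;
        return 1;
    }
    std::cout << hef_path << " parsed successfully" << std::endl;
    return 0;
}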
Best regards,
How do I check the HailoRT version and update it?
I’m working on a Raspberry Pi 5 with the AI Kit.
Thanks
I have checked the packages on my Raspberry Pi 5 system.
I have only installed the ‘hailo-all’ package, version 3.28.2+1.
Should I also install the HailoRT software on my Raspberry Pi?
The easiest way to check the version:
hailortcli --version
My HailoRT-CLI version is 4.17.0; how do I update to 4.18?
Hey @brad.pai,
I’d recommend checking out this community topic for more information on how to resolve this:
The topic provides guidance on two potential solutions:
- Upgrade to HailoRT version 4.18, which may resolve the parsing issue.
- Try using the previous HEF files, which may be compatible with the version you’re currently using.
Please let me know if you have any other questions! I’m happy to provide further assistance.
It is not enough to just do that:
- You have to build TAPPAS from source. This includes some tweaks to the installer script, as it expects /etc/lsb-release, which doesn’t exist on the RPi5.
- In arcface.cpp, the #define has to be changed so that it matches the 4.18.0 arcface output tensor name from the Model Zoo download.
- After that, uninstall hailo-all so that the .so files from the TAPPAS build are used.
After that, it works.