Missing tensors for rpicam examples on Ubuntu 24.04

Hi, I have compiled the driver, HailoRT, and tappas – I will note that I had to modify things that require hailo-tappas-core to require hailo_tappas instead. The sanity check works (although I have to reboot after closing it if I want to run the tappas sanity check again), the hailo driver is loaded, and everything looks good from that end. rpicam-apps detected the Hailo post-processor, but it appears to be missing the tensors needed to run any of the examples.

(venv_hailo_rpi5_examples) matteius@matteius-desktop:~/rpicam-apps/assets$   rpicam-hello -t 0 --post-process-file hailo_yolov6_inference.json --lores-width 640 --lores-height 640
[0:14:35.902245123] [4510]  INFO Camera camera_manager.cpp:313 libcamera v0.3.0+65-6ddd79b5
[0:14:35.925029908] [4514]  INFO RPI pisp.cpp:695 libpisp version v1.0.6 b567f0455680 19-06-2024 (22:56:15)
[0:14:35.946138679] [4514]  INFO RPI pisp.cpp:1154 Registered camera /base/axi/pcie@120000/rp1/i2c@80000/imx477@1a to CFE device /dev/media0 and ISP device /dev/media1 using PiSP variant BCM2712_C0
Made X/EGL preview window
Reading post processing stage "hailo_yolo_inference"
No post processing stage found for "object_detect_draw_cv"
[0:14:36.063445110] [4510]  WARN V4L2 v4l2_pixelformat.cpp:344 Unsupported V4L2 pixel format RPBP
[0:14:36.063636202] [4510]  WARN V4L2 v4l2_pixelformat.cpp:344 Unsupported V4L2 pixel format RPBP
Mode selection for 2028:1520:12:P
    SRGGB10_CSI2P,1332x990/0 - Score: 3456.22
    SRGGB12_CSI2P,2028x1080/0 - Score: 1083.84
    SRGGB12_CSI2P,2028x1520/0 - Score: 0
    SRGGB12_CSI2P,4056x3040/0 - Score: 887
Stream configuration adjusted
[0:14:36.063983182] [4510]  INFO Camera camera.cpp:1183 configuring streams: (0) 2028x1520-YUV420 (1) 640x640-YUV420 (2) 2028x1520-BGGR_PISP_COMP1
[0:14:36.064175552] [4514]  INFO RPI pisp.cpp:1450 Sensor: /base/axi/pcie@120000/rp1/i2c@80000/imx477@1a - Selected sensor format: 2028x1520-SBGGR12_1X12 - Selected CFE format: 2028x1520-PC1B
terminate called after throwing an instance of 'std::invalid_argument'
  what():  No tensor with name yolov5m_wo_spp_60p/yolov5_nms_postprocess
Aborted (core dumped)

Hi @matt,
You’re right, this looks like a bug in the tappas-core post-process library implementation, and possibly in the rpicam-apps side as well.
It seems the yolov6 post-process has not been externed properly; let us check on this further.
In the meantime, can you use the yolov5 variant?
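
For reference, the lookup that throws has roughly this shape in the current post-process (the full file appears later in this thread) – the yolov6 JSON ends up in a handler that hard-codes a yolov5 output-layer name:

#include "hailo_nms_decode.hpp"
#include "hailo_common.hpp"
#include "common/labels/coco_eighty.hpp"

// Same shape as the stock handler: the output-layer name is hard-coded, so a
// yolov6 HEF (which exposes no tensor with this name) makes get_tensor() throw
// std::invalid_argument("No tensor with name ..."), which rpicam-hello does
// not catch -- hence the abort in the log above.
void yolov5(HailoROIPtr roi)
{
    auto post = HailoNMSDecode(roi->get_tensor("yolov5m_wo_spp_60p/yolov5_nms_postprocess"),
                               common::coco_eighty);
    auto detections = post.decode<float32_t, common::hailo_bbox_float32_t>();
    hailo_common::add_detections(roi, detections);
}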


Just tried a yolov5 variant (personface) – I didn’t get that error, but I’m not seeing a box around my face either. Possibly related to this in the output? No post processing stage found for "object_detect_draw_cv"

(hailo_tappas_venv) matteius@matteius-desktop:~/rpicam-apps/assets$ rpicam-hello -t 0 --post-process-file ~/rpicam-apps/assets/hailo_yolov5_personface.json --lores-width 640 --lores-height 640
[3:17:28.816910806] [8754]  INFO Camera camera_manager.cpp:313 libcamera v0.3.0+65-6ddd79b5
[3:17:28.841426220] [8758]  INFO RPI pisp.cpp:695 libpisp version v1.0.6 b567f0455680 19-06-2024 (22:56:15)
[3:17:28.858275463] [8758]  INFO RPI pisp.cpp:1154 Registered camera /base/axi/pcie@120000/rp1/i2c@80000/imx477@1a to CFE device /dev/media0 and ISP device /dev/media1 using PiSP variant BCM2712_C0
Made X/EGL preview window
Reading post processing stage "hailo_yolo_inference"
No post processing stage found for "object_detect_draw_cv"
[3:17:28.968272256] [8754]  WARN V4L2 v4l2_pixelformat.cpp:344 Unsupported V4L2 pixel format RPBP
[3:17:28.968489237] [8754]  WARN V4L2 v4l2_pixelformat.cpp:344 Unsupported V4L2 pixel format RPBP
Mode selection for 2028:1520:12:P
    SRGGB10_CSI2P,1332x990/0 - Score: 3456.22
    SRGGB12_CSI2P,2028x1080/0 - Score: 1083.84
    SRGGB12_CSI2P,2028x1520/0 - Score: 0
    SRGGB12_CSI2P,4056x3040/0 - Score: 887
Stream configuration adjusted
[3:17:28.968879494] [8754]  INFO Camera camera.cpp:1183 configuring streams: (0) 2028x1520-YUV420 (1) 640x640-YUV420 (2) 2028x1520-BGGR_PISP_COMP1
[3:17:28.969094141] [8758]  INFO RPI pisp.cpp:1450 Sensor: /base/axi/pcie@120000/rp1/i2c@80000/imx477@1a - Selected sensor format: 2028x1520-SBGGR12_1X12 - Selected CFE format: 2028x1520-PC1B

Hi @matt,

Which version of Tappas did you use? This app is built for a specific version of Tappas for Raspberry Pi, packaged in the first tappas-core release; it will be merged into the main repository in the next Tappas release. I fixed the post-process to support more networks that use the HailoRT NMS post-process. If you used Tappas 3.28.0/1, I can send you the patch.

@giladn I used the latest tappas git repository to build it – v3.28.1, it looks like:

commit 4341aa360b7f8b9eac9b2d3b26f79fca562b34e4 (HEAD -> master, tag: v3.28.1, origin/master, origin/HEAD)
Author: HailoRT-Automation <[email protected]>
Date:   Mon May 20 10:52:09 2024 +0300

    v3.28.1

I wanted to use Ubuntu 24.04 for this setup and took some steps to make it happen – I would love to try your patch.

Hi,
Please note that HailoRT and the Hailo driver are available in our developer zone, so you don’t need to recompile them.
If you do use the ones from our developer zone, use Tappas v3.28.0.

To update the post-process, replace these files in your tappas repo and recompile them; see the instructions in: tappas/docs/write_your_own_application/compiling-your-code.rst at 4341aa360b7f8b9eac9b2d3b26f79fca562b34e4 · hailo-ai/tappas · GitHub

tappas/core/hailo/libs/postprocesses/detection/yolo_hailortpp.cpp
tappas/core/hailo/libs/postprocesses/detection/yolo_hailortpp.hpp

yolo_hailortpp.cpp

#include <regex>
#include "hailo_nms_decode.hpp"
#include "yolo_hailortpp.hpp"
#include "common/labels/coco_eighty.hpp"

static const std::string DEFAULT_YOLOV5S_OUTPUT_LAYER = "yolov5s_nv12/yolov5_nms_postprocess";
static const std::string DEFAULT_YOLOV5M_OUTPUT_LAYER = "yolov5m_wo_spp_60p/yolov5_nms_postprocess";
static const std::string DEFAULT_YOLOV5M_VEHICLES_OUTPUT_LAYER = "yolov5m_vehicles/yolov5_nms_postprocess";
static const std::string DEFAULT_YOLOV8S_OUTPUT_LAYER = "yolov8s/yolov8_nms_postprocess";
static const std::string DEFAULT_YOLOV8M_OUTPUT_LAYER = "yolov8m/yolov8_nms_postprocess";

static std::map<uint8_t, std::string> yolo_vehicles_labels = {
    {0, "unlabeled"},
    {1, "car"}};

void yolov5(HailoROIPtr roi)
{
    if (!roi->has_tensors())
    {
        return;
    }
    auto post = HailoNMSDecode(roi->get_tensor(DEFAULT_YOLOV5M_OUTPUT_LAYER), common::coco_eighty);
    auto detections = post.decode<float32_t, common::hailo_bbox_float32_t>();
    hailo_common::add_detections(roi, detections);
}

void yolov5s_nv12(HailoROIPtr roi)
{
    if (!roi->has_tensors())
    {
        return;
    }
    auto post = HailoNMSDecode(roi->get_tensor(DEFAULT_YOLOV5S_OUTPUT_LAYER), common::coco_eighty);
    auto detections = post.decode<float32_t, common::hailo_bbox_float32_t>();
    hailo_common::add_detections(roi, detections);
}

void yolov8s(HailoROIPtr roi)
{
    if (!roi->has_tensors())
    {
        return;
    }
    auto post = HailoNMSDecode(roi->get_tensor(DEFAULT_YOLOV8S_OUTPUT_LAYER), common::coco_eighty);
    auto detections = post.decode<float32_t, common::hailo_bbox_float32_t>();
    hailo_common::add_detections(roi, detections);
}

void yolov8m(HailoROIPtr roi)
{
    if (!roi->has_tensors())
    {
        return;
    }
    auto post = HailoNMSDecode(roi->get_tensor(DEFAULT_YOLOV8M_OUTPUT_LAYER), common::coco_eighty);
    auto detections = post.decode<float32_t, common::hailo_bbox_float32_t>();
    hailo_common::add_detections(roi, detections);
}

void yolox(HailoROIPtr roi)
{
    auto post = HailoNMSDecode(roi->get_tensor("yolox_nms_postprocess"), common::coco_eighty);
    auto detections = post.decode<float32_t, common::hailo_bbox_float32_t>();
    hailo_common::add_detections(roi, detections);
}

void yolov5m_vehicles(HailoROIPtr roi)
{
    auto post = HailoNMSDecode(roi->get_tensor(DEFAULT_YOLOV5M_VEHICLES_OUTPUT_LAYER), yolo_vehicles_labels);
    auto detections = post.decode<float32_t, common::hailo_bbox_float32_t>();
    hailo_common::add_detections(roi, detections);
}

void yolov5m_vehicles_nv12(HailoROIPtr roi)
{
    if (!roi->has_tensors())
    {
        return;
    }
    auto post = HailoNMSDecode(roi->get_tensor("yolov5m_vehicles_nv12/yolov5_nms_postprocess"), yolo_vehicles_labels);
    auto detections = post.decode<float32_t, common::hailo_bbox_float32_t>();
    hailo_common::add_detections(roi, detections);
}

void yolov5s_personface(HailoROIPtr roi)
{
    if (!roi->has_tensors())
    {
        return;
    }
    auto post = HailoNMSDecode(roi->get_tensor("yolov5s_personface_nv12/yolov5_nms_postprocess"), common::coco_eighty);
    auto detections = post.decode<float32_t, common::hailo_bbox_float32_t>();
    hailo_common::add_detections(roi, detections);
}

void yolov5_no_persons(HailoROIPtr roi)
{
    auto post = HailoNMSDecode(roi->get_tensor(DEFAULT_YOLOV5M_OUTPUT_LAYER), common::coco_eighty);
    auto detections = post.decode<float32_t, common::hailo_bbox_float32_t>();
    for (auto it = detections.begin(); it != detections.end();)
    {
        if (it->get_label() == "person")
        {
            it = detections.erase(it);
        }
        else
        {
            ++it;
        }
    }
    hailo_common::add_detections(roi, detections);
}

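// Generic entry point: rather than hard-coding one output-layer name, scan the
// ROI's tensors and decode whichever one matches "nms_postprocess". This lets
// networks without a dedicated handler here (e.g. the yolov6 HEF from earlier
// in this thread) reuse the same HailoRT NMS decode path.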
void filter(HailoROIPtr roi)
{
    if (!roi->has_tensors())
    {
        return;
    }
    std::vector<HailoTensorPtr> tensors = roi->get_tensors();
    // find the nms tensor
    for (auto tensor : tensors)
    {
        if (std::regex_search(tensor->name(), std::regex("nms_postprocess"))) 
        {
            auto post = HailoNMSDecode(tensor, common::coco_eighty);
            auto detections = post.decode<float32_t, common::hailo_bbox_float32_t>();
            hailo_common::add_detections(roi, detections);
        }
    }
}

yolo_hailortpp.hpp

/**
 * Copyright (c) 2021-2022 Hailo Technologies Ltd. All rights reserved.
 * Distributed under the LGPL license (https://www.gnu.org/licenses/old-licenses/lgpl-2.1.txt)
 **/
#pragma once

#include "hailo_objects.hpp"
#include "hailo_common.hpp"

// __BEGIN_DECLS / __END_DECLS wrap the declarations in extern "C", so the
// post-process entry points keep unmangled names that the calling application
// can look up at runtime.
__BEGIN_DECLS
void filter(HailoROIPtr roi);
void yolov5(HailoROIPtr roi);
void yolov5s_nv12(HailoROIPtr roi);
void yolov8s(HailoROIPtr roi);
void yolov8m(HailoROIPtr roi);
void yolox(HailoROIPtr roi);
void yolov5s_personface(HailoROIPtr roi);
void yolov5_no_persons(HailoROIPtr roi);
void yolov5m_vehicles(HailoROIPtr roi);
void yolov5m_vehicles_nv12(HailoROIPtr roi);
__END_DECLS

@giladn Thanks, I just tried this – everything compiled, and I moved some shared library objects to the right place so they could be loaded, but I still get the warning/error No post processing stage found for "object_detect_draw_cv".

It seems everything is working except that it is not drawing the results back onto the preview window.

matteius@matteius-desktop:/opt/hailo/tappas/lib$ rpicam-hello -t 0 --post-process-file ~/rpicam-apps/assets/hailo_yolov5_personface.json --lores-width 640 --lores-height 640
[0:09:17.676971366] [4616] INFO Camera camera_manager.cpp:313 libcamera v0.3.0+65-6ddd79b5
[0:09:17.698146052] [4620] INFO RPI pisp.cpp:695 libpisp version v1.0.6 b567f0455680 19-06-2024 (22:56:15)
[0:09:17.720807246] [4620] INFO RPI pisp.cpp:1154 Registered camera /base/axi/pcie@120000/rp1/i2c@80000/imx477@1a to CFE device /dev/media0 and ISP device /dev/media2 using PiSP variant BCM2712_C0
Made X/EGL preview window
HailoRT warning: Cannot create log file hailort.log! Please check the directory . write permissions.
Reading post processing stage "hailo_yolo_inference"
No post processing stage found for "object_detect_draw_cv"
[0:09:17.826855895] [4616] WARN V4L2 v4l2_pixelformat.cpp:344 Unsupported V4L2 pixel format RPBP
[0:09:17.827069801] [4616] WARN V4L2 v4l2_pixelformat.cpp:344 Unsupported V4L2 pixel format RPBP
Mode selection for 2028:1520:12:P
SRGGB10_CSI2P,1332x990/0 - Score: 3456.22
SRGGB12_CSI2P,2028x1080/0 - Score: 1083.84
SRGGB12_CSI2P,2028x1520/0 - Score: 0
SRGGB12_CSI2P,4056x3040/0 - Score: 887
Stream configuration adjusted
[0:09:17.827427225] [4616] INFO Camera camera.cpp:1183 configuring streams: (0) 2028x1520-YUV420 (1) 640x640-YUV420 (2) 2028x1520-BGGR_PISP_COMP1
[0:09:17.827633760] [4620] INFO RPI pisp.cpp:1450 Sensor: /base/axi/pcie@120000/rp1/i2c@80000/imx477@1a - Selected sensor format: 2028x1520-SBGGR12_1X12 - Selected CFE format: 2028x1520-PC1B

I tried Raspberry Pi OS on a different device and it worked pretty much out of the box, but it’s not clear yet how to tune things once I get that far. It seems the RPi examples all feed in a fairly low-res 640x640 image, and I was expecting to see more classes of objects detected than it really did – though it did detect me and a TV nearby.

I finally got yolov8 inference (and some other models) working on Ubuntu. I had to remove and rebuild my rpicam-apps build directory after rebuilding tappas; somehow I got it to work. Where can we see everything it is trained to recognize? It certainly detects objects, but there are plenty of objects it appears not to know about.

Hi @matt, this network is trained on the COCO dataset (80 classes).
You can see all the classes in the coco_eighty.hpp file in TAPPAS.
For RPi users who installed the tappas-core package (via apt install hailo-all), the classes file can be found in /usr/include/hailo/tappas/common/labels/coco_eighty.hpp
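
For reference, that header is just a plain map from class id to label. A truncated sketch of its shape (check the file itself for the exact ids – this excerpt lists only the first few entries, not the full 80-class table):

#include <cstdint>
#include <map>
#include <string>

// Excerpt only -- the real common/labels/coco_eighty.hpp defines id 0
// ("unlabeled") plus all 80 COCO classes (person, car, tv, etc.).
static std::map<std::uint8_t, std::string> coco_eighty = {
    {0, "unlabeled"},
    {1, "person"},
    {2, "bicycle"},
    {3, "car"},
    // ... remaining COCO classes
};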

An important note to other users who want to run Ubuntu on the Pi:
To use Ubuntu on the RPi you will have to compile all of the Hailo and RPi dependencies yourself.
If you do not have a very good reason, don’t do it.
We invested a lot to make the RPi installation as simple and robust as possible, and we will not be able to support customers who choose another path.

Thanks for the info on COCO – let’s say I want to train on another dataset; is there a tutorial or guide on how to do this?

Also, I’ve started moving past the rpicam examples. I am trying to run the GStreamer examples and I’m getting:


CHECK_EXPECTED_AS_STATUS failed with status=26
[HailoRT] [error] HEF format is not compatible with device. Device arch: HAILO8L, HEF arch: HAILO8
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_INVALID_HEF(26)
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_INVALID_HEF(26)
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_INVALID_HEF(26)
CHECK_EXPECTED_AS_STATUS failed with status=26

It seems that even specifying python main.py rpi doesn’t download compatible models.

Also, I want to provide feedback that I think it’s a bit short-sighted not to support one of the two major OSes for the Raspberry Pi. It’s definitely tenable to get it working on Ubuntu, especially for users familiar with compiling code. I may try to write up a tutorial on it once I get further with what I am trying to do. It’s not like the pre-packaged Raspberry Pi OS examples get me very far anyway – I suspect that as soon as someone wants to do something beyond the out-of-the-box examples, they’ll be in the same spot as me: trying to figure out how to train models for this platform and having to rebuild everything on their target OS.

One thing I’ll add for users trying to get this far on Ubuntu: beyond compiling things, I also had to relocate some models and shared libraries onto the path.

Hi @matt, nice to finally see you :wink:
The error you get is because you are using an H8 HEF on an H8L device.
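
If you want to double-check which silicon you have before downloading HEFs, hailortcli fw-control identify prints the device architecture, and something along the following lines should work with the HailoRT C++ API. Treat it as a rough sketch: the names used here (Device::create(), get_architecture(), HAILO_ARCH_HAILO8L) are based on recent HailoRT 4.x headers and may differ on your version.

// Hypothetical sketch: confirm the device architecture so you know whether
// you need HEFs compiled for the h8l target. Names follow recent HailoRT 4.x
// headers and may differ on your HailoRT version.
#include "hailo/hailort.hpp"
#include <iostream>

int main()
{
    auto device = hailort::Device::create();            // first available Hailo device
    if (!device.has_value()) {
        std::cerr << "Failed to open Hailo device, status " << static_cast<int>(device.status()) << std::endl;
        return 1;
    }

    auto arch = device.value()->get_architecture();
    if (!arch.has_value()) {
        std::cerr << "Failed to query architecture, status " << static_cast<int>(arch.status()) << std::endl;
        return 1;
    }

    // The Raspberry Pi AI Kit module is HAILO8L, so it needs *_h8l HEFs.
    if (arch.value() == HAILO_ARCH_HAILO8L)
        std::cout << "HAILO8L device: use HEFs compiled for H8L" << std::endl;
    else
        std::cout << "Not an H8L device (architecture enum value " << static_cast<int>(arch.value()) << ")" << std::endl;

    return 0;
}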

I would very much appreciate it if you could put together a guide on how to use Hailo with Ubuntu on the Pi; it would be very useful for anyone who wants to run Ubuntu on the Pi. However, this is not something we can support officially. I understand your frustration, but Ubuntu is not officially supported by RPi, so we will not be able to support it from our side either. See the RPi post on their forum: STICKY: where to go to get Ubuntu support for the raspberry PI range of computers.

Note that the HEF files are compatible with all OSes – you are not expected to train a network on the Pi.

For retraining, see this documentation in our Model Zoo GitHub: RETRAIN_ON_CUSTOM_DATASET

We will soon release a tutorial describing the entire process from retraining a model to getting it running on the Pi. We know that you and the community have been waiting, and we appreciate your patience. Our goal is to ensure that, when available, it will be easy to use and deploy.