I’m working on using the depth models from the zoo, fast_depth and scdepthv3. I think I need a post-processing config or code for fast_depth, since I’m not getting the expected output, only the camera output. Note that I do configure a post-processing JSON file, but it contains only the location of the model file. For scdepthv3, without a matching post-processing config, I’m getting the following errors:
I think I would prefer to use fast_depth if that helps. I’m using RP5-Hailo 8 HAT board. I’d appreciate any assistance on this. I’d like to use depth detection in motion control of a hobby-sized vehicle.
By the way, thanks for your great products. The inference capability in object detection is amazing!
To get started with FastDepth on your RP5-Hailo 8 HAT, follow these steps:
Configure the Model: Ensure that the input tensor shape and format of your FastDepth model match the requirements of your pipeline. Use a JSON configuration file to specify the preprocessing and inference steps.
Here’s an example JSON snippet:
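A minimal illustrative sketch is below; the key names (`model_path`, `input_shape`, and so on) are assumptions for illustration, not a documented Hailo schema - adapt them to the configuration format your pipeline actually reads:

```json
{
    "model_path": "fast_depth.hef",
    "preprocessing": {
        "input_shape": [224, 224, 3],
        "normalize": true
    },
    "postprocessing": {
        "output_shape": [224, 224, 1]
    }
}
```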
Implement Post-Processing: Use Python code to convert the output tensor from FastDepth into a meaningful depth map. Here’s an example of how you can visualize the depth map:
import numpy as np
import cv2

def visualize_depth(depth_map):
    # Normalize the depth map to the range 0-255
    depth_map = cv2.normalize(depth_map, None, 0, 255, cv2.NORM_MINMAX)
    depth_image = depth_map.astype(np.uint8)
    cv2.imshow('Depth Map', depth_image)
    cv2.waitKey(0)

# Assume 'output' is the model inference output tensor
visualize_depth(output.reshape((224, 224)))  # Adjust the shape as necessary
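Since you mentioned motion control, here is a minimal sketch of how a depth map could feed a simple nearest-obstacle check. It assumes the output has already been converted to metric depth (the scaling for your particular deployment is an assumption, not something from the model zoo), and the region-of-interest size and threshold are arbitrary examples:

```python
import numpy as np

def nearest_obstacle_distance(depth_map, roi_fraction=0.5):
    """Return the minimum depth inside a centered region of interest.

    depth_map: 2D array of per-pixel depth values (assumed to be meters).
    roi_fraction: fraction of width/height around the center to inspect.
    """
    h, w = depth_map.shape
    dh, dw = int(h * roi_fraction / 2), int(w * roi_fraction / 2)
    roi = depth_map[h // 2 - dh : h // 2 + dh, w // 2 - dw : w // 2 + dw]
    return float(np.min(roi))

# Example with a synthetic 224x224 depth map and a close object in the center
depth = np.full((224, 224), 5.0)
depth[100:120, 100:120] = 0.8  # simulated obstacle at 0.8 m
if nearest_obstacle_distance(depth) < 1.0:
    print("Obstacle ahead - stop or steer")
```

In a real pipeline you would call this from your post-processing function on every frame instead of on a synthetic array.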
I was able to get the Fast Depth model working using the Tappas example for Depth Estimation and changed the input from a file to a camera.
However, I do want to use the information you gave me to build my own post-processing. I wasn’t able to get hailopython to work for any of the examples from the Python post-processing doc - a SIGSEGV error is thrown. For example, I used:
import hailo
# Importing VideoFrame before importing GST is a must
from gsthailo import VideoFrame
from gi.repository import Gst

def post_process_function(video_frame: VideoFrame):
    print("My first Python postprocess!")

def other_post_function(video_frame: VideoFrame):
    print("Other Python postprocess!")
and got:
Caught SIGSEGV
#0 0x00007fff2c5c82e4 in __GI___wait4 (pid=5176, stat_loc=0x7fffc1ea75ac, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:30
#1 0x00007fff2c7747d0 in g_on_error_stack_trace () at /lib/aarch64-linux-gnu/libglib-2.0.so.0
#2 0x000055560cf4626c in ()
#3 0x00007fff2ca887b0 in <signal handler called> ()
#4 0x00007fff23ad65b0 in _PyErr_Fetch () at /lib/aarch64-linux-gnu/libpython3.11.so.1.0
...
Since you’ve already begun working with the Tappas depth estimation, I suggest building on the code in this repository: GitHub - hailo-ai/hailo-rpi5-examples
This repo contains a version of Tappas optimized specifically for the RPi 5 + Hailo-8. It includes inference and pipeline-building capabilities built right in. From there, you have the flexibility to write your own post-processing logic in either C++ or Python, depending on your preference.
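One step you will almost certainly need in your own post-processing is dequantizing the raw output tensor: Hailo devices emit quantized (e.g. uint8/uint16) tensors, and the float value is recovered by the standard affine formula scale * (quantized - zero_point). A minimal sketch, assuming you obtain the quantized buffer and its quantization parameters from your pipeline (the parameter names here are illustrative):

```python
import numpy as np

def dequantize(q_tensor, scale, zero_point):
    # Affine dequantization: float = scale * (quantized - zero_point)
    return scale * (q_tensor.astype(np.float32) - zero_point)

# Example with made-up quantization parameters
q = np.array([[0, 128], [200, 255]], dtype=np.uint8)
depth = dequantize(q, scale=0.02, zero_point=0)
```

The resulting float array is what you would pass to a visualization or distance-estimation step.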
Additionally, if you’re interested in learning how to utilize Hailopython, the examples in this repository demonstrate its usage.
For more specifics about your SIGSEGV, can you please provide the full log?