Lane detection example

Hello

I have successfully generated a .hef file for the Ultra Fast Lane Detection (UFLD) model, attached below. When I queried the model with hailortcli, it correctly reported the input and output tensors. However, I'm unclear on the correct procedure to run and validate the model.

I created a Python script to load an image and output the lane locations, but the results do not appear correct. Could you please help identify whether the issue lies in the .hef file or in my code implementation?

Any guidance would be greatly appreciated.

The .hef file and converter script are attached here:
(venv_hailo_rpi5_examples) dell@FrontCam:~/hailo-rpi5-examples/basic_pipelines $ hailortcli parse-hef ufld_v2_tusimple.hef
Architecture HEF was compiled for: HAILO8L
Network group name: ufld_v2_tusimple, Multi Context - Number of contexts: 3
Network name: ufld_v2_tusimple/ufld_v2_tusimple
VStream infos:
Input ufld_v2_tusimple/input_layer1 UINT8, NHWC(320x1600x3)
Output ufld_v2_tusimple/conv21 UINT8, FCR(10x50x8)
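According to the parse-hef output above, the model expects a UINT8 NHWC buffer of shape 320x1600x3. A common source of wrong results is feeding the image at the wrong resolution or dtype. Below is a minimal preprocessing sketch, assuming any normalization was fused into the HEF at compile time (so the buffer stays uint8); the nearest-neighbor resize here is just for self-containedness, and in practice you would use cv2.resize or PIL:

```python
import numpy as np

# Input shape reported by `hailortcli parse-hef`: NHWC (320, 1600, 3), UINT8
MODEL_H, MODEL_W = 320, 1600

def preprocess(image: np.ndarray) -> np.ndarray:
    """Resize an HxWx3 uint8 image to the model's input resolution
    with nearest-neighbor sampling, and add a batch dimension.
    Assumes normalization (if any) is baked into the HEF, so the
    buffer is passed to HailoRT as raw uint8."""
    h, w = image.shape[:2]
    rows = (np.arange(MODEL_H) * h // MODEL_H).astype(np.intp)
    cols = (np.arange(MODEL_W) * w // MODEL_W).astype(np.intp)
    resized = image[rows][:, cols]            # (320, 1600, 3) uint8
    return np.expand_dims(resized, axis=0)    # (1, 320, 1600, 3)
```

If your converter script applied mean/std normalization during compilation, double-check that you are not normalizing a second time in Python.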

Hey @Mohamed_Fouad,

Welcome to the Hailo Community!

I’ll look into this issue and get back to you with more details soon.

In the meantime, I’d suggest trying to run your model using the detection pipeline from the Raspberry Pi examples. A couple of questions to help troubleshoot:

What post-processing method are you using? If you’re using custom post-processing logic, you’ll need to implement that and test it out.

Try running it on some dashcam footage or car camera video that shows lane markings - this should give you a good sense of whether the detection is working properly.
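Since UFLD formulates lane detection as row-anchor classification, the raw output tensor is not a set of lane coordinates; it has to be decoded. The sketch below shows the general shape of that decode, under explicit assumptions: the tensor axes are taken to be (grid_cells, row_anchors, lanes), the UINT8 output is assumed to already be dequantized to logits, and the confidence threshold of 0.3 is arbitrary. The exact axis order, background-cell handling, and row-anchor positions for this specific compiled model (output FCR 10x50x8) need to be checked against the converter script:

```python
import numpy as np

def decode_rows(logits: np.ndarray, img_w: int, img_h: int):
    """Generic UFLD-style row-anchor decode (a sketch, not the exact
    TuSimple post-process). Assumes `logits` has shape
    (grid_cells, row_anchors, lanes): for each row anchor and lane,
    a distribution over horizontal grid cells."""
    grid, rows, lanes = logits.shape
    # Softmax over the grid axis to get a per-row, per-lane distribution
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    prob = e / e.sum(axis=0, keepdims=True)
    # Expected grid index -> x position in pixels
    idx = np.arange(grid).reshape(-1, 1, 1)
    x = (prob * idx).sum(axis=0) / (grid - 1) * (img_w - 1)   # (rows, lanes)
    # Row anchors assumed evenly spaced over the image height
    y = np.linspace(0, img_h - 1, rows)
    # Keep a point only where the distribution is confident
    conf = prob.max(axis=0)                                   # (rows, lanes)
    return [
        [(float(x[r, l]), float(y[r])) for r in range(rows) if conf[r, l] > 0.3]
        for l in range(lanes)
    ]
```

Because the output vstream is UINT8, remember to dequantize it (value minus zero-point, times scale, using the quant info HailoRT reports for the output) before running a softmax like this; skipping dequantization is another common cause of garbage lane positions.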

I’ll update you once I’ve had a chance to dig deeper into your specific setup.

Thank you for the reply.

I am using laneonly.py to save the inference arrays to a txt file, and then I try to map the arrays onto the input image.

I am using a road image with lane markings as the input.

If I used the detection pipeline, it would need major modification, since it is not compatible with lane outputs.