Raspberry Pi: model output image is cropped

Hello, I searched to see if anyone else was dealing with this problem, but I couldn’t find anything. I am using a Raspberry Pi 5 with a Hailo-8L AI HAT. I found that the model output is cropped and mirrored (the mirroring isn’t a big problem). The cropping, however, means that some objects the camera sees are never detected by the model because they fall outside the cropped frame, and that is a real problem in practice.

I don’t really mind the resolution, so I can lower it, but I would like the entire image to be available for processing. Do you know which function in gstreamer_helper_pipelines or estimation_pipeline does this, or is it perhaps hidden somewhere else? Any idea how to make it work?

I tried editing the function get_pipeline_string(self), but I think this is not the way I should go.

Cropped image from model output

Original input of Raspberry Camera 3

Hi @user701 ,

Can you please share what model and app type you are using?
Have you used one of the apps from the hailo-ai/hailo-apps repository on GitHub?

Thanks,

I am using your examples from the hailo-ai/hailo-rpi5-examples repository on GitHub.

From the models I use pose_estimation.py and detection.py, and the output cropping is similar in both, so it probably happens in the main library.

Hi @user701 ,

  1. Mirror: in hailo_apps/python/core/gstreamer/gstreamer_helper_pipelines.py (main branch of hailo-ai/hailo-apps), mirror_image=True is the default. You can disable it by passing --no-mirror (if your app supports it) or by setting mirror_image=False in the SOURCE_PIPELINE call.
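To illustrate how such a mirror toggle typically works, here is a minimal sketch. The helper function below is a simplified stand-in, not the actual hailo-apps implementation; only the mirror_image parameter name comes from the thread. The GStreamer elements (videoflip with method=horizontal-flip) are real and are the standard way to mirror a video stream in a pipeline string:

```python
# Simplified sketch of a mirror toggle in a GStreamer pipeline builder.
# When mirror_image is True, a `videoflip method=horizontal-flip`
# element is inserted, which mirrors each frame left-to-right.
def source_pipeline(video_source="libcamerasrc", mirror_image=True):
    parts = [video_source, "videoconvert"]
    if mirror_image:
        parts.append("videoflip method=horizontal-flip")
    return " ! ".join(parts)

print(source_pipeline(mirror_image=True))
# libcamerasrc ! videoconvert ! videoflip method=horizontal-flip
print(source_pipeline(mirror_image=False))
# libcamerasrc ! videoconvert
```

If the app exposes --no-mirror, that flag presumably just sets mirror_image=False before the pipeline string is built.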

  2. Crop: in the same file you can see the scaling (resolution) logic. Cameras may use different sensor readout modes depending on the requested resolution. At lower resolutions (e.g., 640x480), the sensor may read out only a center crop instead of reading the full sensor and downscaling. This is hardware/driver-level behavior, so it happens before frames ever reach GStreamer.
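The effect of a center-crop readout mode on field of view can be made concrete with a small arithmetic sketch. The full-sensor resolution below is that of the IMX708 sensor in the Raspberry Pi Camera Module 3 (4608x2592); the 640x480 crop window is an assumed example, not a claim about the mode the driver actually picks:

```python
# Why a center-crop readout narrows the field of view compared to
# full-sensor readout followed by downscaling.
SENSOR_W, SENSOR_H = 4608, 2592  # IMX708 (Camera Module 3) full resolution

def fov_fraction(out_w, out_h, crop):
    """Fraction of the sensor's width/height that ends up in the frame."""
    if crop:
        # Center-crop mode: only an out_w x out_h window of sensor
        # pixels is read out, so the field of view shrinks.
        return out_w / SENSOR_W, out_h / SENSOR_H
    # Full readout + downscale: the whole sensor area contributes.
    return 1.0, 1.0

print(fov_fraction(640, 480, crop=True))   # roughly (0.14, 0.19)
print(fov_fraction(640, 480, crop=False))  # (1.0, 1.0)
```

In other words, a cropped 640x480 mode can see only about 14% of the sensor width, which matches the symptom of objects at the edges never reaching the model. Forcing a full-FoV sensor mode (or requesting a resolution the driver serves from the full sensor) and letting the pipeline scale down would keep the whole scene available.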

Thanks,