Two questions:
1: How does the Hailo picamera2 library create the low-res image for inference? Does it crop it or stretch it? (There's a sketch of the kind of pipeline I mean after these questions.)
2: If I plan to train a model at a specific resolution, does it make sense to have all of the detection images I capture for training at that same resolution? My logic is that the resolution then stays constant through the whole pipeline. I understand it's good to have variation to make a model more robust, but I plan on keeping this camera pointed at the same spot forever.
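For context, this is roughly the setup I have in mind, a minimal sketch using picamera2 with a main stream for saving training images and a lores stream sized to the model input. The stream sizes, formats, and model input size here are placeholders/assumptions, not my final config:

```python
# Minimal sketch of the pipeline I'm asking about (sizes are placeholders).
# The main stream is what I would save for training images; the lores stream
# is the low-res image that gets handed to the Hailo for inference.
from picamera2 import Picamera2

picam2 = Picamera2()
config = picam2.create_preview_configuration(
    # Full-resolution 16:9 capture for saving training images (placeholder size).
    main={"size": (1920, 1080), "format": "RGB888"},
    # Square lores stream matching an assumed 640x640 model input.
    # RGB888 lores assumes a Pi 5; note the aspect ratio differs from main,
    # which is exactly why I'm asking whether it gets cropped or stretched.
    lores={"size": (640, 640), "format": "RGB888"},
)
picam2.configure(config)
picam2.start()

frame = picam2.capture_array("lores")  # the low-res image in question
print(frame.shape)
```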
Thank you