Hailo 8 limitations

Don't think my first post got through, trying again:

I'm unsure if the Hailo-8 accelerator on a stack with an i.MX 8M Plus fits our use case. The goal is to enhance a 1080p60 stream (possibly lower, but no lower than 720p30) and perform tasks like image enhancement, color correction, and denoising without introducing too much delay or losing FPS/quality relative to the original image.

Is this a task that requires significant computing power, like a topside dedicated graphics card?

Another use case would be depth estimation, but we would still want to retain the high FPS/resolution. Is this too computationally expensive?

Hey @morten.berge,

Welcome to the Hailo Community!

The tasks you mentioned are definitely achievable with the Hailo-8 processor. Let me break it down for you:

Image Enhancement (Super-Resolution): We have models in our Hailo Model Zoo that can handle image super-resolution at high frame rates. You’ll find the relevant models there.

Denoising and Color Correction: Similar to super-resolution, we have optimized models for denoising and color correction tasks. Check out the Model Zoo for the specific models that fit your needs.

Performance Targets: If you’re aiming for 720p resolution at 30 FPS, you should be able to achieve smooth performance with a well-optimized streaming setup and efficient model implementations. The Hailo-8 is capable of delivering the required processing power.
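
For context, a minimal sketch of that kind of streaming setup in Python might look like the following. The `hailonet` and `hailofilter` elements come from the Hailo TAPPAS GStreamer plugins; the camera device, HEF path, and post-process `.so` are placeholders you would swap for your own model, and the exact post-processing step depends on which Model Zoo model you pick:

```python
# Minimal 720p30 GStreamer pipeline sketch that pushes frames through the
# Hailo-8 via the TAPPAS "hailonet" element. Paths and device node are
# placeholders, not a drop-in configuration.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

PIPELINE = (
    "v4l2src device=/dev/video0 ! "
    "video/x-raw,width=1280,height=720,framerate=30/1 ! "
    "videoconvert ! "
    "hailonet hef-path=/path/to/model.hef ! "          # inference on the Hailo-8
    "hailofilter so-path=/path/to/postprocess.so ! "   # model-specific post-processing
    "videoconvert ! autovideosink sync=false"          # sync=false avoids clock-induced latency
)

pipeline = Gst.parse_launch(PIPELINE)
pipeline.set_state(Gst.State.PLAYING)

loop = GLib.MainLoop()
try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)
```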

Depth Estimation: Running depth estimation models at high frame rates is definitely feasible with the Hailo-8. Our Model Zoo includes depth estimation models that are optimized for performance.
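
If you prefer to drive a compiled model directly from Python instead of GStreamer, the HailoRT Python API follows the pattern below. This is only a sketch: the HEF filename is a placeholder, and the real input resolution comes from whichever Model Zoo model (super-resolution, denoising, depth) you compile:

```python
# Sketch of frame-by-frame inference with the HailoRT Python API.
import numpy as np
from hailo_platform import (HEF, VDevice, ConfigureParams, HailoStreamInterface,
                            InferVStreams, InputVStreamParams, OutputVStreamParams,
                            FormatType)

# Placeholder HEF -- use the model you compiled from the Model Zoo.
hef = HEF("model.hef")

with VDevice() as device:
    params = ConfigureParams.create_from_hef(hef, interface=HailoStreamInterface.PCIe)
    network_group = device.configure(hef, params)[0]

    in_params = InputVStreamParams.make(network_group, format_type=FormatType.UINT8)
    out_params = OutputVStreamParams.make(network_group, format_type=FormatType.UINT8)

    input_info = hef.get_input_vstream_infos()[0]
    # Dummy frame at the model's input size; in practice this is a frame (or tile)
    # scaled down to that resolution before inference.
    frame = np.zeros((1, *input_info.shape), dtype=np.uint8)

    with network_group.activate():
        with InferVStreams(network_group, in_params, out_params) as infer_pipeline:
            results = infer_pipeline.infer({input_info.name: frame})
```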

Retraining Networks: Retraining networks for the Hailo-8 is not computationally expensive. You don’t need a high-end GPU setup. We typically train our networks on a single GPU (RTX 4060 or 4070) using a regular PC. The training time depends on factors like network complexity, dataset size, and the level of optimization required:

  • For basic training (without performance mode), it usually takes a few hours.
  • For full optimization to achieve maximum performance, it can take up to 2 days.
  • You can train your network on a GTX 1080 Ti, RTX 2080, or other GPUs (any NVIDIA GPU with a Pascal, Turing, or Ampere architecture).

Feel free to reach out if you have any further questions or need additional guidance. We’re here to help you make the most of the Hailo-8 processor! :rocket:

Thank you so much for the reply.

I have to do more research, as I'm still a bit fresh to the AI world :slight_smile: I guess I'm just not certain what getting a 720/1080p output looks like in practice (GStreamer/Python) when the input resolution of the models is so low.

Where can I locate this color correction model you mention? I haven’t seen anything about color correction in the Model Zoo, but this would be perfect to test :smiley:

Thanks in advance,
Morten

Where can I locate the color correction model? And would stitching this kind of processing result in ugly squares around the “stitches”?

Hey @morten.berge ,

This is the denoising and color correction model:

I don’t think this should produce ugly squares, but if you do run into them: Hailo TAPPAS includes a Tile Aggregator that helps merge tiles together without visible seams. If artifacts appear, you may need overlapping tiles combined with blending techniques.
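
To make that concrete, the tiled pipelines in TAPPAS roughly follow the topology below. This is a sketch only; the element and property names follow the TAPPAS tiling example and may differ between TAPPAS versions, so treat them as assumptions to verify against your installation:

```python
# Rough topology of a TAPPAS-style tiled pipeline: the cropper splits each frame
# into overlapping tiles, hailonet processes every tile on the Hailo-8, and the
# aggregator stitches results back onto the full frame.
# NOTE: element/property names are taken from the TAPPAS tiling example and
# should be checked against your TAPPAS version.
TILED_PIPELINE = (
    "v4l2src device=/dev/video0 ! video/x-raw,width=1920,height=1080 ! videoconvert ! "
    "hailotilecropper name=cropper "
    "tiles-along-x-axis=2 tiles-along-y-axis=2 overlap-x-axis=0.1 overlap-y-axis=0.1 "
    "hailotileaggregator name=agg "
    "cropper. ! queue ! agg.sink_0 "                                            # full-frame branch
    "cropper. ! queue ! hailonet hef-path=/path/to/model.hef ! queue ! agg.sink_1 "  # per-tile branch
    "agg. ! videoconvert ! autovideosink sync=false"
)
# Launch it the same way as the earlier sketch, e.g. Gst.parse_launch(TILED_PIPELINE).
```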

These are two denoising models, one for coloured images, and one for black and white.

If anyone is looking for color correction like funieGAN or similar, this is not it.

Have you already successfully compiled one, or do you need help with compilation?
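
If it helps, the usual compile flow with the Dataflow Compiler Python API looks roughly like this. It's a sketch, assuming the DFC is installed in your environment; the ONNX path, model name, input shape, and calibration data are placeholders, and the exact call names should be checked against your DFC version:

```python
# Sketch of ONNX -> HEF compilation with the Hailo Dataflow Compiler Python API.
import numpy as np
from hailo_sdk_client import ClientRunner

# Placeholders -- replace with your model and a real calibration set of
# representative frames (random data is only here to keep the sketch runnable).
onnx_path = "model.onnx"
calib_data = np.random.rand(64, 360, 640, 3).astype(np.float32)

runner = ClientRunner(hw_arch="hailo8")

# 1. Parse the ONNX model into Hailo's internal representation.
runner.translate_onnx_model(onnx_path, "my_model")

# 2. Quantize/optimize using the calibration set.
runner.optimize(calib_data)

# 3. Compile to a HEF that hailonet / HailoRT can load.
hef = runner.compile()
with open("my_model.hef", "wb") as f:
    f.write(hef)
```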