Fine-tuning LPRNet

Hey everyone!

I have a quick question about retraining/fine-tuning the provided LPRNet model: I need it to handle alphanumeric content, not just digits. For that use case, do I need to train the model from scratch, or is it better to fine-tune the pre-trained model?

Many thanks!

Hey @Nils-Oliver ,

Fine-tuning the pre-trained LPRNet model is recommended over training from scratch, as it:

  • Leverages the visual features the network has already learned from plate images
  • Requires less training time and data
  • Maintains model efficiency
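One caveat when moving from a digits-only to an alphanumeric character set: the classifier head's output size changes (more classes), so its pretrained weights can't be reused, while the backbone transfers as-is. A common, framework-agnostic way to handle this is to copy every pretrained tensor whose shape still matches the new model and let the mismatched head train from scratch. Below is a minimal sketch of that selection step, using plain dicts of layer-name → shape for clarity (layer names and class counts here are illustrative, not LPRNet's actual architecture); a real implementation would apply the same filter to framework tensors before loading.

```python
def select_transferable(pretrained_shapes, new_shapes):
    """Names safe to copy from the pretrained checkpoint: layers present
    in both models with identical tensor shapes."""
    return sorted(
        name
        for name, shape in pretrained_shapes.items()
        if new_shapes.get(name) == shape
    )


# Hypothetical digits-only model: 11 output classes (0-9 + CTC blank).
pretrained = {
    "backbone.conv1": (64, 3, 3, 3),
    "backbone.conv2": (128, 64, 3, 3),
    "head.classifier": (11, 128, 1, 1),
}

# Alphanumeric variant: 37 classes (0-9, A-Z + blank), so the head differs.
alnum = {
    "backbone.conv1": (64, 3, 3, 3),
    "backbone.conv2": (128, 64, 3, 3),
    "head.classifier": (37, 128, 1, 1),
}

print(select_transferable(pretrained, alnum))
# → ['backbone.conv1', 'backbone.conv2']  (head is re-initialized)
```

The same idea is why fine-tuning still pays off here: the backbone's learned features carry over even though the output layer must be retrained for the larger character set.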