I trained a custom YOLOv8s model to detect Asian hornets and look-alike insects today. Since I don’t have access to an x86 Unix machine, I used Google Colab and Google Cloud Platform for training, optimization, and compilation.
If you’re looking to train a custom model for object detection and don’t have an x86 Unix machine, this cloud-based approach could be an alternative. I’ve shared my notes on GitHub, and it’s still a work in progress, so any feedback or suggestions are welcome!
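Once a runtime is available (Colab or a cloud VM), the training run itself can be started with the Ultralytics CLI. A minimal sketch; `hornets.yaml` is a hypothetical placeholder for your own dataset config (image paths and class names), and the epoch/image-size values are just common defaults, not the exact settings I used:

```shell
# Install the Ultralytics package, which provides the `yolo` CLI
pip install ultralytics

# Train YOLOv8s on a custom detection dataset.
# `hornets.yaml` is a placeholder for your dataset config file.
yolo detect train data=hornets.yaml model=yolov8s.pt epochs=100 imgsz=640
```

The trained weights land under `runs/detect/train/weights/` by default, which is what you would then export or compile for deployment.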
I used Google Cloud Platform (not AWS, although AWS might be cheaper for similar functionality). I took advantage of the free tier’s e2-micro machine type to get started with running Jupyter notebooks. The settings below have worked well so far, and upgrading resources is always an option for faster optimization. Here’s what I’m using currently:
Region/Zone: Select a region close to you or your target audience (this affects latency), and a zone within that region (this affects availability and redundancy).
Machine configuration: select the E2 series, e.g. e2-standard-8 (32 GB memory; a smaller type with 16 GB saves costs). Availability policies (optional): GCP offers Spot VMs at a 60–90% discount, but these VMs can be interrupted whenever GCP needs the capacity back.
Boot disk: Operating system: select Ubuntu 20.04. Optional: Boot disk type: select Standard persistent disk to save costs. Size (GB): 500 GB; you can resize upwards later if needed, but downgrading the size is not possible.
Firewall: allow traffic on port 80 (the default for HTTP; only needed if you want to reach the Jupyter notebook from a local browser, e.g. for the tutorial).
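For repeatability, the console settings above can also be expressed as `gcloud` commands. A sketch under stated assumptions: the instance name `hornet-vm` and zone `europe-west1-b` are hypothetical placeholders, and I've mapped the memory sizes onto the e2-standard-8 type (32 GB); swap in your own values:

```shell
# Create the VM: E2 series, Spot provisioning for the discount,
# Ubuntu 20.04 on a 500 GB standard persistent boot disk.
gcloud compute instances create hornet-vm \
  --zone=europe-west1-b \
  --machine-type=e2-standard-8 \
  --provisioning-model=SPOT \
  --image-family=ubuntu-2004-lts \
  --image-project=ubuntu-os-cloud \
  --boot-disk-type=pd-standard \
  --boot-disk-size=500GB \
  --tags=http-server

# Allow inbound HTTP (port 80) so the Jupyter notebook is reachable
# from a local browser; applies only to instances with the tag above.
gcloud compute firewall-rules create allow-http \
  --allow=tcp:80 \
  --target-tags=http-server
```

Remember that a Spot VM can be preempted mid-training, so checkpoint regularly or resume from the last saved weights.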