Learning to predict accurately from a few data samples is a central challenge in modern data-hungry machine learning. On natural images, human vision typically outperforms deep learning approaches in few-shot learning. However, we hypothesize that aerial and satellite images are more challenging to the human eye, particularly when the image resolution is comparatively low, as with the 10m ground sampling distance of Sentinel-2. In this study, we benchmark model-agnostic meta-learning (MAML) algorithms against human participants on few-shot land cover classification with Sentinel-2 imagery from the Sen12MS dataset. We find that categorizing land cover from globally distributed regions is a difficult task for the participants, who classified the given images less accurately than the MAML-trained model and with a highly variable success rate. This suggests that hand-labeling land cover directly on Sentinel-2 imagery is not optimal when tackling a new land cover classification problem. Labeling only a few images and employing a trained meta-learning model for this task may lead to more accurate and consistent solutions than hand labeling by multiple individuals.
Devis Tuia, Benjamin Alexander Kellenberger, Marc Conrad Russwurm
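For readers unfamiliar with MAML, the sketch below illustrates its inner/outer optimization loop for few-shot classification. It is a minimal, hypothetical example: the linear classifier, patch dimensions, learning rates, and randomly generated tasks are assumptions for illustration only and do not reproduce the Sen12MS experiments or the authors' model.

```python
import torch
import torch.nn.functional as F

def forward(x, w, b):
    # Linear classifier over flattened image patches (illustrative stand-in
    # for a real few-shot backbone).
    return x @ w + b

def maml_step(tasks, w, b, inner_lr=0.01):
    """One meta-update over a batch of few-shot tasks.
    Each task is a tuple (x_support, y_support, x_query, y_query)."""
    meta_loss = 0.0
    for x_s, y_s, x_q, y_q in tasks:
        # Inner loop: adapt the shared parameters to the task's support set.
        support_loss = F.cross_entropy(forward(x_s, w, b), y_s)
        g_w, g_b = torch.autograd.grad(support_loss, (w, b), create_graph=True)
        w_adapt, b_adapt = w - inner_lr * g_w, b - inner_lr * g_b
        # Outer loop: evaluate the adapted parameters on the query set.
        meta_loss = meta_loss + F.cross_entropy(forward(x_q, w_adapt, b_adapt), y_q)
    return meta_loss / len(tasks)

# Hypothetical setup: 5-way tasks on flattened 10-band, 16x16 patches.
n_way, feat_dim = 5, 10 * 16 * 16
w = (0.01 * torch.randn(feat_dim, n_way)).requires_grad_()
b = torch.zeros(n_way, requires_grad=True)
meta_opt = torch.optim.Adam([w, b], lr=1e-3)

# One meta-training step on a single randomly generated placeholder task.
x_s, y_s = torch.randn(25, feat_dim), torch.randint(0, n_way, (25,))
x_q, y_q = torch.randn(75, feat_dim), torch.randint(0, n_way, (75,))
loss = maml_step([(x_s, y_s, x_q, y_q)], w, b)
meta_opt.zero_grad()
loss.backward()
meta_opt.step()
```

The key design point is that the inner-loop gradients are taken with `create_graph=True`, so the outer (meta) loss backpropagates through the adaptation step itself; after meta-training, only a few labeled support images are needed to adapt the model to a new land cover classification task.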