Improving Land Cover Segmentation Across Satellites Using Domain Adaptation
Land use and land cover mapping is essential to many fields of study, such as forestry, agriculture, and urban management. Earth observation satellites facilitate and accelerate the mapping process, and deep learning methods have proven excellent at automating it via semantic image segmentation. However, because deep neural networks require large amounts of labeled data, it is difficult to exploit the full potential of satellite imagery. Additionally, land cover tends to differ in appearance from one region to another; therefore, labeled data from one location does not necessarily help map others. Furthermore, satellite images come in various multispectral configurations, ranging from three RGB bands to more than 12 bands. In this study, we aim to address these problems using domain adaptation (DA). We applied a well-performing DA approach to the DeepGlobe land cover dataset, as well as to datasets we built from RGB images acquired by the Sentinel-2, WorldView-2, and Pleiades-1B satellites, with CORINE Land Cover as ground truth (GT) labels. The experiments revealed significant improvements over the results obtained without DA; in some cases, an improvement of over 20% in mean intersection over union (mIoU) was obtained. Occasionally, our model even corrects errors in the GT labels.
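The abstract reports results in mean intersection over union, the standard segmentation metric. As a point of reference, a minimal sketch of how per-class IoU and its mean are computed over a pair of label maps is shown below; this is an illustrative implementation, not the paper's exact evaluation code, and the handling of classes absent from both maps is an assumption.

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection over union (mIoU) between two integer
    label maps of the same shape. Illustrative sketch: classes
    absent from both maps are skipped, which may differ from the
    paper's exact evaluation protocol."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 example with two classes:
pred = np.array([[0, 0], [1, 1]])
gt = np.array([[0, 1], [1, 1]])
score = mean_iou(pred, gt, num_classes=2)
```

In the toy example, class 0 scores 1/2 and class 1 scores 2/3, so the mean is 7/12; a 20% gain in this metric therefore reflects a substantial reduction in misclassified pixels across classes.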