Onur Tasar, S. L. Happy, Yuliya Tarabalka, Pierre Alliez
Due to various reasons, such as atmospheric effects and differences in acquisition, there often exists a large difference between the spectral bands of satellite images collected from different geographic locations. The large shift between the spectral distributions of training and test data causes state-of-the-art supervised learning approaches to output unsatisfactory maps. We present a novel semantic segmentation framework that is robust to such a shift. The key component of the proposed framework is Color Mapping Generative Adversarial Networks (ColorMapGAN), which can generate fake training images that are semantically exactly the same as the training images, but whose spectral distribution is similar to the distribution of the test images. We then use the fake images and the ground truth for the training images to fine-tune the already trained classifier. Contrary to existing generative adversarial networks (GANs), the generator in ColorMapGAN does not have any convolutional or pooling layers. It learns to transform the colors of the training data to the colors of the test data by performing only one element-wise matrix multiplication and one matrix addition operation. Thanks to the architecturally simple but powerful design of ColorMapGAN, the proposed framework outperforms existing approaches by a large margin in terms of both accuracy and computational complexity.
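The generator described in the abstract reduces to a single element-wise multiplication followed by an addition. The sketch below illustrates that forward transform in NumPy; the parameter names `W` and `b` and the per-channel shapes are assumptions for illustration, not details taken from the paper (in practice these parameters would be learned adversarially).

```python
import numpy as np

def color_map_generator(image, W, b):
    """Hypothetical sketch of the ColorMapGAN-style generator forward pass:
    one element-wise multiplication and one addition, mapping source-domain
    colors toward target-domain colors. No convolution or pooling involved."""
    return W * image + b  # element-wise multiply, then add (broadcast per channel)

rng = np.random.default_rng(0)
image = rng.random((4, 4, 3))   # toy 4x4 RGB training patch (assumed layout H x W x C)
W = np.array([1.1, 0.9, 1.0])   # assumed per-channel scale parameters
b = np.array([0.05, 0.0, -0.02])  # assumed per-channel shift parameters

fake = color_map_generator(image, W, b)
print(fake.shape)  # same spatial layout as the input patch
```

Because the transform touches only colors, the semantics (and hence the ground-truth labels) of the training images are preserved, which is what allows the fake images to be paired with the original annotations for fine-tuning.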