In this paper, we address the problem of object detection in remote sensing images. Previous works have developed numerous deep CNN-based methods for this task and report remarkable detection performance and efficiency. However, current CNN-based methods mostly require a large number of annotated samples to train deep neural networks and tend to generalize poorly to unseen object categories. In this paper, we introduce a few-shot learning-based method for object detection in remote sensing images, where only a few annotated samples are provided for the unseen object categories. More specifically, our model contains three main components: a meta feature extractor that learns to extract feature representations from input images, a reweighting module that learns to adaptively assign different weights to the feature representations extracted from the support images, and a bounding box prediction module that carries out object detection on the reweighted feature maps. We build our few-shot object detection model upon the YOLOv3 architecture and develop a multi-scale object detection framework. Experiments on two benchmark datasets demonstrate that, with only a few annotated samples, our model still achieves satisfying detection performance on remote sensing images and significantly outperforms well-established baseline models.
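The reweighting step described above can be sketched as a channel-wise modulation of the query feature map by class-specific vectors derived from the support images. The following is a minimal NumPy illustration of that idea; the function name, shapes, and toy data are our own assumptions for exposition, not the paper's implementation:

```python
import numpy as np

def reweight_features(meta_features, class_vectors):
    """Channel-wise reweighting of query meta features.

    meta_features: (C, H, W) feature map from the meta feature extractor.
    class_vectors: (N, C) one reweighting vector per support class,
                   produced by the reweighting module.
    Returns: (N, C, H, W) one reweighted feature map per class,
             each fed to the bounding box prediction module.
    """
    # Broadcast each class vector over the spatial dimensions:
    # (N, C, 1, 1) * (1, C, H, W) -> (N, C, H, W)
    return class_vectors[:, :, None, None] * meta_features[None, :, :, :]

# Toy example: 4 channels, an 8x8 map, and 3 support classes.
feats = np.random.rand(4, 8, 8)
vecs = np.random.rand(3, 4)
out = reweight_features(feats, vecs)
print(out.shape)  # (3, 4, 8, 8)
```

In the multi-scale setting, this reweighting would be applied independently at each of the detection scales before prediction.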