With the development of hyperspectral sensors, an increasing amount of hyperspectral data has become accessible, and the task of classifying land cover categories has gained significant attention. Existing classification methods typically extract features from either the pixel or the superpixel perspective. However, single-scale feature extraction cannot consider local and global land-cover features simultaneously, leading to suboptimal classification results. To address this issue, this paper proposes a parallel graph attention network model based on pixel and superpixel feature fusion (SSPGAT) for hyperspectral image classification, which leverages the fusion of pixel-level and superpixel-level features. The proposed approach first employs spectral convolutional layers to reduce the redundant spectral dimensionality. It then uses graph attention networks (GATs) to extract local and global land-cover features from the pixel and superpixel perspectives, respectively. Finally, a fully connected network classifies the fused features from both branches. Experimental results on two different datasets demonstrate the effectiveness of the proposed approach.
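As a rough illustration of the attention mechanism the abstract builds on, the NumPy sketch below implements a single masked graph attention layer in the style of the original GAT, plus a simple concatenation-based fusion of two branch outputs. All function names (`gat_layer`, `fuse_branches`) and shapes are illustrative assumptions, not the authors' SSPGAT implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gat_layer(H, A, W, a, leaky=0.2):
    """One graph attention layer (GAT-style; illustrative sketch).

    H: (N, F) node features; A: (N, N) adjacency (nonzero = edge);
    W: (F, Fp) projection; a: (2*Fp,) attention vector.
    Returns (N, Fp) aggregated node features.
    """
    Z = H @ W                          # project node features
    Fp = Z.shape[1]
    # attention logits e_ij = LeakyReLU(a^T [z_i || z_j]),
    # decomposed into a source term and a target term
    e = (Z @ a[:Fp])[:, None] + (Z @ a[Fp:])[None, :]
    e = np.where(e > 0, e, leaky * e)  # LeakyReLU
    e = np.where(A > 0, e, -1e9)       # mask non-neighbors
    alpha = softmax(e, axis=1)         # normalize over each node's neighbors
    return alpha @ Z                   # attention-weighted aggregation

def fuse_branches(pixel_feats, superpixel_feats):
    """Concatenate pixel-level (local) and superpixel-level (global)
    branch outputs before a final classifier (illustrative)."""
    return np.concatenate([pixel_feats, superpixel_feats], axis=1)
```

In a two-branch model of this kind, `gat_layer` would be applied once over a pixel-neighborhood graph and once over a superpixel-adjacency graph, and `fuse_branches` would merge the results before the fully connected classifier.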
Qichao Liu, Liang Xiao, Jingxiang Yang, Zhihui Wei
Zhi Gong, Lei Tong, Jun Zhou, Bin Qian, Lijuan Duan, Chuangbai Xiao
Xueqin Wang, Wenhui Guo, Xinru Fan, Yanjiang Wang
Uzair Aslam Bhatti, Mengxing Huang, Harold Neira-Molina, Shah Marjan, Mehmood Baryalai, Hao Tang, Guilu Wu, Sibghat Ullah Bazai
Pan Qikun, Xiaoxi Xu, Chang Qi, Pan Chundi, Guo Cao