Qiongdan Huang, Liang Li, M. Zhao, Jiapeng Wang, Seokyoon Kang
Effective discriminative spectral-spatial feature representation is crucial for hyperspectral image classification (HSIC). Many current methods extract spectral and spatial information directly from spectral-spatial 3D patches without considering the correlation between features, resulting in a high number of misclassifications at the boundaries of land-cover classes. This article proposes a spectral-spatial two-branch feature fusion network (TFFN). The spatial branch uses distance-similarity metrics to capture the spatial relationships between central and neighboring pixels, and multiscale convolutional modules to expand the receptive field, capturing features and contextual information at different levels and yielding more robust spatial representations. The spectral branch uses a bidirectional long short-term memory (Bi-LSTM) network and a linear attention mechanism to capture spectral features. Finally, the fused feature information from both branches serves as the basis for classification, enabling high-precision categorization. Experimental results on four public datasets demonstrate that the overall classification accuracy of the TFFN model exceeds 97%, notably on the Indian Pines dataset, which has an imbalanced distribution of ground objects.
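The distance-similarity weighting described for the spatial branch can be illustrated with a minimal sketch. The Gaussian kernel form and the function name below are assumptions for illustration, not the paper's exact formulation: each neighboring pixel is weighted by its spectral distance to the central pixel, so spectrally similar neighbors contribute more to the spatial feature.

```python
import math

def distance_similarity_weights(center, neighbors, sigma=1.0):
    """Weight neighboring pixels by spectral similarity to the center pixel.

    center: band values of the central pixel (list of floats).
    neighbors: list of neighboring pixels, each a list of band values.
    Returns weights normalized to sum to 1 (hypothetical Gaussian form).
    """
    # Euclidean distance in spectral space between center and each neighbor.
    dists = [math.sqrt(sum((c - n) ** 2 for c, n in zip(center, nb)))
             for nb in neighbors]
    # Gaussian similarity: identical spectra -> 1, dissimilar -> near 0.
    sims = [math.exp(-(d ** 2) / (2 * sigma ** 2)) for d in dists]
    total = sum(sims)
    return [s / total for s in sims]

# An identical neighbor receives a larger weight than a dissimilar one.
w = distance_similarity_weights([0.5, 0.5], [[0.5, 0.5], [0.9, 0.1]])
```

In a full model, such weights would modulate the patch before the multiscale convolutions, suppressing neighbors that belong to a different land-cover class at a boundary.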