JOURNAL ARTICLE

LQCANet: Learnable-Query-Guided Multi-Scale Fusion Network Based on Cross-Attention for Radar Semantic Segmentation

Long Zhuang, Tiezhen Jiang, Hao Jiang, Anqi Wang, Zhixiang Huang

Year: 2023 · Journal: IEEE Transactions on Intelligent Vehicles · Vol: 9 (2) · Pages: 3330-3344 · Publisher: Institute of Electrical and Electronics Engineers

Abstract

Millimeter-wave radar semantic segmentation has proven successful in autonomous driving environment perception tasks. However, relying solely on range-angle (RA) images excludes Doppler information, which is crucial for dynamic target recognition. Although various fusion methods have been proposed, they consistently suffer from interference with the RA information and poor fusion quality. We introduce a novel learnable-query-guided multi-scale fusion network (LQCANet) for radar semantic segmentation, which leverages learnable queries for effective multi-scale cross-attention fusion. The cross-attention fusion (CAF) module initializes its queries randomly and lets them interact with range-Doppler (RD) and angle-Doppler (AD) information through multi-layer cross-attention. The original RA features are then integrated with the generated queries to achieve multi-scale feature fusion. This approach prevents interference with the RA information and ensures efficient fusion. Additionally, to enhance feature extraction, this study introduces the pointwise multi-head self-attention down (PMD) module, which integrates a convolutional neural network (CNN) and a Transformer to extract both local and global features. Furthermore, pointwise convolution serves as an implicit positional coding method, addressing the limitation that explicit positional coding is not applicable to millimeter-wave radar images. Experiments demonstrate the superior performance of the proposed LQCANet on the CARRADA dataset. Compared with the state-of-the-art (SOTA) fusion network TMVA-Net, LQCANet improves mean intersection over union (mIoU) by 2.7 points and mean Dice similarity coefficient (mDice) by 3.4 points while requiring only 27% of TMVA-Net's computational cost (27.6 GFLOPs). LQCANet achieves a superior trade-off between detection accuracy and speed, making it more suitable for environment perception tasks. Our code is available at https://github.com/Zhuanglong2/LQCANet.
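To make the two modules described in the abstract concrete, the following is a minimal PyTorch sketch of a learnable-query cross-attention fusion step and a pointwise self-attention downsampling step. It reflects only what the abstract states; all class names, tensor shapes, layer counts, and hyperparameters are illustrative assumptions, not the authors' implementation (the real code is in the linked repository).

```python
# Minimal sketch of the CAF and PMD modules as described in the abstract.
# Shapes, layer counts, and names are assumptions for illustration; the
# authors' implementation lives at https://github.com/Zhuanglong2/LQCANet.
import torch
import torch.nn as nn


class CAF(nn.Module):
    """Cross-attention fusion: randomly initialized learnable queries gather
    range-Doppler/angle-Doppler context, then the RA features attend to the
    generated queries, so the RA stream itself is never overwritten."""

    def __init__(self, dim=64, num_queries=16, num_heads=4, num_layers=2):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        self.layers = nn.ModuleList(
            [nn.MultiheadAttention(dim, num_heads, batch_first=True)
             for _ in range(num_layers)]
        )
        self.fuse = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, ra, rd, ad):
        # ra, rd, ad: (B, N, dim) flattened feature maps of the three views.
        q = self.queries.unsqueeze(0).expand(ra.size(0), -1, -1)
        kv = torch.cat([rd, ad], dim=1)      # Doppler views are keys/values only
        for attn in self.layers:
            q = q + attn(q, kv, kv)[0]       # queries absorb RD/AD information
        return ra + self.fuse(ra, q, q)[0]   # RA fuses with generated queries


class PMD(nn.Module):
    """Pointwise multi-head self-attention down module: a 1x1 (pointwise)
    convolution acts as implicit positional coding, self-attention supplies
    global context, and a strided convolution adds locality and downsamples."""

    def __init__(self, dim=64, num_heads=4):
        super().__init__()
        self.pos = nn.Conv2d(dim, dim, kernel_size=1)   # implicit positions
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.down = nn.Conv2d(dim, dim * 2, kernel_size=3, stride=2, padding=1)

    def forward(self, x):
        # x: (B, dim, H, W)
        x = x + self.pos(x)
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)   # (B, H*W, dim) token sequence
        seq = seq + self.attn(seq, seq, seq)[0]
        return self.down(seq.transpose(1, 2).reshape(b, c, h, w))


# Example: fuse three hypothetical 32x32 radar views at 64 channels.
ra = rd = ad = torch.randn(2, 32 * 32, 64)
fused = CAF()(ra, rd, ad)                                   # (2, 1024, 64)
out = PMD()(fused.transpose(1, 2).reshape(2, 64, 32, 32))   # (2, 128, 16, 16)
```

Confining the RD and AD features to the key/value side of the attention is what keeps them from interfering with the RA stream, which is the failure mode of earlier fusion methods that the abstract highlights.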

Keywords:
Computer science, Pointwise, Artificial intelligence, Convolutional neural network, Radar, Segmentation, Fusion, Feature, Coding, Computer vision, Pattern recognition, Telecommunications, Mathematics

Metrics

Cited by: 7
FWCI (Field-Weighted Citation Impact): 3.64
References: 75
Citation Normalized Percentile: 0.92 (in top 10%)

Topics

Advanced SAR Imaging Techniques
Physical Sciences → Engineering → Aerospace Engineering
Geophysical Methods and Applications
Physical Sciences → Engineering → Ocean Engineering
Advanced Neural Network Applications
Physical Sciences → Computer Science → Computer Vision and Pattern Recognition

Related Documents

JOURNAL ARTICLE

TAG-fusion: Two-stage attention guided multi-modal fusion network for semantic segmentation

Zhizhou Zhang, Wenwu Wang, Lei Zhu, Zhibin Tang

Journal: Digital Signal Processing · Year: 2024 · Vol: 156 · Pages: 104807
JOURNAL ARTICLE

Lightweight multi-scale attention-guided network for real-time semantic segmentation

Xuegang Hu, Yuanjing Liu

Journal: Image and Vision Computing · Year: 2023 · Vol: 139 · Pages: 104823
CONFERENCE PAPER

Dual Attention Based Multi-scale Feature Fusion Network for Indoor RGBD Semantic Segmentation

Zhongwei Hua, Lizhe Qi, Daming Du, Wenxuan Jiang, Yunquan Sun

Proceedings: 2022 26th International Conference on Pattern Recognition (ICPR) · Year: 2022 · Pages: 3639-3644
JOURNAL ARTICLE

Attention Guided Multi Scale Feature Fusion Network for Automatic Prostate Segmentation

Yuchun Li, Mengxing Huang, Yu Zhang, Zhiming Bai

Journal: Computers, Materials & Continua · Year: 2024 · Vol: 78 (2) · Pages: 1649-1668