JOURNAL ARTICLE

DLAFNet: A Direct Fusion Method of 2D Aerial Image and 3D LiDAR Point Cloud for Semantic Segmentation

Abstract

Semantic segmentation of high-resolution remote sensing images (RSIs) is developing rapidly. Multispectral images provide rich spectral information for semantic segmentation, while 3D LiDAR point clouds provide depth information; fusing the two can therefore improve segmentation accuracy. In this paper, we propose the Direct LiDAR-Aerial Fusion Network (DLAFNet), which uses RSIs and LiDAR point clouds directly for semantic segmentation. In particular, because the sparse features extracted by the KPConv branch are less essential than the features from RSIs, we design a LiDAR-Assisted Attention Module (L-AAM). Experiments on the modified GRSS18 dataset show that our method is effective and outperforms both its individual components and other methods.
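The abstract describes the L-AAM as letting sparse LiDAR features assist, rather than dominate, the dense image features. The page does not include the module's actual equations, so the following NumPy sketch is only a hypothetical illustration of that idea: LiDAR features projected onto the image grid produce a per-pixel attention gate that modulates the image features, which remain the primary signal. All names (`laam_fuse`, the weight `w`) and the residual-style gating are assumptions, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def laam_fuse(img_feat, lidar_feat, w):
    """Hypothetical L-AAM-style fusion: LiDAR features gate image features.

    img_feat:   (H, W, C)  dense aerial-image features
    lidar_feat: (H, W, D)  sparse LiDAR features projected onto the image grid
    w:          (D, 1)     learned projection to a single attention channel
    """
    # Per-pixel attention score from the LiDAR branch, squashed to (0, 1).
    attn = sigmoid(lidar_feat @ w)          # shape (H, W, 1)
    # Residual-style modulation: LiDAR assists but never zeroes out
    # the image features, matching the "assisted attention" intuition.
    return img_feat * (1.0 + attn)

rng = np.random.default_rng(0)
img = rng.normal(size=(4, 4, 8))    # toy image-branch features
lid = rng.normal(size=(4, 4, 3))    # toy LiDAR-branch features
w = rng.normal(size=(3, 1))
fused = laam_fuse(img, lid, w)
print(fused.shape)                  # (4, 4, 8): same shape as the image features
```

Because the gate is `1 + sigmoid(...)`, the fused features keep the sign of the image features and are never attenuated below them, which is one simple way to encode "LiDAR as an assistant" rather than an equal partner.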

Keywords:
LiDAR; point cloud segmentation; multispectral image; remote sensing; computer science; artificial intelligence; point (geometry); image segmentation; computer vision; fusion; geology; mathematics

Metrics

Cited by: 1
FWCI (Field-Weighted Citation Impact): 0.16
References: 12
Citation Normalized Percentile: 0.44

Topics

Remote Sensing and LiDAR Applications (Physical Sciences → Environmental Science → Environmental Engineering)
Advanced Neural Network Applications (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
3D Surveying and Cultural Heritage (Physical Sciences → Earth and Planetary Sciences → Geology)