JOURNAL ARTICLE

RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds

Abstract

We study the problem of efficient semantic segmentation for large-scale 3D point clouds. Because they rely on expensive sampling techniques or computationally heavy pre/post-processing steps, most existing approaches can only be trained on and applied to small-scale point clouds. In this paper, we introduce RandLA-Net, an efficient and lightweight neural architecture that directly infers per-point semantics for large-scale point clouds. The key to our approach is the use of random point sampling instead of more complex point selection schemes. Although remarkably computation- and memory-efficient, random sampling can discard key features by chance. To overcome this, we introduce a novel local feature aggregation module that progressively increases the receptive field of each 3D point, thereby effectively preserving geometric details. Extensive experiments show that RandLA-Net can process 1 million points in a single pass, up to 200x faster than existing approaches. Moreover, RandLA-Net clearly surpasses state-of-the-art approaches for semantic segmentation on two large-scale benchmarks, Semantic3D and SemanticKITTI.
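The random point sampling the abstract describes can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function name, the 4:1 decimation ratio, and the use of uniform sampling without replacement are illustrative assumptions.

```python
import numpy as np

def random_sample(points: np.ndarray, num_out: int, seed=None) -> np.ndarray:
    """Uniformly sample num_out points (without replacement) from an
    (N, D) point cloud array, e.g. D=3 for xyz or D=6 for xyz+rgb."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(points.shape[0], size=num_out, replace=False)
    return points[idx]

# A synthetic 1-million-point cloud, decimated 4:1 in one cheap O(N) step.
cloud = np.random.rand(1_000_000, 3).astype(np.float32)
sub = random_sample(cloud, 250_000, seed=0)
print(sub.shape)  # (250000, 3)
```

The appeal of this strategy is that its cost is independent of point density or geometry, unlike farthest-point or inverse-density sampling; the trade-off, as the abstract notes, is that informative points can be dropped by chance, which motivates the aggregation module.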

Keywords:
Point cloud; Semantic segmentation; Random sampling; Computer vision; Pattern recognition; Artificial intelligence; Data mining

Metrics

Cited By: 1870
FWCI (Field-Weighted Citation Impact): 198.86
References: 110
Citation Normalized Percentile: 1.00 (top 1%)

Topics

3D Shape Modeling and Analysis
Physical Sciences → Engineering → Computational Mechanics
Robotics and Sensor-Based Localization
Physical Sciences → Engineering → Aerospace Engineering
Advanced Neural Network Applications
Physical Sciences → Computer Science → Computer Vision and Pattern Recognition