JOURNAL ARTICLE

3DMAX-Net: A Multi-Scale Spatial Contextual Network for 3D Point Cloud Semantic Segmentation

Abstract

Semantic segmentation of 3D scenes is a fundamental problem in 3D computer vision. In this paper, we propose a deep neural network for semantic segmentation of raw point clouds. A multi-scale feature learning block is first introduced to obtain informative contextual features from 3D point clouds. A global and local feature aggregation block is then added to improve the feature learning ability of the network. Building on these components, a powerful architecture named 3DMAX-Net is presented for semantic segmentation of raw 3D point clouds. Experiments were conducted on the Stanford Large-Scale 3D Indoor Spaces (S3DIS) dataset using geometry information only, and the results clearly demonstrate the superiority of the proposed network.
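The abstract describes two ideas: gathering contextual features at multiple scales around each point, and combining per-point (local) features with a scene-level (global) feature. The paper's blocks are learned neural layers; the sketch below is only an illustrative, non-learned NumPy analogue of that multi-scale local + global aggregation pattern. The function name, radii, and hand-crafted features are all hypothetical choices, not taken from the paper.

```python
import numpy as np

def multiscale_features(points, radii=(0.1, 0.2, 0.4)):
    """Illustrative sketch (not the paper's method): for each point, compute a
    simple local feature at several neighborhood radii, then append a
    max-pooled global descriptor, mirroring multi-scale local + global
    feature aggregation for point clouds."""
    n = points.shape[0]
    # Pairwise Euclidean distances between all points, shape (n, n).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    per_scale = []
    for r in radii:
        mask = d <= r                         # neighbors within radius r (self included)
        counts = mask.sum(axis=1, keepdims=True)
        # Local feature at this scale: offset to the neighborhood centroid.
        centroids = (mask[:, :, None] * points[None, :, :]).sum(axis=1) / counts
        per_scale.append(centroids - points)
    local = np.concatenate(per_scale, axis=1)        # (n, 3 * len(radii))
    # Crude global descriptor: coordinate-wise max over the whole cloud,
    # broadcast back to every point.
    global_feat = np.repeat(points.max(axis=0, keepdims=True), n, axis=0)
    return np.concatenate([local, global_feat], axis=1)
```

With three radii, each point ends up with a 9-dimensional local part plus a 3-dimensional global part; a learned network would replace the centroid statistics with MLP features and the max with a learned pooling, but the concatenation structure is the same.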

Keywords:
Point cloud; Semantic segmentation; Deep learning; Computer vision; Pattern recognition; Multi-scale features

Metrics

Cited by: 25
FWCI (Field-Weighted Citation Impact): 3.39
References: 31
Citation Normalized Percentile: 0.91 (in top 10%)

Topics

3D Shape Modeling and Analysis (Physical Sciences → Engineering → Computational Mechanics)
Advanced Vision and Imaging (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Remote Sensing and LiDAR Applications (Physical Sciences → Environmental Science → Environmental Engineering)