JOURNAL ARTICLE

A Simple Masked Autoencoder Paradigm for Point Cloud

Abstract

Unsupervised pre-training is a promising approach to the problem of laborious manual annotation and has attracted great attention for 3D point clouds. Recent works focus on corruption-reconstruction methods, which first corrupt the input data and then learn to reconstruct the uncorrupted data, but they still lack simplicity and generality. To address this, we simplify traditional unsupervised methods for point clouds and propose MPE, a group-Masked Point cloud autoEncoder paradigm that is simple to implement and applicable to various model architectures. Specifically: 1) MPE adopts a random group mask to corrupt the input point cloud data for reconstruction learning. 2) Various model architectures, such as CNN, EdgeConv, Attention, or hybrids of them, can be pre-trained under this strategy. 3) A lightweight prediction head acts as the decoder and performs better than heavier ones. The pre-trained models serve as a strong initialization for downstream tasks such as classification and segmentation. Extensive experiments demonstrate that the proposed method effectively improves the performance of various models. Code is available at https://github.com/zixiangro/MPE.
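The random group masking the abstract describes can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (see the linked repository for that); it assumes a common simplification in which groups are formed by picking random centers and taking each center's k nearest neighbors, and a fixed ratio of whole groups is then hidden from the encoder. The function name `group_mask` and all parameter defaults are illustrative choices, not values from the paper.

```python
import numpy as np

def group_mask(points, num_groups=64, group_size=32, mask_ratio=0.6, seed=None):
    """Corrupt a point cloud by masking whole local groups.

    points: (N, 3) array of xyz coordinates.
    Returns (visible, masked, mask): visible/masked group tensors of shape
    (G_vis, k, 3) and (G_msk, k, 3), and a boolean mask over groups.
    """
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    # Pick random group centers (real pipelines often use farthest point sampling).
    centers = points[rng.choice(n, num_groups, replace=False)]      # (G, 3)
    # For each center, gather its k nearest points as one group.
    dists = np.linalg.norm(centers[:, None, :] - points[None, :, :], axis=-1)
    idx = np.argsort(dists, axis=1)[:, :group_size]                 # (G, k)
    groups = points[idx]                                            # (G, k, 3)
    # Randomly mask a fixed ratio of whole groups.
    num_masked = int(mask_ratio * num_groups)
    mask = np.zeros(num_groups, dtype=bool)
    mask[rng.choice(num_groups, num_masked, replace=False)] = True
    return groups[~mask], groups[mask], mask

pts = np.random.rand(1024, 3)
visible, masked, mask = group_mask(pts)
```

In a masked-autoencoder setup, only `visible` would be fed to the encoder, while `masked` serves as the reconstruction target for the prediction head.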

Keywords:
Point cloud, Masked autoencoder, Unsupervised pre-training, Deep learning, Initialization

Metrics

- Cited By: 0
- FWCI (Field Weighted Citation Impact): 0.00
- References: 30
- Citation Normalized Percentile: 0.12

Topics

3D Shape Modeling and Analysis
Physical Sciences →  Engineering →  Computational Mechanics
3D Surveying and Cultural Heritage
Physical Sciences →  Earth and Planetary Sciences →  Geology
Image Processing and 3D Reconstruction
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition