JOURNAL ARTICLE

Self-Supervised Point Cloud Understanding via Mask Transformer and Contrastive Learning

Di Wang, Zhi-Xin Yang

Year: 2022 · Journal: IEEE Robotics and Automation Letters · Vol: 8 (1) · Pages: 184-191 · Publisher: Institute of Electrical and Electronics Engineers

Abstract

Self-supervised point cloud understanding pre-trains a point cloud learning network on a large dataset, which helps boost fine-tuning performance on smaller datasets in downstream tasks. Motivated by the goal of designing an efficient self-supervised pre-training strategy that captures useful and discriminative representations of 3D point clouds, we propose ContrastMPCT, a self-reconstruction scheme built on the contrastive learning principle. Specifically, two contrastive loss functions are designed for 3D point clouds to maximize the dependence between the input tokens and output tokens of the encoder and to accelerate the convergence of the model. Extensive experiments show that the ContrastMPCT pre-training strategy effectively improves fine-tuning performance on downstream tasks, including object classification and part segmentation. Moreover, compared with existing CNN-based and Transformer-based works, the superior results indicate the efficacy of the proposed method.
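The abstract does not spell out the two contrastive losses, but a common way to "maximize the dependence between input tokens and output tokens of an encoder" is an InfoNCE-style objective, where matching token pairs are positives and all other pairings in the batch are negatives. The sketch below is a generic illustration under that assumption, not the paper's actual loss; the function name, temperature value, and token shapes are all hypothetical.

```python
import numpy as np

def info_nce_loss(input_tokens, output_tokens, temperature=0.07):
    """InfoNCE-style contrastive loss between corresponding token sets.

    Rows with the same index in the two arrays are treated as positive
    pairs; every other pairing serves as a negative. This is a generic
    sketch, not the exact losses of ContrastMPCT.
    """
    # L2-normalize so the dot product is cosine similarity.
    a = input_tokens / np.linalg.norm(input_tokens, axis=1, keepdims=True)
    b = output_tokens / np.linalg.norm(output_tokens, axis=1, keepdims=True)
    logits = a @ b.T / temperature                 # (N, N) similarity matrix
    # Cross-entropy with the diagonal entry as the target class per row.
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy usage: aligned token sets should score a much lower loss than
# unrelated ones, since each positive pair dominates its row.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((16, 32))
loss_aligned = info_nce_loss(tokens, tokens)
loss_random = info_nce_loss(tokens, rng.standard_normal((16, 32)))
```

Minimizing such a loss pulls each encoder output token toward its corresponding input token while pushing it away from the others, which is one way to encourage the input-output dependence the abstract describes.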

Keywords:
Computer science, Point cloud, Transformer, Segmentation, Discriminative model, Encoder, Artificial intelligence, Machine learning, Pattern recognition (psychology)

Metrics

Cited By: 8
FWCI (Field Weighted Citation Impact): 2.03
Refs: 73
Citation Normalized Percentile: 0.77


Topics

3D Shape Modeling and Analysis
Physical Sciences →  Engineering →  Computational Mechanics
3D Surveying and Cultural Heritage
Physical Sciences →  Earth and Planetary Sciences →  Geology
Optical measurement and interference techniques
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition