JOURNAL ARTICLE

Denoising Self-Distillation Masked Autoencoder for Self-Supervised Learning

Jiashu Xu, Sergii Stirenko

Year: 2023  Journal: International Journal of Image, Graphics and Signal Processing  Vol: 15(5)  Pages: 29-38

Abstract

Self-supervised learning has emerged as an effective paradigm for learning universal feature representations from vast amounts of unlabeled data. Its remarkable success in recent years has been demonstrated in both natural language processing and computer vision. As a cornerstone of large-scale model development, self-supervised learning has propelled machine intelligence to new heights. In this paper, we draw inspiration from Siamese networks and Masked Autoencoders to propose a denoising self-distillation Masked Autoencoder model for self-supervised learning. The model is composed of a Masked Autoencoder and a teacher network, which work together to restore input image patches corrupted by random Gaussian noise. Our objective function combines a pixel-level loss with a high-level feature loss, allowing the model to extract complex semantic features. We evaluate the proposed method on three benchmark datasets, namely CIFAR-10, CIFAR-100, and STL-10, and compare it with classical self-supervised learning techniques. The experimental results show that our pre-trained model achieves slightly superior fine-tuning performance on STL-10, surpassing MAE by 0.1%. Overall, our method yields results comparable to other masked image modeling methods. The rationale behind the designed architecture is validated through ablation experiments. The proposed method can serve as a complementary technique within the existing family of self-supervised masked image modeling approaches, with the potential to be applied to larger datasets.
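The abstract describes two concrete mechanisms: masked input patches corrupted by random Gaussian noise, and an objective that sums a pixel-level reconstruction loss with a high-level feature loss against a teacher network. The sketch below illustrates both in numpy. It is a minimal illustration, not the paper's implementation: the noise level `sigma`, the loss weight `lam`, the use of plain MSE for the feature term, and the toy patch/feature shapes are all assumptions, since the abstract does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt_patches(patches, mask, sigma=0.25, rng=rng):
    """Add Gaussian noise (std `sigma`, an assumed value) to the masked
    subset of image patches; unmasked patches are left untouched."""
    noisy = patches.copy()
    noisy[mask] = patches[mask] + rng.normal(0.0, sigma, size=patches[mask].shape)
    return noisy

def combined_loss(recon, target, student_feat, teacher_feat, lam=1.0):
    """Pixel-level reconstruction MSE plus a high-level feature-matching
    MSE between student and teacher outputs; `lam` weights the feature
    term (the paper's actual weighting is not given in the abstract)."""
    pixel_loss = np.mean((recon - target) ** 2)
    feature_loss = np.mean((student_feat - teacher_feat) ** 2)
    return pixel_loss + lam * feature_loss

# Toy example: 8 patches of 16x16x3 pixels, half of them masked and noised.
patches = rng.random((8, 16, 16, 3)).astype(np.float32)
mask = np.zeros(8, dtype=bool)
mask[:4] = True
noisy = corrupt_patches(patches, mask)

# Hypothetical 64-dim features; the teacher's are close to the student's.
student_feat = rng.normal(size=(8, 64))
teacher_feat = student_feat + 0.01 * rng.normal(size=(8, 64))
loss = combined_loss(patches, patches, student_feat, teacher_feat)
```

In the actual model the reconstruction would come from the Masked Autoencoder decoder and the teacher features from a momentum-updated network; here both are stubbed with arrays purely to show how the two loss terms combine.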

Keywords:
Computer science, Artificial intelligence, Autoencoder, Benchmark, Machine learning, Pattern recognition, Noise reduction, Supervised learning, Deep learning, Feature learning, Unsupervised learning, Artificial neural network

Metrics

Cited By: 1
FWCI (Field-Weighted Citation Impact): 0.26
References: 53
Citation Normalized Percentile: 0.58
Topics

Domain Adaptation and Few-Shot Learning
Physical Sciences →  Computer Science →  Artificial Intelligence
Cancer-related molecular mechanisms research
Life Sciences →  Biochemistry, Genetics and Molecular Biology →  Cancer Research
Advanced Neural Network Applications
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition