Abstract

Seismic analysis generates large volumes of image data, but annotating all of it is challenging and time-consuming. Self-supervised learning therefore makes it possible to pre-train models on the vast unlabelled data and reserve the labelled data for downstream tasks. To take advantage of this, this work explores the Masked Autoencoders self-supervised method, comparing the efficacy of two Vision Transformer (ViT) architectures, ViT-Small and ViT-Large. We investigate model performance by fine-tuning on subsets of labelled seismic facies data, using a Segmentation Transformer for segmentation. We also compare segmentation results from two differently pre-trained ViTs: one pre-trained with supervision on ImageNet and one pre-trained self-supervised on a seismic dataset. ViT-Small and ViT-Large exhibit similar metric values, but ViT-Small has a shorter training time. The ViT pre-trained on the seismic dataset achieves superior performance across the different percentages of labelled data, especially when labels are scarce, indicating that seismic pre-training generalizes better to seismic segmentation than ImageNet pre-training and demonstrating the benefits of domain-specific pre-training for seismic data analysis.
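
The abstract describes masked-autoencoder (MAE) pre-training of a ViT on unlabelled seismic images, followed by supervised fine-tuning for facies segmentation. Below is a minimal sketch of that pre-training idea, not the authors' implementation: the TinyMAE name, the patch size, the 0.75 masking ratio, and the shallow encoder/decoder depths are all illustrative assumptions, with a generic transformer encoder standing in for ViT-Small.

```python
# Sketch of MAE-style pre-training on unlabelled seismic tiles (PyTorch).
# All hyperparameters here are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class TinyMAE(nn.Module):
    def __init__(self, img_size=224, patch=16, dim=384, mask_ratio=0.75):
        super().__init__()
        self.patch, self.mask_ratio = patch, mask_ratio
        self.num_patches = (img_size // patch) ** 2
        self.embed = nn.Linear(patch * patch, dim)              # patchify + project (1 channel)
        self.pos = nn.Parameter(torch.zeros(1, self.num_patches, dim))
        self.encoder = nn.TransformerEncoder(                   # stand-in for ViT-Small
            nn.TransformerEncoderLayer(dim, nhead=6, batch_first=True), num_layers=4)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.decoder = nn.TransformerEncoder(                   # lightweight decoder, discarded later
            nn.TransformerEncoderLayer(dim, nhead=6, batch_first=True), num_layers=2)
        self.head = nn.Linear(dim, patch * patch)               # reconstruct raw pixels

    def patchify(self, x):                                      # (B,1,H,W) -> (B,N,p*p)
        B, _, H, W = x.shape
        p = self.patch
        x = x.unfold(2, p, p).unfold(3, p, p)                   # (B,1,H/p,W/p,p,p)
        return x.reshape(B, -1, p * p)

    def forward(self, x):
        patches = self.patchify(x)
        tokens = self.embed(patches) + self.pos
        B, N, D = tokens.shape
        keep = int(N * (1 - self.mask_ratio))                   # visible tokens per image
        perm = torch.rand(B, N, device=x.device).argsort(dim=1) # random masking order
        vis_idx = perm[:, :keep].unsqueeze(-1).expand(-1, -1, D)
        encoded = self.encoder(torch.gather(tokens, 1, vis_idx))
        # Put encoded tokens back in place; masked slots keep the mask token.
        full = self.mask_token.repeat(B, N, 1).scatter(1, vis_idx, encoded)
        pred = self.head(self.decoder(full + self.pos))
        # MSE computed only on the masked patches, as in MAE.
        mask = torch.zeros(B, N, device=x.device).scatter(1, perm[:, keep:], 1.0)
        loss = ((pred - patches) ** 2).mean(-1)
        return (loss * mask).sum() / mask.sum()

# One pre-training step on a batch of (stand-in) unlabelled seismic crops.
model = TinyMAE()
loss = model(torch.randn(4, 1, 224, 224))
loss.backward()
```

After pre-training, the decoder is discarded and the encoder serves as the backbone that the Segmentation Transformer fine-tunes on the labelled facies subsets.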

Keywords:
Segmentation, Computer science, Transformer, Artificial intelligence, Supervised learning, Machine learning, Labeled data, Pattern recognition, Metric, Semi-supervised learning, Performance metric, Data mining, Artificial neural network, Engineering

Metrics

Cited By: 1
FWCI (Field-Weighted Citation Impact): 1.51
Refs: 0
Citation Normalized Percentile: 0.65

Topics

Seismic Imaging and Inversion Techniques (Physical Sciences → Earth and Planetary Sciences → Geophysics)
Drilling and Well Engineering (Physical Sciences → Engineering → Ocean Engineering)
Hydraulic Fracturing and Reservoir Analysis (Physical Sciences → Engineering → Mechanical Engineering)

Related Documents

JOURNAL ARTICLE

Self-Supervised Vision Transformers for Cross-Modal Learning (Review)

Olena Stankevych, Danylo Matviikiv

Journal: Computer Design Systems Theory and Practice, Year: 2025, Vol: 7 (1), Pages: 37-51
JOURNAL ARTICLE

Multi-level Contrastive Learning for Self-Supervised Vision Transformers

Shentong Mo, Zhun Sun, Chao Li

Journal: 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Year: 2023, Pages: 2777-2786
JOURNAL ARTICLE

Patch-level Representation Learning for Self-supervised Vision Transformers

Sukmin Yun, Hankook Lee, Jaehyung Kim, Jinwoo Shin

Journal: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Year: 2022, Pages: 8344-8353