CONFERENCE PAPER

ATLAS-MVSNet: Attention Layers for Feature Extraction and Cost Volume Regularization in Multi-View Stereo

Rafael Weilharter, Friedrich Fraundorfer

Year: 2022   Published in: 2022 26th International Conference on Pattern Recognition (ICPR)   Pages: 3557-3563

Abstract

We present ATLAS-MVSNet, an end-to-end deep learning architecture relying on local attention layers for depth map inference from multi-view images. Distinct from existing works, we introduce a novel module design for neural networks, which we term the hybrid attention block, that utilizes the latest insights into attention in vision models. We are able to reap the benefits of attention in both the carefully designed multi-stage feature extraction network and the cost volume regularization network. Our new approach displays significant improvement over its counterpart based purely on convolutions. While many state-of-the-art methods need multiple high-end GPUs in the training phase, we are able to train our network on a single consumer-grade GPU. ATLAS-MVSNet exhibits excellent performance, especially in terms of accuracy, on the DTU dataset. Furthermore, ATLAS-MVSNet ranks amongst the top published methods on the online Tanks and Temples benchmark.
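
The abstract describes attention being applied both in the multi-stage feature extraction network and in the cost volume regularization network, built around a "hybrid attention block". The sketch below is a rough illustration only, not the authors' exact design: it assumes such a block fuses a convolutional branch with a windowed multi-head self-attention branch, and all class and parameter names (HybridAttentionBlock, window, heads) are hypothetical.

# Minimal sketch of a hybrid attention block in PyTorch. Illustrative
# assumption, NOT the ATLAS-MVSNet design: a convolutional branch and a
# local (windowed) multi-head self-attention branch, fused by a 1x1 conv.
import torch
import torch.nn as nn


class HybridAttentionBlock(nn.Module):
    def __init__(self, channels: int, heads: int = 4, window: int = 8):
        super().__init__()
        self.window = window
        # Convolutional branch: cheap local feature aggregation.
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Attention branch: multi-head self-attention inside each window.
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)
        # 1x1 fusion of the two branches back to the input width.
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        conv_out = self.conv(x)

        # Partition the feature map into non-overlapping windows so the
        # attention cost stays local (assumes h and w divisible by window).
        win = self.window
        t = x.view(b, c, h // win, win, w // win, win)
        t = t.permute(0, 2, 4, 3, 5, 1).reshape(-1, win * win, c)
        t = self.norm(t)
        attn_out, _ = self.attn(t, t, t, need_weights=False)

        # Reverse the window partition back to (B, C, H, W).
        attn_out = attn_out.reshape(b, h // win, w // win, win, win, c)
        attn_out = attn_out.permute(0, 5, 1, 3, 2, 4).reshape(b, c, h, w)

        # Fuse both branches and keep a residual connection.
        return x + self.fuse(torch.cat([conv_out, attn_out], dim=1))


if __name__ == "__main__":
    block = HybridAttentionBlock(channels=32)
    feat = torch.randn(2, 32, 64, 80)  # e.g. a feature map from the extractor
    print(block(feat).shape)  # torch.Size([2, 32, 64, 80])

Keeping the attention local (per window) is what makes such a block affordable at the feature-map and cost-volume resolutions used in multi-view stereo; a global attention layer would scale quadratically in the number of pixels.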

Keywords:
Computer science, Inference, Regularization, Artificial intelligence, Deep learning, Benchmark, Feature extraction, Deep neural networks, Architecture, Network architecture, Artificial neural network, Machine learning, Pattern recognition

Metrics

Cited By: 9
FWCI (Field Weighted Citation Impact): 0.62
References: 48
Citation Normalized Percentile: 0.76


Topics

Advanced Vision and Imaging (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Robotics and Sensor-Based Localization (Physical Sciences → Engineering → Aerospace Engineering)
Advanced Image and Video Retrieval Techniques (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)