JOURNAL ARTICLE

Image Super Resolution in Real World Using Variational Auto Encoder

Abstract

Most existing super-resolution methods, trained only on simulated datasets, struggle to achieve good performance on real-world scenes. In addition, well-aligned real-world image pairs spanning the high-resolution and low-resolution spaces are difficult to obtain for training. To tackle this problem, we propose a novel super-resolution framework based on the variational autoencoder. In particular, we first use a variational autoencoder to map both the synthetically degraded low-resolution images and the real-world low-resolution images into the same latent space. Meanwhile, a second variational autoencoder maps the high-quality images into another latent space, and an additional convolutional neural network learns the mapping between the two latent spaces. The mapped latent representation is then decoded to reconstruct the high-resolution image. We compared the performance of the proposed method against state-of-the-art methods, including the SRGAN, ESRGAN, and CycleGAN algorithms. The experimental results demonstrate that the proposed method outperforms these methods on the real-world super-resolution task.
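The pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses tiny fully connected layers with random (untrained) weights in place of the paper's convolutional encoders and decoders, and a single linear map as a stand-in for the latent-mapping CNN; all dimensions and names are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(in_dim, out_dim):
    # Hypothetical linear layer (random weights, zero bias); a stand-in
    # for the trained convolutional layers in the paper.
    return rng.normal(0.0, 0.1, (in_dim, out_dim)), np.zeros(out_dim)

def forward(x, layer):
    W, b = layer
    return x @ W + b

class TinyVAE:
    """Minimal VAE sketch: the encoder outputs a mean and log-variance,
    sampling uses the reparameterization trick, and the decoder maps the
    latent vector back to image space."""
    def __init__(self, x_dim, z_dim):
        self.enc_mu = dense(x_dim, z_dim)
        self.enc_logvar = dense(x_dim, z_dim)
        self.dec = dense(z_dim, x_dim)

    def encode(self, x):
        mu = forward(x, self.enc_mu)
        logvar = forward(x, self.enc_logvar)
        # Reparameterization trick: z = mu + sigma * eps
        eps = rng.standard_normal(mu.shape)
        return mu + np.exp(0.5 * logvar) * eps

    def decode(self, z):
        return forward(z, self.dec)

# One VAE for low-resolution images (degraded and real-world LR images
# share this latent space), one VAE for high-quality images.
lr_dim, hr_dim, z_dim = 16 * 16, 64 * 64, 32
vae_lr = TinyVAE(lr_dim, z_dim)
vae_hr = TinyVAE(hr_dim, z_dim)
# The paper uses a CNN between the two latent spaces; a single linear
# map is used here purely for illustration.
latent_map = dense(z_dim, z_dim)

def super_resolve(lr_batch):
    z_lr = vae_lr.encode(lr_batch)    # LR image -> shared LR latent space
    z_hr = forward(z_lr, latent_map)  # LR latent -> HR latent
    return vae_hr.decode(z_hr)        # HR latent -> reconstructed HR image

sr = super_resolve(rng.standard_normal((4, lr_dim)))
print(sr.shape)  # (4, 4096): four 64x64 images, flattened
```

In an actual training setup each VAE would be optimized with a reconstruction loss plus a KL-divergence term, and the latent mapping network would be trained on paired latent codes; none of that is shown here.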

Keywords:
Computer science; Artificial intelligence; Autoencoder; Convolutional neural network; Image resolution; Computer vision; Pattern recognition; Deep learning; Algorithm

Metrics

Cited By: 0
FWCI (Field-Weighted Citation Impact): 0.00
References: 7
Citation Normalized Percentile: 0.19

Topics

All under Physical Sciences → Computer Science → Computer Vision and Pattern Recognition:

Advanced Image Processing Techniques
Advanced Vision and Imaging
Image and Signal Denoising Methods