JOURNAL ARTICLE

Retiformer: Retinex-Based Enhancement In Transformer For Low-Light Image

Abstract

Transformer-based methods have shown impressive potential in many low-level vision tasks but are rarely used for low-light image enhancement (LLIE). Applying a Transformer directly to LLIE produces unnatural visual effects, which motivated us to draw on Retinex theory. After experimentation and analysis, we propose Retiformer. Retiformer decomposes images into reflectance and illumination attention maps via Retinex Window Self-Attention (R-WSA), which replaces element-wise multiplication with the attention mechanism. Building on R-WSA, we place a Decom-Retiformer block at the head and an Enhance-Retiformer block at the tail of a Transformer-based backbone; together they decompose and align the reflectance and illumination components, much like RetinexNet. With this pipeline, Retiformer combines the strengths of Transformers and Retinex theory and achieves state-of-the-art performance among Retinex-based methods.
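The Retinex theory the abstract builds on models an observed image as the element-wise product of reflectance and illumination, I = R ⊙ L. The sketch below illustrates that classical decomposition and a simple illumination-only enhancement in NumPy; the channel-maximum illumination estimate and gamma correction are common heuristics assumed here for illustration, not the paper's learned R-WSA mechanism.

```python
import numpy as np

def retinex_decompose(image, eps=1e-6):
    """Classical Retinex decomposition: I = R * L (element-wise).

    Illumination L is estimated as the per-pixel channel maximum (a
    common heuristic, not the paper's learned attention maps);
    reflectance is then recovered as R = I / L.
    """
    illumination = image.max(axis=-1, keepdims=True)       # H x W x 1
    reflectance = image / (illumination + eps)             # H x W x 3
    return reflectance, illumination

def retinex_enhance(image, gamma=0.4, eps=1e-6):
    """Brighten a low-light image by gamma-correcting illumination only,
    then recomposing with the unchanged reflectance."""
    reflectance, illumination = retinex_decompose(image, eps)
    lifted = np.power(illumination, gamma)                 # lifts dark regions
    return np.clip(reflectance * lifted, 0.0, 1.0)

# Toy usage: a uniformly dark image in [0, 1] is brightened while its
# reflectance (color structure) is preserved.
low_light = np.full((4, 4, 3), 0.1)
enhanced = retinex_enhance(low_light)
```

Retinex-based deep methods such as RetinexNet learn this decomposition with networks; Retiformer's contribution, per the abstract, is to carry it out inside window self-attention rather than by explicit element-wise multiplication.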

Keywords:
Color constancy; Computer science; Artificial intelligence; Transformer; Computer vision; Image (mathematics); Engineering; Voltage

Metrics

Cited By: 4
FWCI (Field-Weighted Citation Impact): 0.73
Refs: 23
Citation Normalized Percentile: 0.65

Topics

Image Enhancement Techniques
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Advanced Image Fusion Techniques
Physical Sciences →  Engineering →  Media Technology
Advanced Image Processing Techniques
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition

© 2026 ScienceGate Book Chapters. All rights reserved.