RETIFORMER: RETINEX-BASED ENHANCEMENT IN TRANSFORMER FOR LOW-LIGHT IMAGE
Junxiang Ruan (Tsinghua University); Xiangtao Kong (SIAT); Wenqi Huang (China Southern Power Grid); Wenming Yang (Tsinghua University)
SPS
Transformer-based methods have shown impressive potential in many low-level vision tasks but are rarely used for low-light image enhancement (LLIE). Applying a Transformer directly to LLIE produces unnatural visual effects. This phenomenon motivates us to draw on Retinex theory. After experimentation and analysis, we propose Retiformer. Retiformer decomposes images into reflectance and illumination attention maps with Retinex Window Self-Attention (R-WSA), which replaces the element-wise multiplication of Retinex theory with an attention mechanism. Built on R-WSA, a Decom-Retiformer block and an Enhance-Retiformer block are placed at the head and tail of a Transformer-based backbone, respectively; like RetinexNet, they decompose and then align the reflectance and illumination components. With this pipeline, Retiformer combines the advantages of the Transformer and Retinex theory and achieves state-of-the-art performance among Retinex-based methods.
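Retinex theory, which the abstract builds on, models an observed image as the element-wise product of a reflectance map and an illumination map. The following is a minimal classical sketch of that decomposition for intuition only; it is not the paper's learned R-WSA, and the max-channel illumination estimate is an assumption chosen for simplicity:

```python
import numpy as np

def retinex_decompose(img, eps=1e-6):
    """Toy Retinex decomposition: I ≈ R * L (element-wise).

    Illumination L is estimated here as the per-pixel maximum over color
    channels (a common classical heuristic, not the paper's method);
    reflectance R is then recovered by element-wise division.
    """
    L = img.max(axis=-1, keepdims=True)   # illumination estimate, shape (H, W, 1)
    R = img / (L + eps)                   # reflectance, broadcast over channels
    return R, L

# Toy check: the two components should multiply back to the input image.
img = (np.random.rand(4, 4, 3) * 0.2).astype(np.float32)  # simulated low-light image
R, L = retinex_decompose(img)
recon = R * L
print(np.allclose(recon, img, atol=1e-4))
```

In this classical view, enhancement amounts to brightening L while keeping R fixed; Retiformer instead learns both components, replacing the element-wise product with attention.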