DUAL-HEAD FUSION NETWORK FOR IMAGE ENHANCEMENT
Yuhong Zhang (Shanghai Jiao Tong University); Hengsheng Zhang (Shanghai Jiao Tong University); Li Song (Shanghai Jiao Tong University); Rong Xie (Shanghai Jiao Tong University); Wenjun Zhang (Shanghai Jiao Tong University)
Image enhancement algorithms have made great progress recently. However, most existing methods construct a uniform enhancer that applies the same color transformation to all pixels, ignoring the local context information that is crucial for photographs and thus producing unsatisfactory results. To address these issues, we propose a novel dual-head fusion network for image enhancement that jointly considers global scene information and local content information. Our network consists of four lightweight modules. We first develop a dual-head feature extraction module to extract a global condition vector and a spatial context map. We then propose a context-aware retouching module and a global color rendering module to generate latent results. Finally, we employ a spatial-attention-based fusion module to adaptively aggregate the latent results. Experiments on public datasets show that our method consistently outperforms state-of-the-art methods both subjectively and objectively.
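The final aggregation step described above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: it assumes the fusion module produces a per-pixel attention map in [0, 1] and blends the two latent results convexly; all array names and shapes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, C = 4, 4, 3  # toy spatial size and channels

# Hypothetical latent results from the two branches (random stand-ins):
r_local = rng.random((H, W, C))    # context-aware retouching output
r_global = rng.random((H, W, C))   # global color rendering output

# Spatial attention map: one weight per pixel, squashed to [0, 1].
logits = rng.standard_normal((H, W, 1))
attn = 1.0 / (1.0 + np.exp(-logits))

# Adaptive aggregation: per-pixel convex blend of the two latent results,
# so locally retouched pixels and globally rendered pixels are mixed
# according to the learned attention.
fused = attn * r_local + (1.0 - attn) * r_global
```

Because the blend is convex, each fused pixel stays within the range spanned by the two latent results, which keeps the aggregation stable regardless of which branch dominates.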