Affinity Learning with Blind-spot Self-Supervision for Image Denoising
Yuhongze Zhou (McGill University); Liguang Zhou (The Chinese University of Hong Kong, Shenzhen); Issam Hadj Laradji (ServiceNow); Tin Lun Lam (The Chinese University of Hong Kong, Shenzhen); Yangsheng Xu (Shenzhen Institute of Artificial Intelligence and Robotics for Society)
IEEE Signal Processing Society (SPS)
In this paper, we extend blind-spot based self-supervised denoising with affinity learning to remove noise from affected pixels. Inspired by image inpainting, we introduce a novel Mask Guided Residual Convolution (MGRConv) that learns a neighboring-pixel affinity map, gradually removing noise and refining the blind-spot denoising process. We show that mask convolution plays an important role in blind-spot denoising, since it is theoretically aligned with $\mathcal{J}$-invariance, the property on which blind-spot based self-supervised denoising frameworks are built. This theoretical analysis further motivates the use of more adaptive mask convolutions. MGRConv not only enables dynamic mask learning without external trainable parameters, but also preserves appropriate mask constraints through sigmoid activation and residual summation. It strikes a balance between partial convolution and learnable attention maps, and improves denoising performance over other inpainting convolutions with similar or even fewer parameters, memory, and training/inference time. Extensive experiments show that the proposed plug-and-play MGRConv helps blind-spot based denoising networks reach promising results on both existing single-image based and dataset based benchmarks.
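To make the mechanism concrete, the following is a minimal NumPy sketch of one mask-guided convolution step in the spirit described above: features are convolved after masking, and the mask itself is refined with a parameter-free filter, squashed by a sigmoid, and combined with the input mask by residual summation. The function name `mgrconv_step`, the 3x3 average filter used for the mask branch, and the `-0.5` shift (which keeps the residual update non-negative, since the sigmoid of a non-negative response is at least 0.5) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def conv2d(x, k):
    """Naive 'same' 2-D cross-correlation with zero padding."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mgrconv_step(feat, mask, w_feat):
    """One hypothetical MGRConv-style step (sketch, not the paper's code).

    feat:   2-D feature map
    mask:   2-D soft validity mask in [0, 1]; blind-spot pixels start at 0
    w_feat: learned feature-convolution kernel (supplied by the caller)
    """
    # Feature branch: convolve only the mask-weighted (valid) content.
    out = conv2d(feat * mask, w_feat)
    # Mask branch: parameter-free average filter (assumption), so the mask
    # update adds no external trainable parameters.
    avg = np.ones((3, 3)) / 9.0
    # Sigmoid keeps the update bounded; residual summation lets validity
    # grow into blind-spot pixels; clipping keeps the mask in [0, 1].
    mask_new = np.clip(mask + sigmoid(conv2d(mask, avg)) - 0.5, 0.0, 1.0)
    return out, mask_new
```

Iterating `mgrconv_step` progressively raises the mask at blind-spot positions toward 1, so later layers can aggregate information there, which mirrors the gradual refinement the abstract describes.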