High-quality (HQ) microscopy images provide more detailed information for modern life science research and quantitative image analyses. In practice, however, HQ microscopy images are often unavailable or suffer from blurring artifacts. Compared with natural images, such low-quality (LQ) microscopy images usually share some visual characteristics: repeating patterns, more complex structures, and less informative backgrounds. Despite the promising performance of deep convolutional neural networks (CNN) for natural image deblurring, they often suffer from large model sizes, heavy computation costs, or low throughput, all of which are critical concerns for high-throughput microscopy image deblurring. To address these problems, we collect HQ electron microscopy and histology datasets and propose a graph reasoning attention network (GRAN). Specifically, we treat deep feature points as embedded visual components, build a graph describing the relationships between all pairs of visual components, and perform reasoning in the graph with a graph convolutional network. The reasoning results are then transferred as attention, and residual learning is introduced to form the graph reasoning attention block (GRAB). Extensive experiments show the effectiveness of our proposed GRAN, and our two collected datasets will be made public to benefit the community.
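To make the block description above concrete, the following is a minimal PyTorch sketch of what such a graph reasoning attention block might look like. The class name GRABSketch, the embedding size, the single graph-convolution step, and the sigmoid gating are all illustrative assumptions made here, not the paper's actual implementation.

```python
# Illustrative sketch of a graph-reasoning attention block (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class GRABSketch(nn.Module):
    """Treats each spatial feature point as a graph node, reasons over all pairs,
    and uses the result as an attention map with a residual connection."""

    def __init__(self, channels: int, embed_dim: int = 32):
        super().__init__()
        # 1x1 convs project each feature point into "visual component" embeddings.
        self.query = nn.Conv2d(channels, embed_dim, kernel_size=1)
        self.key = nn.Conv2d(channels, embed_dim, kernel_size=1)
        self.value = nn.Conv2d(channels, embed_dim, kernel_size=1)
        # Node-feature transform applied after message passing over the graph.
        self.gcn = nn.Linear(embed_dim, embed_dim)
        # Map reasoned node features back to an attention map over the input channels.
        self.to_attention = nn.Conv2d(embed_dim, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape

        q = self.query(x).flatten(2).transpose(1, 2)   # (b, n, d), n = h * w nodes
        k = self.key(x).flatten(2)                      # (b, d, n)
        v = self.value(x).flatten(2).transpose(1, 2)    # (b, n, d)

        # Adjacency describing relationships between all pairs of visual components,
        # row-normalized with softmax.
        adj = F.softmax(torch.bmm(q, k) / (q.shape[-1] ** 0.5), dim=-1)  # (b, n, n)

        # One step of graph reasoning: propagate along edges, then transform nodes.
        reasoned = F.relu(self.gcn(torch.bmm(adj, v)))   # (b, n, d)

        # Transfer the reasoning results as an attention map over the input features.
        reasoned = reasoned.transpose(1, 2).reshape(b, -1, h, w)
        attention = torch.sigmoid(self.to_attention(reasoned))

        # Residual learning: modulate the input and add it back.
        return x + x * attention


if __name__ == "__main__":
    block = GRABSketch(channels=64)
    feats = torch.randn(2, 64, 32, 32)
    print(block(feats).shape)  # torch.Size([2, 64, 32, 32])
```

In this sketch the attention is applied multiplicatively and combined with an identity shortcut, so stacking several such blocks inside a deblurring backbone would preserve the input signal while letting the graph reasoning refine it; the paper's exact arrangement of GRABs within GRAN may differ.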