Encoder-Recurrent Decoder Network For Single Image Dehazing
An Dang, Toan Vu, Jia-Ching Wang
SPS
This paper develops a deep learning model, called the Encoder-Recurrent Decoder Network (ERDN), which recovers a clear image from a degraded hazy image without relying on the atmospheric scattering model. The proposed model consists of two key components: an encoder and a decoder. The encoder is built from residual efficient spatial pyramid (rESP) modules, so it can effectively process hazy images at any resolution and extract relevant features at multiple contextual levels. The decoder contains a recurrent module that sequentially aggregates the encoded features from high levels to low levels to generate haze-free images. The network is trained end-to-end on pairs of hazy and clear images. Experimental results on the RESIDE-Standard dataset demonstrate that the proposed model achieves competitive dehazing performance compared to state-of-the-art methods in terms of PSNR and SSIM.
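The abstract's two-part design can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the channel widths, dilation rates, number of levels, and the simple convolutional recurrence cell below are all assumptions chosen only to show the shape of the idea — parallel dilated convolutions with a residual connection standing in for the rESP block, and a shared cell that walks encoder features from the coarsest level back to full resolution standing in for the recurrent decoder.

```python
# Hypothetical sketch of the ERDN structure described in the abstract.
# All layer sizes and the recurrence cell are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class rESP(nn.Module):
    """Residual efficient spatial pyramid block (sketch): parallel
    dilated convolutions capture multiple contextual levels, and a
    residual connection preserves the input signal."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x):
        pyramid = torch.cat([b(x) for b in self.branches], dim=1)
        return x + F.relu(self.fuse(pyramid))

class Encoder(nn.Module):
    """Stacks rESP blocks with downsampling in between, returning a
    list of features from low (full-resolution) to high (coarse) levels."""
    def __init__(self, channels=16, levels=3):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.blocks = nn.ModuleList([rESP(channels) for _ in range(levels)])

    def forward(self, x):
        feats = []
        h = F.relu(self.stem(x))
        for blk in self.blocks:
            h = blk(h)
            feats.append(h)          # keep this level's features
            h = F.avg_pool2d(h, 2)   # move to the next (coarser) level
        return feats

class RecurrentDecoder(nn.Module):
    """Sequentially aggregates encoder features from the highest
    (coarsest) level down to the lowest, reusing one shared cell."""
    def __init__(self, channels=16):
        super().__init__()
        self.cell = nn.Conv2d(channels * 2, channels, 3, padding=1)
        self.out = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, feats):
        state = feats[-1]  # start from the coarsest level
        for f in reversed(feats[:-1]):
            state = F.interpolate(state, size=f.shape[-2:],
                                  mode="bilinear", align_corners=False)
            state = F.relu(self.cell(torch.cat([state, f], dim=1)))
        return torch.sigmoid(self.out(state))  # haze-free estimate in [0, 1]

class ERDN(nn.Module):
    """Encoder-Recurrent Decoder Network (illustrative sketch)."""
    def __init__(self, channels=16):
        super().__init__()
        self.encoder = Encoder(channels)
        self.decoder = RecurrentDecoder(channels)

    def forward(self, hazy):
        return self.decoder(self.encoder(hazy))
```

Because the encoder is fully convolutional and the decoder resizes its state to match each feature map, the sketch accepts inputs of varying resolution (here, any height/width divisible by 4), mirroring the "any resolution" claim; training end-to-end would pair such a model with a reconstruction loss between its output and the clear image.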