COUPLED FEATURE LEARNING VIA STRUCTURED CONVOLUTIONAL SPARSE CODING FOR MULTIMODAL IMAGE FUSION
Farshad G. Veshki, Sergiy A. Vorobyov
IEEE Signal Processing Society (SPS)
A novel method for learning correlated features in multimodal images, based on convolutional sparse coding, is presented, with applications to image fusion. In particular, the correlated features are captured as coupled filters in convolutional dictionaries, while the shared and independent features are approximated using separate convolutional sparse codes and a common dictionary. The resulting optimization problem is solved using the alternating direction method of multipliers (ADMM). The coupled filters are fused according to a maximum-variance rule, and the sparse codes according to a maximum-absolute-value rule. The proposed method does not require any pre-learning stage. Experimental evaluations on medical and infrared-visible image datasets demonstrate the superiority of our method over state-of-the-art algorithms in preserving details and local intensities, as well as in objective metrics.
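The two fusion rules named in the abstract can be illustrated with a minimal NumPy sketch. This is a hypothetical illustration, not the authors' implementation: the function names, array shapes, and element-wise treatment of the codes are assumptions made for clarity.

```python
import numpy as np

def fuse_sparse_codes(x1, x2):
    """Maximum-absolute-value rule (sketch): at each position, keep the
    sparse coefficient with the larger magnitude across the two modalities."""
    return np.where(np.abs(x1) >= np.abs(x2), x1, x2)

def fuse_coupled_filters(d1, d2):
    """Maximum-variance rule (sketch): for a pair of coupled filters,
    keep the filter whose coefficients have the larger variance,
    i.e., the one presumed to carry more structural detail."""
    return d1 if np.var(d1) >= np.var(d2) else d2

# Toy example: fuse two small code maps and one coupled filter pair.
x1 = np.array([1.0, -3.0, 0.5])
x2 = np.array([-2.0, 1.0, 0.0])
fused_codes = fuse_sparse_codes(x1, x2)

d1 = np.array([0.0, 1.0, 0.0])   # lower-variance filter
d2 = np.array([-2.0, 3.0, -1.0]) # higher-variance filter
fused_filter = fuse_coupled_filters(d1, d2)
```

In the toy example, `fused_codes` selects `-2.0`, `-3.0`, and `0.5` element-wise, and `fused_filter` is `d2`, whose coefficients have the larger variance.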