RANSP: Ranking Attention Network for Saliency Prediction on Omnidirectional Images
Dandan Zhu, Yongqing Chen, Tian Han, Defang Zhao, Yucheng Zhu, Qiangqiang Zhou, Guangtao Zhai, Xiaokang Yang
SPS
Various convolutional neural network (CNN)-based methods have shown the ability to boost the performance of saliency prediction on omnidirectional images (ODIs). However, these methods achieve sub-optimal accuracy because not all of the features extracted by the CNN model are useful for the final fine-grained saliency prediction: some features are redundant and have a negative impact on the result. To tackle this problem, we propose a novel Ranking Attention Network for Saliency Prediction (RANSP) of head fixations on ODIs. Specifically, a part-guided attention (PA) module and a channel-wise feature (CF) extraction module are integrated into a unified framework and trained in an end-to-end manner for fine-grained saliency prediction. To better utilize the channel-wise feature maps, we further propose a new Ranking Attention Module (RAM), which automatically ranks these maps by their scores and selects the most informative ones for fine-grained saliency prediction. Extensive experiments demonstrate the effectiveness of our method for saliency prediction on ODIs.
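The abstract does not give the exact formulation of the Ranking Attention Module, but the stated idea — rank channel-wise feature maps by a score and keep only the most informative ones — can be sketched as follows. This is a minimal illustrative NumPy sketch, not the paper's implementation; the softmax weighting over the selected channels and the function name `ranking_attention` are assumptions introduced here for clarity.

```python
import numpy as np

def ranking_attention(feature_maps, scores, k):
    """Illustrative sketch of a ranking-attention step (not the paper's code).

    feature_maps : (C, H, W) array of channel-wise feature maps.
    scores       : (C,) per-channel importance scores (assumed given,
                   e.g. produced by a learned scoring branch).
    k            : number of top-ranked channels to keep.
    Returns an (H, W) map combining the selected channels.
    """
    order = np.argsort(scores)[::-1]   # rank channels by score, descending
    top = order[:k]                    # indices of the top-k channels
    # softmax weights over the selected scores (an assumed design choice)
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()
    # weighted combination of the selected maps -> (H, W)
    return np.tensordot(w, feature_maps[top], axes=(0, 0))
```

With four constant-valued channels and one channel scored far above the rest, the output is dominated by that channel's map, which is the selection behavior the module is meant to provide.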