Channel Attention Based Generative Network For Robust Visual Tracking
Ying Hu, Jian Yang, Yan Yan
In recent years, Siamese trackers have achieved great success in visual tracking, delivering competitive performance in both accuracy and speed. However, they may suffer from performance degradation under large pose variations, out-of-plane rotations, and similar challenges. In this paper, we propose a novel real-time Channel Attention based Generative Network (AGSNet) for robust visual tracking. AGSNet can better recognize targets undergoing significant appearance variations and distinguish them from similar distractors. The model introduces channel-favored feature attention in the template branch to enhance discriminative capacity, and uses a simple generative network in the instance branch to capture a variety of target appearance changes. With end-to-end offline training, our model achieves robust visual tracking over a long temporal span. Experimental results on the OTB-2013 and OTB-2015 benchmark datasets demonstrate that the proposed tracker outperforms other approaches while running at more than 40 frames per second.
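To make the channel attention idea concrete, below is a minimal PyTorch sketch of a squeeze-and-excitation-style channel attention module applied to template-branch features before cross-correlation. The abstract does not specify AGSNet's exact attention formulation, so the module structure, names, and the reduction ratio here are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative SE-style channel attention for the template branch of a
# Siamese tracker. Hypothetical sketch; AGSNet's actual design may differ.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Squeeze: global average pooling collapses each channel map to a scalar.
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Excitation: a small bottleneck MLP predicts per-channel weights in (0, 1).
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)        # (B, C) channel descriptors
        w = self.fc(w).view(b, c, 1, 1)    # (B, C, 1, 1) attention weights
        return x * w                       # reweight channels, shape unchanged

# Usage: emphasize target-discriminative channels in the template features.
template_feat = torch.randn(1, 256, 6, 6)  # e.g., backbone output for the template
attn = ChannelAttention(channels=256)
weighted = attn(template_feat)             # same shape, channels reweighted
```

Reweighting channels this way lets the tracker emphasize feature channels that are most discriminative for the current target, which is one common route to the improved robustness against distractors that the abstract describes.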