CASCADED CONTEXT DEPENDENCY: AN EXTREMELY LIGHTWEIGHT MODULE FOR DEEP CONVOLUTIONAL NEURAL NETWORKS
Xu Ma, Zhinan Qiao, Jingda Guo, Sihai Tang, Qi Chen, Qing Yang, Song Fu
SPS
In this paper, we present the cascaded context dependency module, a highly lightweight module that improves the performance of deep convolutional neural networks on various visual tasks. Inspired by feature pyramids in object detection and context dependency in image recognition, we cascade the contexts of multi-scale feature maps to aggregate local and global information within a local region. We further extract the dependency between the original input and the cascaded contexts for feature re-calibration. Because it employs no learnable layers, our method introduces almost no additional parameters or computation. Furthermore, our building module can be seamlessly plugged into many existing CNN architectures to improve their performance. Experiments on the ImageNet and MS COCO benchmarks indicate that our method achieves results on par with or better than related work. Quantitatively, we achieve an absolute 1.42% (77.3137% vs. 75.8974%) top-1 classification accuracy improvement with ResNet-50 on the ImageNet 2012 validation set, with negligible computational overhead. Our method also yields significant gains on the MS COCO benchmark for object detection. All code and models are made publicly available.
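To make the idea concrete, below is a minimal, parameter-free NumPy sketch of the general recipe the abstract describes: pool the feature map at several scales, aggregate the resulting contexts, and re-calibrate the input with a gating map derived from that aggregated context. This is an illustrative assumption, not the authors' implementation; the function names (`adaptive_avg_pool`, `cascaded_context_recalibrate`), the choice of scales, the averaging of contexts, and the sigmoid gate are all hypothetical stand-ins for the paper's actual cascading and dependency operators.

```python
import numpy as np

def adaptive_avg_pool(x, out_size):
    """Average-pool a (C, H, W) feature map to (C, out_size, out_size)."""
    C, H, W = x.shape
    out = np.zeros((C, out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            h0, h1 = i * H // out_size, (i + 1) * H // out_size
            w0, w1 = j * W // out_size, (j + 1) * W // out_size
            out[:, i, j] = x[:, h0:h1, w0:w1].mean(axis=(1, 2))
    return out

def upsample_nearest(x, H, W):
    """Nearest-neighbor upsample a (C, h, w) map back to (C, H, W)."""
    _, h, w = x.shape
    rows = np.arange(H) * h // H
    cols = np.arange(W) * w // W
    return x[:, rows][:, :, cols]

def cascaded_context_recalibrate(x, scales=(1, 2, 4)):
    """Parameter-free sketch: aggregate multi-scale pooled contexts
    (from global, scale=1, to more local grids), then re-weight the
    input by a sigmoid gate computed from the aggregated context.
    The specific scales and the averaging are illustrative choices."""
    C, H, W = x.shape
    context = np.zeros_like(x)
    for s in scales:
        pooled = adaptive_avg_pool(x, s)     # context at this scale
        context += upsample_nearest(pooled, H, W)
    context /= len(scales)                   # aggregate across scales
    gate = 1.0 / (1.0 + np.exp(-context))    # sigmoid attention map
    return x * gate                          # feature re-calibration
```

Because the module uses only pooling, interpolation, and element-wise operations, it adds no learnable parameters, which is consistent with the abstract's claim of negligible overhead; dropped into a residual block, it would re-weight features using both global and local context.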