DMFormer: Closing the Gap between CNN and Vision Transformers
Zimian Wei (School of Computer Science, National University of Defense Technology); Hengyue Pan (National University of Defense Technology); Lujun Li (Chinese Academy of Sciences); MengLong Lu (National University of Defense Technology); Xin Niu (National University of Defense Technology); Peijie Dong (School of Computer Science, National University of Defense Technology); Dongsheng Li (School of Computer Science, National University of Defense Technology)
Vision transformers have shown excellent performance in computer vision tasks. Since their self-attention mechanism is computationally expensive, recent works have tried to replace it with convolutional operations, which are more efficient and carry a built-in inductive bias. However, these efforts either ignore multi-level features or lack dynamic properties, leading to sub-optimal performance. In this paper, we propose a Dynamic Multi-level Attention mechanism (DMA), which captures different patterns of input images with multiple kernel sizes and enables input-adaptive weights through a gating mechanism. Based on DMA, we present an efficient backbone network named DMFormer. DMFormer adopts the overall architecture of vision transformers while replacing the self-attention mechanism with our proposed DMA. Extensive experimental results on the ImageNet-1K and ADE20K datasets demonstrate that DMFormer achieves state-of-the-art performance, outperforming similar-sized vision transformers (ViTs) and convolutional neural networks (CNNs).
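To make the idea concrete, below is a minimal PyTorch sketch of a DMA-style token mixer as described in the abstract: depthwise convolutions with several kernel sizes capture multi-level patterns, and a gating branch produces input-adaptive weights to fuse them. This is an illustrative assumption, not the authors' implementation; the module name, kernel sizes, and the pooling-plus-softmax gating design are all hypothetical choices.

```python
import torch
import torch.nn as nn


class DynamicMultiLevelAttention(nn.Module):
    """Hypothetical sketch of a DMA-style mixer: multi-kernel depthwise
    convolutions plus an input-adaptive gate (not the paper's exact code)."""

    def __init__(self, dim, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # One depthwise convolution per kernel size (multi-level features).
        self.branches = nn.ModuleList([
            nn.Conv2d(dim, dim, k, padding=k // 2, groups=dim)
            for k in kernel_sizes
        ])
        # Gating branch: global pooling -> one weight per branch (input-adaptive).
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(dim, len(kernel_sizes), 1),
            nn.Softmax(dim=1),
        )
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x):                          # x: (B, C, H, W)
        weights = self.gate(x)                     # (B, num_branches, 1, 1)
        mixed = sum(
            weights[:, i:i + 1] * branch(x)        # broadcast over channels
            for i, branch in enumerate(self.branches)
        )
        return self.proj(mixed)


if __name__ == "__main__":
    x = torch.randn(2, 64, 56, 56)
    dma = DynamicMultiLevelAttention(64)
    print(dma(x).shape)  # torch.Size([2, 64, 56, 56])
```

In a transformer-style backbone, a module like this would take the place of the self-attention block, with the usual feed-forward network and normalization layers kept around it.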