MULTI-SCALE REINFORCEMENT LEARNING STRATEGY FOR OBJECT DETECTION
Yihao Luo, Xiang Cao, Juntao Zhang, Leixilan Pan, Tianjiang Wang, Qi Feng
Feature Pyramid Network (FPN) has become a common detection paradigm by enriching multi-scale features with strong semantics. However, most FPN-based methods treat each feature map equally and sum the per-scale losses without distinction, which can lead to suboptimal overall performance. In this paper, we propose a Multi-scale Reinforcement Learning Strategy (MRLS) for balanced multi-scale training. First, we design Dynamic Feature Fusion (DFF) to dynamically magnify the impact of the more important feature maps in FPN. Second, we introduce Compensatory Scale Training (CST) to strengthen the supervision of under-trained scales. We regard the whole detector as a reinforcement learning system whose state is based on the multi-scale loss, and we develop the corresponding action, reward, and policy. Unlike approaches that enrich the model architecture, MRLS adds no extra modules or computational burden to the baselines. Experiments on the MS COCO and PASCAL VOC benchmarks demonstrate that our method significantly improves the performance of commonly used object detectors.
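To make the core idea concrete, the following is a minimal PyTorch sketch of re-weighting per-scale FPN losses so that under-trained scales receive stronger supervision. The softmax weighting, the `temperature` parameter, and the normalization are illustrative assumptions standing in for the paper's RL-derived policy; they are not the authors' DFF/CST formulation.

```python
import torch

def weighted_multiscale_loss(scale_losses, temperature=1.0):
    """Combine per-scale FPN losses with dynamic weights.

    Illustrative only: MRLS derives its weighting from a
    reinforcement-learning policy over the multi-scale loss state;
    here a simple softmax over detached losses stands in for it.
    """
    losses = torch.stack(scale_losses)  # shape: (num_scales,)
    # Higher-loss (under-trained) scales receive larger weights.
    weights = torch.softmax(losses.detach() / temperature, dim=0)
    # Rescale so the weights sum to num_scales, keeping the total
    # loss magnitude comparable to an unweighted sum.
    weights = weights * len(scale_losses)
    return (weights * losses).sum()

# Usage with hypothetical per-level losses from an FPN detector head.
per_scale = [torch.tensor(2.3, requires_grad=True),
             torch.tensor(0.9, requires_grad=True),
             torch.tensor(1.4, requires_grad=True)]
total = weighted_multiscale_loss(per_scale)
total.backward()
```

Because the weights are computed from detached losses, gradients flow only through the weighted sum itself, consistent with the abstract's claim that the strategy adds no extra modules or computational burden.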