
Yihao Luo, Xiang Cao, Juntao Zhang, Tianjiang Wang, Qi Feng, Peng Cheng

12 May 2022

It is a common paradigm in object detection frameworks to perform multi-scale detection. However, each scale is treated equally during training. In this paper, we carefully study the objective imbalance in multi-scale detector training. We argue that the loss at each scale is neither equally important nor independent. Unlike existing solutions that set fixed multi-task weights, we dynamically optimize the loss weight of each scale during training. Specifically, we propose an Adaptive Variance Weighting (AVW) scheme that balances the multi-scale loss according to its statistical variance. We then develop a novel Reinforcement Learning Optimization (RLO) that probabilistically selects the weighting scheme during training. The proposed dynamic methods make better use of the multi-scale training loss without adding computational cost or learnable parameters to backpropagation. Experiments on the Pascal VOC and MS COCO benchmarks validate the effectiveness of our proposed methods.