PAIR DETR: TOWARD FASTER CONVERGENT DETR
Seyed Mehdi Iranmanesh (Amazon); Sherry X. Chen (University of California, Santa Barbara); Kuo-Chin Lien (Appen)
The DETR object detection approach applies the Transformer encoder and decoder architecture to detect objects in images and achieves promising performance. In this paper, we present a simple approach to address the main problem of DETR, its slow convergence, by using a representation learning technique. In this approach, we detect an object bounding box as a pair of keypoints, the top-left corner and the center, using two decoders. By detecting objects as paired keypoints, the model performs joint classification and pair association on the output queries of the two decoders. For the pair association, we propose utilizing a contrastive self-supervised learning algorithm that requires no specialized architecture. Experimental results on the MS COCO dataset show that Pair DETR reduces training epochs by 10x compared to the original DETR and 1.5x compared to Conditional DETR, while achieving consistently higher Average Precision scores.
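The pair-association idea described above can be illustrated with a contrastive objective: query embeddings of the same object from the two decoders (corner and center) form positive pairs, while embeddings of different objects act as negatives. The following is a minimal sketch, not the paper's actual implementation; the function name, the InfoNCE-style formulation, and the temperature value are illustrative assumptions.

```python
import numpy as np

def pair_association_loss(corner_emb, center_emb, temperature=0.1):
    """Illustrative InfoNCE-style loss for associating corner and
    center query embeddings.

    corner_emb, center_emb: (N, D) arrays where row i of each array
    is assumed to describe the same object (the positive pair);
    all other rows serve as in-batch negatives.
    """
    # L2-normalize so the dot product is a cosine similarity
    c = corner_emb / np.linalg.norm(corner_emb, axis=1, keepdims=True)
    z = center_emb / np.linalg.norm(center_emb, axis=1, keepdims=True)

    # Pairwise similarity logits; matched pairs lie on the diagonal
    logits = (c @ z.T) / temperature

    # Numerically stable log-softmax over each row
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # Cross-entropy against the diagonal (matched-pair) targets
    n = corner_emb.shape[0]
    return -log_prob[np.arange(n), np.arange(n)].mean()
```

Minimizing this loss pulls each corner query toward its matching center query while pushing it away from the centers of other objects, which is the kind of joint association the abstract attributes to the contrastive learning step.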