OAFormer: Learning Occlusion Distinguishable Feature for Amodal Instance Segmentation
Zhixuan Li (Peking University); Ruohua Shi (Peking University); Tiejun Huang (Peking University); Tingting Jiang (Peking University)
The Amodal Instance Segmentation (AIS) task aims to infer the complete masks of occluded instances. In many cases, existing methods treat occluded objects as unoccluded ones, and vice versa, leading to inaccurate predictions. This happens because existing AIS methods do not explicitly use the occlusion rate of each object as supervision, even though occlusion information is critical for recognizing whether a target object is occluded. Hence we believe it is vital for a method to distinguish the degree of occlusion of each instance. In this paper, a simple yet effective occlusion-aware transformer-based model, OAFormer, is proposed for accurate amodal instance segmentation. The goal of OAFormer is to learn occlusion-discriminative features, and novel components are proposed to make OAFormer occlusion distinguishable. We conduct extensive experiments on two challenging AIS datasets to evaluate the effectiveness of our method. OAFormer outperforms state-of-the-art methods by large margins.
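The abstract does not specify how the per-instance occlusion rate is obtained; a common convention in AIS datasets is to derive it from the annotated amodal (complete) and visible masks as the fraction of the amodal extent that is hidden. Below is a minimal sketch under that assumption; the function name `occlusion_rate` and the toy masks are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

def occlusion_rate(amodal_mask: np.ndarray, visible_mask: np.ndarray) -> float:
    """Fraction of an instance's amodal extent that is occluded.

    amodal_mask / visible_mask: boolean arrays of shape (H, W); the visible
    mask is assumed to be a subset of the amodal (complete) mask.
    """
    amodal_area = amodal_mask.sum()
    if amodal_area == 0:
        return 0.0  # degenerate annotation; occlusion is undefined
    hidden_area = np.logical_and(amodal_mask, ~visible_mask).sum()
    return float(hidden_area) / float(amodal_area)

# Toy example: an instance whose right half is hidden behind another object.
amodal = np.zeros((4, 4), dtype=bool)
amodal[1:3, 0:4] = True           # complete (amodal) extent: 8 pixels
visible = np.zeros((4, 4), dtype=bool)
visible[1:3, 0:2] = True          # only the left half is visible: 4 pixels

print(occlusion_rate(amodal, visible))  # 0.5 -> half the object is occluded
```

A scalar target of this form could serve as auxiliary supervision alongside the mask losses, which is consistent with the abstract's claim that occlusion rates are used as an explicit training signal.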