Image Inpainting with Semantic-aware Transformer
Shiyu Chen (Southwest University of Science and Technology); Wenxin Yu (Southwest University of Science and Technology); Qi Wang (Southwest University of Science and Technology); Jun Gong (Beijing Institute of Technology); Peng Chen (Chengdu Hongchengyun Technology Co., Ltd)
Image inpainting has made great strides by benefiting from the ability of convolutional neural networks (CNNs) to understand high-level semantics. Recently, some studies have applied transformers to vision tasks to overcome the inability of convolution kernels to attend to long-range information. However, unlike other vision tasks, image inpainting suffers from heavy interference by damaged regions. We propose a new Semantic-Aware Transformer that, in addition to the self-attention block used in previous vision transformers, contains a block that learns semantics from a Quantized Semantic Vector Memory (QSVM). Specifically, to provide more valid information, the QSVM encodes and stores the semantic features of images as quantized vectors in a latent space. Experiments on different datasets demonstrate the effectiveness and superiority of our method compared with the existing state of the art.
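The abstract does not give implementation details, but a memory that "encodes and stores semantic features as quantized vectors in a latent space" is conceptually similar to a VQ-VAE-style codebook lookup. The sketch below is a minimal illustration of that idea under that assumption; the class name QuantizedSemanticVectorMemory, the codebook size, and the feature dimensions are hypothetical and not taken from the paper.

# Minimal sketch of a quantized semantic vector memory, assuming a
# VQ-VAE-style nearest-neighbour codebook lookup (not the authors' exact design).
import torch
import torch.nn as nn

class QuantizedSemanticVectorMemory(nn.Module):
    def __init__(self, num_vectors=512, dim=256):
        super().__init__()
        # Learnable codebook of quantized semantic vectors in latent space.
        self.codebook = nn.Embedding(num_vectors, dim)
        nn.init.uniform_(self.codebook.weight, -1.0 / num_vectors, 1.0 / num_vectors)

    def forward(self, features):
        # features: (B, N, dim) encoded semantic features (e.g. patch tokens).
        b, n, d = features.shape
        flat = features.reshape(-1, d)                        # (B*N, dim)
        # Squared Euclidean distance from each feature to every codebook entry.
        dist = (flat.pow(2).sum(1, keepdim=True)
                - 2 * flat @ self.codebook.weight.t()
                + self.codebook.weight.pow(2).sum(1))         # (B*N, num_vectors)
        indices = dist.argmin(dim=1)                          # nearest code per feature
        quantized = self.codebook(indices).reshape(b, n, d)
        # Straight-through estimator so gradients flow back to the encoder.
        quantized = features + (quantized - features).detach()
        return quantized, indices.reshape(b, n)

The straight-through estimator is the standard way to pass gradients through the non-differentiable nearest-neighbour lookup; whether the paper's QSVM is trained this way is not specified in the abstract.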