PRIOR VISUAL RELATIONSHIP REASONING FOR VISUAL QUESTION ANSWERING
Zhuoqian Yang, Zengchang Qin, Jing Yu, Tao Wan
Visual Question Answering (VQA) is a representative task of cross-modal reasoning in which an image and a free-form question in natural language are presented, and the correct answer must be determined using both visual and textual information. A key challenge in VQA is reasoning with semantic clues in the visual content under the guidance of the question. In this paper, we propose the Scene Graph Convolutional Network (SceneGCN) to jointly reason about object properties and their semantic relations in order to derive the correct answer. Visual relationships are projected into a semantic space learned under the constraints of visual context and language priors. Through comprehensive experiments on two challenging datasets, GQA and VQA 2.0, we demonstrate the effectiveness and interpretability of the new model.
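To make the core idea concrete, below is a minimal sketch of one relation-conditioned graph convolution layer over a scene graph, in the spirit of what the abstract describes. The layer names, feature dimensions, and the exact message function are illustrative assumptions; the paper's actual SceneGCN architecture may differ.

    # A hypothetical sketch of a scene-graph convolution layer (PyTorch).
    # Assumption: each object node is updated by aggregating messages from
    # its neighbors, with each message conditioned on the learned semantic
    # embedding of the relation between the two objects.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SceneGraphConvLayer(nn.Module):
        def __init__(self, obj_dim: int, rel_dim: int, out_dim: int):
            super().__init__()
            # Message function: neighbor feature + relation embedding -> message
            self.message = nn.Linear(obj_dim + rel_dim, out_dim)
            # Transform for the node's own features (self-loop)
            self.self_loop = nn.Linear(obj_dim, out_dim)

        def forward(self, obj_feats, rel_embs, edges):
            # obj_feats: (N, obj_dim)  visual features for N detected objects
            # rel_embs:  (E, rel_dim)  semantic embeddings for E relations
            # edges:     (E, 2)        [subject_index, object_index] per relation
            subj, obj = edges[:, 0], edges[:, 1]
            # One message per edge, conditioned on the relation embedding
            msgs = self.message(torch.cat([obj_feats[obj], rel_embs], dim=-1))
            # Sum incoming messages at each subject node
            agg = torch.zeros(obj_feats.size(0), msgs.size(-1),
                              device=obj_feats.device)
            agg.index_add_(0, subj, msgs)
            # Combine aggregated messages with the node's own features
            return F.relu(self.self_loop(obj_feats) + agg)

    # Toy usage: 4 objects with 2048-d features, 3 relations with 300-d embeddings
    layer = SceneGraphConvLayer(obj_dim=2048, rel_dim=300, out_dim=512)
    objs = torch.randn(4, 2048)
    rels = torch.randn(3, 300)
    edges = torch.tensor([[0, 1], [1, 2], [3, 0]])
    out = layer(objs, rels, edges)  # -> (4, 512) updated object features

Conditioning each message on a relation embedding (rather than treating all edges identically) is what lets such a layer exploit semantic relations between objects; question guidance and the language-prior constraints on the relation space would be added on top of this basic building block.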