Hiding Images into Images With Real-World Robustness
Qichao Ying, Hang Zhou, Xianhan Zeng, Haisheng Xu, Zhenxing Qian, Xinpeng Zhang
SPS
Length: 00:06:52
Visual grounding aims to localize a target object in an image based on a given text description. Due to the innate complexity of language, reasoning over complex expressions and inferring the underlying relationship between an expression and an object in an image remain challenging. To address these issues, we propose a residual graph attention network for visual grounding. The proposed approach first builds an expression-guided relation graph and then performs multi-step reasoning, followed by matching against the target object. Its residual connections allow deeper layers than other graph network approaches, enabling better grounding of complex expressions. Moreover, to increase the diversity of the training data, we perform expression-respecting data augmentation based on copy-paste operations applied to pairs of source and target images. Extensive experiments show that the proposed approach outperforms other state-of-the-art graph network-based approaches, demonstrating its effectiveness.
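The multi-step reasoning over the relation graph can be pictured as stacked attention layers with residual connections, which is what lets the network go deeper without degradation. The following is a minimal NumPy sketch of one such step; the scaled dot-product scoring, the edge masking, and the function names are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def residual_graph_attention_step(H, A, W):
    """One reasoning step over the relation graph.

    H: (N, D) node features, one row per candidate object.
    A: (N, N) adjacency matrix of the expression-guided relation graph.
    W: (D, D) learned projection (square, so the residual shapes match).
    """
    # Project node features.
    Z = H @ W
    # Scaled dot-product attention scores between connected nodes only.
    scores = (Z @ Z.T) / np.sqrt(Z.shape[1])
    scores = np.where(A > 0, scores, -1e9)  # mask out non-edges
    attn = softmax(scores, axis=1)
    # Aggregate neighbor messages and add the residual connection,
    # which allows stacking many reasoning steps (deeper layers).
    return H + attn @ Z

# Toy example: 3 objects, fully connected relation graph.
rng = np.random.default_rng(0)
H = rng.standard_normal((3, 4))
A = np.ones((3, 3))
W = np.eye(4)  # identity projection keeps the sketch simple
H_next = residual_graph_attention_step(H, A, W)
```

In practice such a step would be repeated for a fixed number of reasoning iterations, after which the refined node features are matched against the expression embedding to select the target object.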