Image-Assisted Transformer In Zero-Resource Multi-Modal Translation
Ping Huang, Shiliang Sun, Hao Yang
Humans learn to speak and translate languages with the help of common knowledge about the external world, whereas standard machine translation relies only on parallel corpora. Zero-resource translation has therefore been proposed to explore how models can learn to translate with external knowledge but without any parallel corpora. Current models in this field usually combine an image encoder with a textual decoder, which requires extra effort to adapt the original machine translation model. Meanwhile, Transformers have driven great progress in natural language processing, yet they are rarely applied to zero-resource translation with an image pivot. In this paper, we investigate how to use visual information as an auxiliary hint for a Transformer-based system in a zero-resource translation scenario. Our model achieves state-of-the-art BLEU scores for zero-resource machine translation with an image pivot.
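The paper itself is not included on this page, but the abstract's central idea (feeding visual information to a Transformer as an auxiliary hint) can be sketched in code. Below is a minimal, hypothetical PyTorch sketch in which pre-extracted image region features are projected into the model dimension and prepended to the source token embeddings as extra "visual tokens", so that encoder self-attention and decoder cross-attention can both see the image. All names, dimensions, and the fusion strategy are illustrative assumptions, not the authors' implementation, and the sketch covers only the multimodal fusion, not the zero-resource (image-pivot) training scheme.

import torch
import torch.nn as nn

class ImageAssistedTransformer(nn.Module):
    """Hypothetical sketch: a Transformer whose source side is augmented
    with projected image features (positional encodings omitted for brevity)."""

    def __init__(self, vocab_size, d_model=512, nhead=8,
                 num_layers=6, image_feat_dim=2048):
        super().__init__()
        self.src_embed = nn.Embedding(vocab_size, d_model)
        self.tgt_embed = nn.Embedding(vocab_size, d_model)
        # Project pre-extracted image features (e.g. pooled CNN region
        # features) into the Transformer's model dimension.
        self.img_proj = nn.Linear(image_feat_dim, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True)
        self.out_proj = nn.Linear(d_model, vocab_size)

    def forward(self, src_tokens, tgt_tokens, image_feats):
        # src_tokens: (B, S); tgt_tokens: (B, T); image_feats: (B, R, image_feat_dim)
        src = self.src_embed(src_tokens)            # (B, S, d_model)
        img = self.img_proj(image_feats)            # (B, R, d_model)
        # Prepend visual tokens so the text can attend to the image hint.
        src = torch.cat([img, src], dim=1)          # (B, R + S, d_model)
        tgt = self.tgt_embed(tgt_tokens)
        # Causal mask keeps the decoder autoregressive.
        tgt_mask = self.transformer.generate_square_subsequent_mask(
            tgt_tokens.size(1))
        hidden = self.transformer(src, tgt, tgt_mask=tgt_mask)
        return self.out_proj(hidden)                # (B, T, vocab_size)

# Toy usage with random placeholder data:
model = ImageAssistedTransformer(vocab_size=10000)
src = torch.randint(0, 10000, (2, 12))      # source-language token ids
tgt = torch.randint(0, 10000, (2, 9))       # shifted target token ids
img = torch.randn(2, 36, 2048)              # e.g. 36 region features per image
logits = model(src, tgt, img)               # -> torch.Size([2, 9, 10000])

Prepending visual tokens is only one plausible fusion choice; alternatives include a separate cross-attention over image regions or gating the encoder output with a pooled image vector.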
Chairs: Bhuvana Ramabhadran