09 Jun 2021

Humans learn to speak and translate language with the help of common knowledge of the external world, whereas standard machine translation relies solely on parallel corpora. Zero-resource translation has therefore been proposed to explore how models can learn to translate without any parallel corpora, drawing on external knowledge instead. Current models in this field usually pair an image encoder with a textual decoder, which requires extra effort to reuse the original machine translation model. Meanwhile, Transformers have achieved great progress in natural language processing, yet they are rarely applied to zero-resource translation with an image pivot. In this paper, we investigate how to use visual information as an auxiliary hint for a Transformer-based system in a zero-resource translation scenario. Our model achieves state-of-the-art BLEU scores in zero-resource machine translation with an image pivot.
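The abstract describes feeding visual information to a Transformer as an auxiliary hint. One common way to do this is to project an image feature into the model dimension and let it join the token sequence as an extra "token" the encoder can attend to. The sketch below illustrates that idea only; the function name, the prefix-fusion strategy, and the random projection (standing in for a learned linear layer) are all assumptions, not the paper's actual architecture.

```python
import numpy as np

def fuse_visual_hint(token_embeddings, image_feature):
    """Prepend a projected image feature to the token embedding
    sequence so a Transformer encoder could attend to it.
    Hypothetical sketch: the paper's real fusion may differ."""
    d_model = token_embeddings.shape[-1]
    # Project the image feature to the model dimension. A fixed
    # random projection stands in for a learned linear layer.
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((image_feature.shape[-1], d_model))
    proj /= np.sqrt(image_feature.shape[-1])
    visual_token = image_feature @ proj  # shape: (d_model,)
    # The visual embedding joins the sequence as one extra "token".
    return np.vstack([visual_token, token_embeddings])  # (T+1, d_model)

tokens = np.zeros((5, 512))   # 5 source-token embeddings, d_model = 512
image = np.ones(2048)         # e.g. a CNN image feature vector
fused = fuse_visual_hint(tokens, image)
print(fused.shape)            # (6, 512)
```

After fusion, the sequence can be passed to a standard Transformer encoder unchanged, which is what makes this style of auxiliary hint attractive compared with architectures that need a dedicated image encoder wired into the decoder.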

Chairs:
Bhuvana Ramabhadran
