09 Jun 2021

Labelling image-sentence pairs is expensive, and some unsupervised image captioning methods show promising results on caption generation. However, the generated captions are often only weakly relevant to the images because these methods depend excessively on the sentence corpus. To overcome this drawback, we focus on the correspondence between image and sentence to construct captions with a better mapping relation. In this paper, we present a novel triple sequence generative adversarial net comprising an image generator, a discriminator, and a sentence generator. The image generator produces image regions for words; the sentence corpus then guides the sentence generator conditioned on those generated image regions; and the discriminator judges the relevance between the words in a sentence and the generated image regions. In our experiments, we train the model on a large number of unpaired images and sentences in a fully unsupervised, unpaired setting. The results demonstrate that our method achieves significant improvements over all baselines.
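
As a rough illustration of how the three components fit together, below is a minimal PyTorch-style sketch. All module names, dimensions, and layer choices are assumptions for illustration only; the abstract does not specify the authors' implementation.

```python
# Minimal sketch of the triple sequence GAN described in the abstract.
# Architecture details (dimensions, layers) are hypothetical, not the paper's.
import torch
import torch.nn as nn

class ImageGenerator(nn.Module):
    """Maps each word embedding to a generated image-region feature."""
    def __init__(self, word_dim=300, region_dim=2048, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(word_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, region_dim),
        )

    def forward(self, word_embs):            # (batch, seq, word_dim)
        return self.net(word_embs)           # (batch, seq, region_dim)

class SentenceGenerator(nn.Module):
    """Decodes a word sequence conditioned on generated region features."""
    def __init__(self, vocab_size=10000, region_dim=2048, hidden=512):
        super().__init__()
        self.rnn = nn.LSTM(region_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, regions):              # (batch, seq, region_dim)
        h, _ = self.rnn(regions)
        return self.out(h)                   # (batch, seq, vocab_size) logits

class Discriminator(nn.Module):
    """Scores the relevance between each word and its generated region."""
    def __init__(self, word_dim=300, region_dim=2048, hidden=512):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(word_dim + region_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, word_embs, regions):
        pair = torch.cat([word_embs, regions], dim=-1)
        return torch.sigmoid(self.score(pair)).squeeze(-1)  # (batch, seq)

# Dummy forward pass with random word embeddings standing in for a sentence.
words = torch.randn(2, 8, 300)
regions = ImageGenerator()(words)
logits = SentenceGenerator()(regions)
relevance = Discriminator()(words, regions)
```

In the unpaired setting, the discriminator's word-region relevance score would drive the adversarial objective, pushing the image generator toward regions that genuinely match the words rather than captions that merely echo the corpus.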

Chairs:
Mahnoosh Mehrabani
