04 May 2020

The rise of generative adversarial networks has sparked broad interest in fashion image-to-image translation. However, previous methods do not perform well on cross-category translation tasks, e.g., translating jeans to skirts in fashion images: the translated skirts tend to lose the detailed texture of the original jeans, and the generated legs or arms often look unnatural. In this paper, we propose a novel approach, called DesignGAN, that uses landmark-guided attention and a similarity constraint mechanism to achieve fashion cross-category translation. Moreover, DesignGAN enables texture editing on arbitrary user-provided inputs, which can serve as an effective tool for fashion designers. Experiments on fashion datasets verify that DesignGAN is superior to other image-to-image translation methods.
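To make the idea of landmark-guided attention concrete, the following is a minimal PyTorch sketch of one common way to realize it: pose/landmark heatmaps are projected into a spatial attention mask that gates the generator's features. The module name, channel sizes, and the residual gating are illustrative assumptions, not the paper's exact DesignGAN architecture.

```python
# Illustrative sketch only; the actual DesignGAN design may differ.
import torch
import torch.nn as nn


class LandmarkGuidedAttention(nn.Module):
    """Reweights generator features with a spatial mask derived from
    landmark heatmaps, so translation focuses on garment/body regions."""

    def __init__(self, feat_channels: int, landmark_channels: int):
        super().__init__()
        # Project landmark heatmaps to a single-channel attention map in [0, 1].
        self.attn = nn.Sequential(
            nn.Conv2d(landmark_channels, feat_channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feats: torch.Tensor, landmarks: torch.Tensor) -> torch.Tensor:
        # feats:     (B, C, H, W) generator feature maps
        # landmarks: (B, K, H, W) landmark heatmaps at matching resolution
        mask = self.attn(landmarks)      # (B, 1, H, W) spatial attention
        return feats + feats * mask      # residual gating keeps global context


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)
    heatmaps = torch.rand(2, 18, 32, 32)  # e.g. 18 body landmarks (assumed)
    out = LandmarkGuidedAttention(64, 18)(feats, heatmaps)
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```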
