Dynamic Unilateral Dual Learning for Text to Image Synthesis

Zhiqiang Zhang, Jiayao Xu, Ryugo Morita, Wenxin Yu, Jinjia Zhou

Lecture 10 Oct 2023

Dual learning trains two mutually inverse tasks jointly so that each improves the other's performance. Two training paradigms currently exist in dual learning. The first directly trains two existing models in a dual manner to improve their performance; however, it cannot effectively guarantee that the selected models actually improve. In the second, the networks on both sides are designed manually; however, both networks perform poorly in the early stages of training, which easily leads to unsatisfactory results. Moreover, most dual learning research handles only conversions between the same data type and is powerless for conversions between different data types. To address these issues, we propose a paradigm called unilateral dual learning (UDL) and verify it in the text-to-image (T2I) synthesis field. In UDL, the network on one side is designed manually, while the other side calls a pre-trained model that guides the training of the manually designed network toward satisfactory results. Experimental results on the Oxford-102 Flowers and Caltech-UCSD Birds datasets demonstrate the feasibility of the proposed UDL paradigm in the T2I field, where it achieves excellent qualitative and quantitative performance.
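The UDL idea above can be illustrated with a deliberately tiny numerical sketch (not the paper's actual implementation): the "manually designed" network is stood in for by a trainable scalar map f(x) = w·x (the T2I side), and the "pre-trained" model is a frozen inverse g(y) = y/2 (the I2T side). Only the manually designed side is updated, by minimizing the dual reconstruction loss (g(f(x)) − x)²; the function names and constants here are illustrative assumptions.

```python
def pretrained_inverse(y):
    """Frozen, pre-trained inverse model (illustrative stand-in for an I2T model)."""
    return y / 2.0

def train_udl(data, w=0.5, lr=0.05, steps=200):
    """Fit the forward model's weight w by gradient descent on the dual loss
    mean_x (g(w * x) - x)^2, keeping the pre-trained inverse g frozen."""
    for _ in range(steps):
        # Analytic gradient of the dual loss w.r.t. w:
        # d/dw (w*x/2 - x)^2 = 2*(w*x/2 - x)*(x/2) = (w/2 - 1)*x^2
        grad = sum((w / 2.0 - 1.0) * x * x for x in data) / len(data)
        w -= lr * grad  # only the manually designed side is updated ("unilateral")
    return w

w = train_udl([1.0, 2.0, 3.0])
# Since the frozen inverse halves its input, the dual loss is minimized at w = 2.
```

Because the pre-trained side is already competent from the start, the trainable side receives a useful learning signal even in the early stages of training, which is the weakness UDL targets in the fully hand-designed dual setup.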
