  • SPS Members: Free
  • IEEE Members: $11.00
  • Non-members: $15.00
    Length: 01:11:48
25 Apr 2024

In recent years, we have witnessed impressive progress in cross-modality and cross-domain content generation based on generative learning approaches. In this webinar, we will introduce image-to-image translation (I2I), which aims to transfer images from a source domain to a target domain while preserving their content representations. I2I has gained significant traction and achieved remarkable advances in recent years due to its broad applicability in computer vision and image processing tasks, including but not limited to image synthesis, segmentation, style transfer, restoration, and pose estimation. We will begin with an introduction to well-established generative models, including the variational autoencoder (VAE), the generative adversarial network (GAN), autoregressive (AR) models, and diffusion models. Next, we will provide a detailed summary of image-to-image translation techniques, categorizing the cross-domain image generation problem into two main sets of tasks: supervised cross-domain image generation and unsupervised/self-supervised cross-domain image generation. We will present a detailed taxonomy of cross-domain image generation based on different choices of model architecture, model optimization, and sources of information, such as few-shot or multi-modal image generation. In closing, we will give a concise overview of recent progress and upcoming research directions.
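As a minimal illustration of the unsupervised setting mentioned above (this sketch is not part of the webinar materials): unpaired I2I methods in the CycleGAN family enforce a cycle-consistency loss, requiring that an image mapped to the target domain and back reconstructs the original. The "generators" below are toy linear maps standing in for deep networks, purely to make the round-trip idea concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def g_xy(x, w):
    """Toy generator mapping source domain X to target domain Y."""
    return x @ w

def g_yx(y, w_back):
    """Toy generator mapping target domain Y back to source domain X."""
    return y @ w_back

def cycle_consistency_loss(x, w, w_back):
    """L1 reconstruction error after the round trip X -> Y -> X."""
    x_reconstructed = g_yx(g_xy(x, w), w_back)
    return np.mean(np.abs(x - x_reconstructed))

# When the backward map exactly inverts the forward map, the cycle
# loss is (numerically) zero; training pushes real generators toward
# this behavior on unpaired data.
x = rng.normal(size=(4, 3))          # a small batch of "images" in X
w = rng.normal(size=(3, 3))          # forward generator parameters
loss = cycle_consistency_loss(x, w, np.linalg.inv(w))
```

In an actual unsupervised I2I model, this cycle term is combined with adversarial losses in both domains, so the generators produce realistic outputs while remaining (approximately) invertible on content.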