Residual Swin Transformer Unet With Consistency Regularization For Automatic Breast Ultrasound Tumor Segmentation
Xianwei Zhuang, Xiner Zhu, Haoji Hu, Jincao Yao, Wei Li, Chen Yang, Liping Wang, Na Feng, Dong Xu
We introduce FewGAN, a generative model for producing novel, high-quality, and diverse images whose patch distribution lies in the joint patch distribution of a small number of N > 1 training samples. The method is, in essence, a hierarchical patch-GAN that applies quantization at the first, coarsest scale, in a similar fashion to VQ-GAN, followed by a pyramid of residual fully convolutional GANs at finer scales. Our key idea is to first use quantization to learn a fixed set of patch embeddings for the training images. We then use a separate set of side images to model the structure of generated images, using an autoregressive model trained on the learned patch embeddings of the training images. Using quantization at the coarsest scale allows the model to generate both conditional and unconditional novel images. Subsequently, a patch-GAN renders the fine details, resulting in high-quality images. In an extensive set of experiments, it is shown that FewGAN outperforms baselines both quantitatively and qualitatively.
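The quantization step at the coarsest scale can be illustrated with a minimal sketch: each patch embedding is mapped to its nearest entry in a learned codebook, as in VQ-GAN. The function name, shapes, and values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def quantize(patch_embeddings, codebook):
    """Map each patch embedding to its nearest codebook entry.

    patch_embeddings: (num_patches, dim) array of patch embeddings
    codebook: (num_codes, dim) array of learned code vectors
    Returns (indices, quantized), where quantized[i] = codebook[indices[i]].
    """
    # Squared Euclidean distance between every patch and every code.
    dists = ((patch_embeddings[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    indices = dists.argmin(axis=1)
    return indices, codebook[indices]

# Toy example: 4 patches with 2-D embeddings, a codebook of 3 codes.
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
patches = np.array([[0.1, -0.1], [0.9, 1.1], [1.9, 0.2], [0.2, 0.1]])
indices, quantized = quantize(patches, codebook)
```

In the model described above, the resulting discrete indices are what the autoregressive structure model is trained on, while the quantized vectors feed the finer-scale GAN pyramid.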