MULTI-POSE VIRTUAL TRY-ON VIA SELF-ADAPTIVE FEATURE FILTERING
Chenghu Du, Feng Yu, Minghua Jiang, Xiong Wei, Tao Peng, Xinrong Hu
SPS
With the growing popularity of virtual try-on, multi-pose try-on tasks attract researchers due to their higher commercial value. Prior methods lack an effective geometric deformation to preserve the details of the original image, resulting in the loss of many details in the head and garment regions. To address this problem, we propose a new multi-pose virtual try-on network that fits a garment to the corresponding area of a person in an arbitrary pose. First, the body-semantic distribution of the target pose is predicted from the target pose keypoints. Second, a Deformation Module (DM) warps the in-shop garment and the human body according to the target pose, resolving unnatural alignment and the loss of body details. Finally, a Filtering Synthesis Network (FSN) finely generates the human body in the given pose wearing the garment. In objective experiments on the MPV dataset, the proposed method outperforms state-of-the-art methods on quantitative metrics and preserves richer details in visual results.
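The three-stage pipeline described above can be sketched structurally as follows. This is a minimal illustrative sketch only: all function names, signatures, and placeholder return values are assumptions made for exposition, not the authors' actual implementation (which would use trained neural networks for each stage).

```python
# Hypothetical sketch of the three-stage multi-pose try-on pipeline.
# All names and data shapes are illustrative assumptions; real stages
# would be learned networks operating on image tensors.

def predict_semantics(target_pose_points, person_image):
    """Stage 1: predict the body-semantic distribution (a segmentation
    layout) for the target pose from its keypoints."""
    # Placeholder: a real model runs a semantic-layout generator here.
    return {"layout": "target-pose segmentation", "pose": target_pose_points}

def deformation_module(garment_image, person_image, target_pose_points):
    """Stage 2 (DM): warp the in-shop garment and the human body toward
    the target pose to avoid unnatural alignment and detail loss."""
    # Placeholder: a real DM estimates a geometric warp (e.g. TPS/flow).
    return {"warped_garment": garment_image, "warped_body": person_image}

def filtering_synthesis_network(semantics, warped):
    """Stage 3 (FSN): fuse the warped garment and body under the
    predicted semantic layout and synthesize the final try-on image."""
    return {"image": (semantics["layout"], warped["warped_garment"])}

def try_on(person_image, garment_image, target_pose_points):
    """End-to-end wiring of the three stages."""
    semantics = predict_semantics(target_pose_points, person_image)
    warped = deformation_module(garment_image, person_image, target_pose_points)
    return filtering_synthesis_network(semantics, warped)
```

The sketch only conveys the data flow: pose keypoints drive both the semantic prediction and the deformation, and the synthesis stage consumes the outputs of both earlier stages.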