FINE-GRAINED GARMENT PARSING: A BODY GENERATION APPROACH
Peng Zhang, Yuwei Zhang, Shan Huang, Zhi Wang
SPS
Current human parsing methods segment an image into semantic parts such as background, body parts, and garments. A major limitation of these methods is that they cannot provide fine-grained garment segmentation (e.g., separating the left and right sleeves), mainly because no dataset exists with such fine-grained semantic garment part labels. To tackle this, we propose a body generation approach to fine-grained garment parsing. Specifically, we first use a body generation module, based on image inpainting, to locate fine-grained garment parts by the positions of the generated body parts; for example, the left sleeve is assumed to cover the left arm. We then extract the garment parts from the original whole garment according to these positions. In experiments on a public dataset of top-clothing images, our solution effectively separates a top garment into a left sleeve, a right sleeve, and a front, whereas state-of-the-art solutions parse it as a whole.
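The second step of the pipeline — carving the whole-garment region into fine-grained parts by where the generated body parts lie — can be sketched as a simple mask intersection. This is a minimal illustration, not the paper's implementation: it assumes binary masks are already available for the whole garment (from a coarse parser) and for each generated body part (from the inpainting module), and the function and label names are hypothetical.

```python
import numpy as np

def split_garment(garment_mask, body_part_masks):
    """Assign each garment pixel to the overlapping generated body part.

    garment_mask: (H, W) bool array, the whole-garment region from a
                  coarse human parser.
    body_part_masks: dict mapping a part label (e.g. 'left_sleeve',
                  'right_sleeve', 'front') to the (H, W) bool mask of
                  the corresponding generated body part.
    Returns a dict of (H, W) bool fine-grained garment part masks.
    """
    parts = {}
    claimed = np.zeros_like(garment_mask)  # pixels already assigned
    for label, body_mask in body_part_masks.items():
        # garment pixels covered by this body part and not yet claimed
        part = garment_mask & body_mask & ~claimed
        parts[label] = part
        claimed |= part
    return parts
```

In practice the paper's module produces the body parts by inpainting the person without clothing and parsing the result; the sketch above only shows how those part positions could then be used to cut the garment mask into disjoint pieces.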