LEARNING MONOCULAR MESH RECOVERY OF MULTIPLE BODY PARTS VIA SYNTHETICS
Yu Sun, Wenpeng Gao, Yili Fu, Tianyu Huang, Qian Bao, Wu Liu
In this paper, we focus on simultaneously recovering the 3D mesh of multiple body parts from a single RGB image. One of the main challenges is that available datasets with full-body 3D annotations are limited, which limits the generalization ability of existing learning-based methods. Existing optimization-based methods iteratively fit the 3D mesh to the 2D pose, which is very time-consuming. To address these limitations, we propose to integrate multiple 3D single-body-part datasets to create a highly diverse whole-body 3D motion space for learning from controllable synthetics. Compared with learning-based approaches, the proposed method greatly alleviates the reliance on training data. Compared with optimization-based approaches, the proposed method is a hundred times faster. Our proposed method also outperforms previous state-of-the-art methods on the CMU Panoptic dataset.
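The abstract does not spell out how the single-body-part datasets are combined, so the following is only a minimal sketch of the general idea: independently sampling body, hand, and face/expression parameters from separate (here randomly generated, purely illustrative) pools and concatenating them into a whole-body parameter vector that an SMPL-X-style parametric model could turn into a synthetic training mesh. All names, pool sizes, and dimensions below are assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): composing per-part pose samples from
# separate single-part datasets into synthetic whole-body pose parameters,
# assuming an SMPL-X-style axis-angle parameterization.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pools of per-part parameters standing in for a body-pose dataset,
# a hand-pose dataset, and a face/expression dataset.
body_poses = rng.standard_normal((10_000, 21 * 3))     # 21 body joints, axis-angle
hand_poses = rng.standard_normal((5_000, 2 * 15 * 3))  # 15 joints per hand
face_expr  = rng.standard_normal((3_000, 10))          # 10 expression coefficients

def sample_whole_body_pose() -> np.ndarray:
    """Draw one synthetic whole-body pose by mixing independent per-part samples."""
    body = body_poses[rng.integers(len(body_poses))]
    hands = hand_poses[rng.integers(len(hand_poses))]
    expr = face_expr[rng.integers(len(face_expr))]
    # Concatenate into a single whole-body parameter vector; a parametric model
    # such as SMPL-X would map this to a mesh for rendering synthetic images.
    return np.concatenate([body, hands, expr])

synthetic_batch = np.stack([sample_whole_body_pose() for _ in range(32)])
print(synthetic_batch.shape)  # (32, 163)
```

Because each part is sampled independently, the combinations cover a much wider whole-body motion space than any single annotated full-body dataset, which is the controllable-synthetics idea the abstract refers to.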