Deep3DSketch: 3D Modeling from Free-Hand Sketches with View- and Structural-Aware Adversarial Training
Tianrun Chen (Zhejiang University); Chenglong Fu (Huzhou University); Lanyun Zhu (Singapore University of Technology and Design); Mao Papa (Moxin (Huzhou) Technology Co., LTD); Ying Zang (Huzhou University); Jia Zhang (Yangzhou Polytechnic College); Lingyun Sun (Zhejiang University)
This work investigates 3D modeling from a single free-hand sketch, one of the most natural ways humans express ideas. Although sketch-based 3D modeling can make the modeling process drastically more accessible, the sparsity and ambiguity of sketches pose significant challenges for creating high-fidelity 3D models that reflect the creators' ideas. We propose a view- and structural-aware deep learning approach, Deep3DSketch, which tackles this ambiguity and fully exploits the sparse information in sketches, with an emphasis on structural information. Specifically, we introduce random pose sampling on both 3D shapes and 2D silhouettes, together with an adversarial training scheme built on an effective progressive discriminator, to facilitate learning of shape structures. Extensive experiments demonstrate the effectiveness of our approach, which outperforms existing methods and achieves state-of-the-art (SOTA) performance on both synthetic and real datasets.
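To make the training scheme concrete, below is a minimal PyTorch sketch of the view-aware adversarial idea described in the abstract: silhouettes of the predicted shape are rendered from randomly sampled camera poses and scored by a silhouette discriminator against real silhouettes. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the pose ranges, the network architecture, and the names `sample_random_poses` and `SilhouetteDiscriminator` are hypothetical, the differentiable mesh renderer is stubbed out with placeholder tensors, and the progressive growing of the discriminator is omitted for brevity.

```python
# Minimal, hypothetical sketch of view-aware adversarial training:
# render silhouettes of the predicted shape from random viewpoints and
# train a discriminator to tell them apart from real silhouettes.
# A differentiable renderer (e.g. a soft rasterizer) would produce
# `fake_sil` from the predicted mesh; here it is stubbed with tensors.
import torch
import torch.nn as nn
import torch.nn.functional as F


def sample_random_poses(batch_size):
    """Sample random camera poses (azimuth, elevation in degrees).
    The ranges are illustrative assumptions."""
    azimuth = torch.rand(batch_size) * 360.0
    elevation = torch.rand(batch_size) * 60.0 - 30.0
    return azimuth, elevation


class SilhouetteDiscriminator(nn.Module):
    """Discriminator over 2D silhouettes. A progressive variant would
    grow its input resolution during training (omitted here)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
        )

    def forward(self, x):
        return self.net(x)  # raw logits, one score per silhouette


def adversarial_losses(disc, fake_sil, real_sil):
    """Non-saturating GAN losses on rendered (fake) vs. dataset (real)
    silhouettes; the generator loss flows back into the mesh predictor."""
    real_logits = disc(real_sil)
    fake_logits = disc(fake_sil.detach())
    d_loss = (
        F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
        + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    )
    gen_logits = disc(fake_sil)
    g_loss = F.binary_cross_entropy_with_logits(gen_logits, torch.ones_like(gen_logits))
    return d_loss, g_loss


if __name__ == "__main__":
    disc = SilhouetteDiscriminator()
    azimuth, elevation = sample_random_poses(4)  # poses for the renderer stub
    fake = torch.rand(4, 1, 64, 64, requires_grad=True)   # stand-in: rendered silhouettes
    real = (torch.rand(4, 1, 64, 64) > 0.5).float()       # stand-in: dataset silhouettes
    d_loss, g_loss = adversarial_losses(disc, fake, real)
    print(f"d_loss={d_loss.item():.3f}  g_loss={g_loss.item():.3f}")
```

In this sketch, sampling poses independently for the rendered shapes and the reference silhouettes is what makes the discriminator view-aware: it must judge structural plausibility rather than match a single fixed viewpoint.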