Ultra Real-Time Portrait Matting via Parallel Semantic Guidance
Xin Huang (University of Maryland, Baltimore County); Jiake Xie (PicUP.Ai); Bo Xu (OPPO Research Institute); Han Huang (OPPO Research Institute); Ziwen Li (OPPO Research Institute); Cheng Lu (XPENG); Yandong Guo (OPPO Research Institute); Yong Tang (PicUP.Ai)
Most existing portrait matting models either require expensive auxiliary inputs or decompose the task into sub-tasks that are usually resource-hungry. These challenges limit their application on low-power computing devices. In this paper, we propose an ultra-lightweight portrait matting network via parallel semantic guidance (PSGNet) for real-time portrait matting without any auxiliary inputs. PSGNet leverages parallel multi-level semantic information to efficiently guide the feature representations, replacing the traditional sequential semantic hints produced by objective decomposition. We also introduce an efficient fusion module that effectively combines the parallel branches of PSGNet to minimize representation redundancy. Comprehensive experiments demonstrate that PSGNet achieves remarkable performance on both synthetic and real-world images. Thanks to its ultra-small number of parameters, PSGNet can process frames at 100 fps, making it deployable on low-power computing devices without compromising real-time portrait matting performance.
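Since no code accompanies this abstract, the following is a minimal PyTorch sketch of the general idea it describes: several parallel branches each derive a coarse semantic hint and use it to gate their features, and a fusion module merges the branches with a learned projection. All class names, layer choices, and channel sizes here are illustrative assumptions, not the authors' PSGNet implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticGuidanceBranch(nn.Module):
    """Hypothetical parallel branch: predicts a coarse foreground
    probability map and uses it to gate (guide) the input features."""
    def __init__(self, in_ch: int, mid_ch: int):
        super().__init__()
        self.semantic = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 3, padding=1),
            nn.BatchNorm2d(mid_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, 1, 1),
            nn.Sigmoid(),  # coarse semantic hint in [0, 1]
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        guide = self.semantic(feat)   # [B, 1, H, W] semantic hint
        return feat * guide           # semantically guided features

class FusionModule(nn.Module):
    """Hypothetical fusion: upsample all branches to a common size and
    merge them with a 1x1 projection to reduce cross-branch redundancy."""
    def __init__(self, branch_chans: list[int], out_ch: int):
        super().__init__()
        self.proj = nn.Conv2d(sum(branch_chans), out_ch, 1)

    def forward(self, feats: list[torch.Tensor], size) -> torch.Tensor:
        ups = [F.interpolate(f, size=size, mode="bilinear",
                             align_corners=False) for f in feats]
        return self.proj(torch.cat(ups, dim=1))

# Example: two parallel branches at different scales, fused to full resolution.
b1, b2 = SemanticGuidanceBranch(32, 16), SemanticGuidanceBranch(64, 16)
fuse = FusionModule([32, 64], out_ch=32)
f_hi = torch.randn(1, 32, 128, 128)   # shallow, high-resolution features
f_lo = torch.randn(1, 64, 32, 32)     # deep, low-resolution features
out = fuse([b1(f_hi), b2(f_lo)], size=(128, 128))  # -> [1, 32, 128, 128]
```

Because the branches run in parallel rather than feeding one another sequentially, each can stay very small, which is consistent with the abstract's claim of an ultra-small parameter count and 100 fps throughput.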