LEARNING LIGHT FIELD SYNTHESIS WITH MULTI-PLANE IMAGES: SCENE ENCODING AS A RECURRENT SEGMENTATION TASK
Tomás Völker, Guillaume Boisson, Bertrand Chupeau
SPS
In this paper we address the problem of view synthesis from large-baseline light fields by turning a sparse set of input views into a Multi-Plane Image (MPI). Because available datasets are scarce, we propose a lightweight network that does not require extensive training. Unlike the latest approaches, our model does not learn to estimate RGB layers but only encodes the scene geometry within the MPI alpha layers, which comes down to a segmentation task. A Learned Gradient Descent (LGD) framework cascades the same convolutional network in a recurrent fashion to refine the resulting volumetric representation. Thanks to its low number of parameters, our model trains successfully on a small light field video dataset and produces visually appealing results. It also generalizes conveniently with respect to the number of input views, the number of depth planes in the MPI, and the number of refinement iterations.
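To make the two ingredients of the abstract concrete, the following is a minimal numpy sketch: back-to-front over-compositing of an MPI's RGB and alpha layers into a rendered view, and an LGD-style recurrence in which one shared update function is applied repeatedly to refine the alpha volume. The names (`composite_mpi`, `refine_alpha`, `update_net`) are illustrative assumptions, and the placeholder update stands in for the paper's shared convolutional network.

```python
import numpy as np

def composite_mpi(rgb, alpha):
    """Render an MPI by back-to-front over-compositing.

    rgb:   (D, H, W, 3) colour layers, back plane first
    alpha: (D, H, W, 1) opacity layers in [0, 1]
    """
    out = np.zeros(rgb.shape[1:])
    for d in range(rgb.shape[0]):  # iterate from the back plane to the front
        out = rgb[d] * alpha[d] + out * (1.0 - alpha[d])
    return out

def refine_alpha(alpha, update_net, n_iters=3):
    """LGD-style recurrence: the SAME update function is cascaded
    n_iters times to refine the alpha volume (geometry encoding).
    `update_net` is a placeholder for the shared convolutional model."""
    for _ in range(n_iters):
        alpha = np.clip(alpha + update_net(alpha), 0.0, 1.0)
    return alpha

# Toy usage with random colours and a dummy update rule.
D, H, W = 4, 2, 2
rgb = np.random.rand(D, H, W, 3)
alpha = np.full((D, H, W, 1), 0.5)
alpha = refine_alpha(alpha, lambda a: 0.1 * (1.0 - a))
img = composite_mpi(rgb, alpha)
assert img.shape == (H, W, 3)
```

Because each compositing step is a convex combination of values in [0, 1], the rendered image stays in [0, 1]; the recurrent structure is what lets the trained model vary the number of refinement iterations at test time.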