Pipeline for Real-Time Video View Synthesis
Athanasios Lelis, Nicholas Vretos, Petros Daras
Immersive experiences of real captured scenes remain challenging due to the amount of data and the computational complexity required to achieve real-time performance.
Indeed, Light Field (LF) videos are the required input for producing a sense of depth through motion parallax, but manipulating such multi-view content is cumbersome, and there is a lack of end-to-end pipelines that can effectively edit and render it.
In this work, we propose a compact representation for LF videos in the form of an atlas. We detail the pipeline for computing the atlas from captured LFs and demonstrate its efficiency for real-time view synthesis with limited computing-power requirements. Finally, we demonstrate how our atlases can be merged to produce real-time fusion of LF videos.
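The listing above gives only the abstract, so the atlas layout and renderer are not specified; purely as an illustrative sketch, atlas-based light-field rendering can be pictured as packing a camera grid of views into one texture and blending the nearest views for each novel viewpoint. Everything below (the grid layout, pack_atlas, view_from_atlas, synthesize, and the omission of depth-aware reprojection, which a real pipeline would need for true motion parallax) is an assumption, not the authors' method.

import numpy as np

def pack_atlas(views: np.ndarray) -> np.ndarray:
    """Tile a (rows, cols, H, W, 3) grid of LF views into one 2-D atlas image.
    Hypothetical layout: views stacked row-major, so grid view (r, c)
    occupies the pixel block [r*H:(r+1)*H, c*W:(c+1)*W]."""
    rows, cols, h, w, ch = views.shape
    return views.transpose(0, 2, 1, 3, 4).reshape(rows * h, cols * w, ch)

def view_from_atlas(atlas, h, w, r, c):
    """Fetch the sub-image of grid view (r, c) from the packed atlas."""
    return atlas[r * h:(r + 1) * h, c * w:(c + 1) * w]

def synthesize(atlas, rows, cols, h, w, u, v):
    """Naive novel view at fractional grid position (u, v): bilinear blend
    of the four nearest captured views (no parallax correction)."""
    r0, c0 = int(np.floor(u)), int(np.floor(v))
    r1, c1 = min(r0 + 1, rows - 1), min(c0 + 1, cols - 1)
    fu, fv = u - r0, v - c0
    top = (1 - fv) * view_from_atlas(atlas, h, w, r0, c0) + fv * view_from_atlas(atlas, h, w, r0, c1)
    bot = (1 - fv) * view_from_atlas(atlas, h, w, r1, c0) + fv * view_from_atlas(atlas, h, w, r1, c1)
    return (1 - fu) * top + fu * bot

# Toy usage: a 3x3 grid of 64x64 views, rendered at an in-between viewpoint.
views = np.random.rand(3, 3, 64, 64, 3).astype(np.float32)
atlas = pack_atlas(views)
novel = synthesize(atlas, rows=3, cols=3, h=64, w=64, u=1.25, v=0.5)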