  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 10:46
27 Oct 2020

A light field contains information in four dimensions, two spatial and two angular. Representing a light field by sampling it with a fixed number of pixels implies an inherent trade-off between angular resolution and spatial resolution, one apparently fixed at the time of capture. To enable flexible trade-offs between spatial and angular resolution after the fact, in this paper we apply techniques from super resolution in an integrated fashion. Our approach exploits the similarity between light field super resolution (LFSR) and single image super resolution (SISR) and proposes a neural network framework that can carry out flexible super resolution tasks. We present concrete instances of the framework for center-view spatial LFSR, full-views spatial LFSR, and combined spatial and angular LFSR. Experiments on both synthetic and real-world datasets show that the center-view and full-views approaches outperform the state-of-the-art spatial LFSR by over 1 dB in PSNR, and that the combined approach achieves performance comparable to state-of-the-art spatial LFSR algorithms.
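The fixed-pixel-budget trade-off described in the abstract, and the PSNR metric used to report results, can be sketched as follows. This is a minimal illustration, not code from the paper: the array shapes and the `psnr` helper are assumptions chosen only to make the ideas concrete.

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB (higher is better), the metric
    the abstract uses to compare super-resolution results."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# A light field sampled as a 4D array: two angular axes (u, v) and two
# spatial axes (s, t). With a fixed pixel budget at capture time, finer
# angular sampling forces coarser spatial sampling, and vice versa.
budget = 9 * 9 * 512 * 512                 # total pixels available at capture
lf_angular = np.zeros((9, 9, 512, 512))    # many views, lower spatial detail
lf_spatial = np.zeros((3, 3, 1536, 1536))  # few views, higher spatial detail
assert lf_angular.size == budget and lf_spatial.size == budget
```

Super-resolution methods such as the one presented here aim to relax this capture-time choice by reconstructing the missing spatial or angular samples afterwards.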
