Single-Shot View Synthesis using a Multiplexed Light Field Camera

Shamus Li

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2024-192
November 13, 2024

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-192.pdf

Recent advances in imaging technology have shifted from traditional 2D image capture toward more sophisticated methods that aim to capture additional dimensions of a scene, such as its spatial and temporal structure. We present an approach to single-shot view synthesis using a multiplexed light field camera, in which sub-images are designed to overlap with one another to achieve higher spatial and temporal resolution than conventional light field imaging. A single capture from our optical system is sufficient for novel view synthesis.

Our system captures light fields through a lens array that intentionally overlaps views, enhancing both resolution and depth of field. This multiplexing approach is complemented by a calibration process that aligns virtual camera poses, facilitating accurate reconstruction without repeated pose estimation. We modify the forward model of Gaussian Splatting to implicitly represent and reconstruct the light field from the multiplexed measurements.
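
As an illustrative sketch of the modified forward model (the notation below is our assumption; the report itself defines the exact formulation), the multiplexed measurement can be viewed as each sensor pixel summing the Gaussian Splatting renders of every sub-image that covers it:

    I(u) = \sum_{k=1}^{K} M_k(u) \, (R_k[G])(W_k(u))

Here G denotes the set of 3D Gaussians representing the scene, R_k[G] is the render from the k-th calibrated virtual camera pose, W_k maps sensor coordinates u into the k-th sub-image, and M_k(u) indicates whether pixel u falls inside that sub-image. With no overlap each pixel receives exactly one term of the sum; multiplexing corresponds to pixels where two or more terms contribute.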

We present synthetic and real-world experimental results that demonstrate the efficacy of our system in generating wide-angle, photorealistic 3D reconstructions of small scenes, and discuss extensions to a physical system. We achieve an optical field of view of more than 70 degrees and are able to accurately reconstruct more than 120 degrees with a single shot. Our physical system achieves 1.9 rays/pixel of multiplexing, a 90% increase in pixel information over a light field imaging system with no overlap, and we demonstrate higher-quality reconstructions on synthetic scenes with up to 2.5 rays/pixel of multiplexing compared with both traditional light field imaging and monocular Gaussian Splatting. Our method represents a potential step forward in the practical application of view synthesis, particularly in dynamic environments with few cameras.
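
To put the multiplexing figures in context (our reading of the reported numbers, taking a non-overlapping light field capture as the 1 ray/pixel baseline):

    (1.9 - 1.0) / 1.0 = 0.9, i.e., 90% more recorded rays per pixel than without overlap;
    (2.5 - 1.0) / 1.0 = 1.5, i.e., a 150% increase for the densest synthetic configuration.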

Advisor: Laura Waller

\"Edit"; ?>


BibTeX citation:

@mastersthesis{Li:EECS-2024-192,
    Author = {Li, Shamus},
    Title = {Single-Shot View Synthesis using a Multiplexed Light Field Camera},
    School = {EECS Department, University of California, Berkeley},
    Year = {2024},
    Month = {Nov},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-192.html},
    Number = {UCB/EECS-2024-192},
    Abstract = {Recent advances in imaging technology have shifted from traditional 2D image capture toward more sophisticated methods that aim to capture additional dimensions of a scene, such as its spatial and temporal structure. We present an approach to single-shot view synthesis using a multiplexed light field camera, in which sub-images are designed to overlap with one another to achieve higher spatial and temporal resolution than conventional light field imaging. A single capture from our optical system is sufficient for novel view synthesis.

Our system captures light fields through a lens array that intentionally overlaps views, enhancing both resolution and depth of field. This multiplexing approach is complemented by a calibration process that aligns virtual camera poses, facilitating accurate reconstruction without repeated pose estimation. We modify the forward model of Gaussian Splatting to implicitly represent and reconstruct the light field from the multiplexed measurements.

We present synthetic and real-world experimental results that demonstrate the efficacy of our system in generating wide-angle, photorealistic 3D reconstructions of small scenes, and discuss extensions to a physical system. We achieve an optical field of view of more than 70 degrees and are able to accurately reconstruct more than 120 degrees with a single shot. Our physical system achieves 1.9 rays/pixel of multiplexing, a 90\% increase in pixel information over a light field imaging system with no overlap, and we demonstrate higher-quality reconstructions on synthetic scenes with up to 2.5 rays/pixel of multiplexing compared with both traditional light field imaging and monocular Gaussian Splatting. Our method represents a potential step forward in the practical application of view synthesis, particularly in dynamic environments with few cameras.}
}

EndNote citation:

%0 Thesis
%A Li, Shamus
%T Single-Shot View Synthesis using a Multiplexed Light Field Camera
%I EECS Department, University of California, Berkeley
%D 2024
%8 November 13
%@ UCB/EECS-2024-192
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-192.html
%F Li:EECS-2024-192