Scene Representations for View Synthesis with Deep Learning

Pratul Srinivasan

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2020-214

December 17, 2020

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-214.pdf

In this dissertation, we investigate the question of how 3D scenes should be represented so that the representation can be effectively estimated from standard photographs and then used to synthesize images of the same scene from novel, unobserved viewpoints. Recovering photorealistic scene representations from images has been a longstanding goal of computer vision and graphics, and has typically been addressed using representations from standard computer graphics pipelines, such as triangle meshes, which are not particularly amenable to end-to-end optimization for maximizing the fidelity of rendered images. Instead, we advocate for scene representations that are specifically well-suited to use in differentiable deep learning pipelines. We explore the efficacy of various representations for view synthesis tasks, including synthesizing local views around a single input image, extrapolating views around a pair of nearby input images, and interpolating novel views from a set of unstructured images. We present scene representations that succeed at these tasks and share two common properties: they represent scenes as volumes, and they avoid the poor scaling properties of regularly sampled voxel grids by using compressed or parameter-efficient volume representations.
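The abstract's notion of representing a scene "as a volume" can be made concrete with the standard emission-absorption rendering model: each point sampled along a camera ray has a density and a color, and the pixel color for that ray is the transmittance-weighted sum of the sampled colors. The sketch below is an illustrative NumPy implementation of that compositing step, not code from the dissertation; the names composite_along_ray, sigmas, colors, and deltas are hypothetical. Because the computation is only sums, products, and exponentials, writing the same arithmetic in an automatic-differentiation framework makes the rendering pipeline differentiable, which is what allows a volumetric representation to be optimized end-to-end against captured photographs.

import numpy as np

def composite_along_ray(sigmas, colors, deltas):
    # Opacity of each sample over its interval (emission-absorption model).
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: fraction of light that survives all earlier samples.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Per-sample contribution weights; they sum to at most one.
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# Toy usage: 64 samples along one ray through a made-up volume.
n = 64
sigmas = np.linspace(0.0, 2.0, n)            # density increases with depth
colors = np.tile([[0.8, 0.3, 0.2]], (n, 1))  # constant reddish emission
deltas = np.full(n, 1.0 / n)                 # equal spacing between samples
print(composite_along_ray(sigmas, colors, deltas))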

Advisors: Ravi Ramamoorthi and Ren Ng


BibTeX citation:

@phdthesis{Srinivasan:EECS-2020-214,
    Author= {Srinivasan, Pratul},
    Title= {Scene Representations for View Synthesis with Deep Learning},
    School= {EECS Department, University of California, Berkeley},
    Year= {2020},
    Month= {Dec},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-214.html},
    Number= {UCB/EECS-2020-214},
    Abstract= {In this dissertation, we investigate the question of how 3D scenes should be represented so that the representation can be effectively estimated from standard photographs and then used to synthesize images of the same scene from novel, unobserved viewpoints. Recovering photorealistic scene representations from images has been a longstanding goal of computer vision and graphics, and has typically been addressed using representations from standard computer graphics pipelines, such as triangle meshes, which are not particularly amenable to end-to-end optimization for maximizing the fidelity of rendered images. Instead, we advocate for scene representations that are specifically well-suited to use in differentiable deep learning pipelines. We explore the efficacy of various representations for view synthesis tasks, including synthesizing local views around a single input image, extrapolating views around a pair of nearby input images, and interpolating novel views from a set of unstructured images. We present scene representations that succeed at these tasks and share two common properties: they represent scenes as volumes, and they avoid the poor scaling properties of regularly sampled voxel grids by using compressed or parameter-efficient volume representations.},
}

EndNote citation:

%0 Thesis
%A Srinivasan, Pratul 
%T Scene Representations for View Synthesis with Deep Learning
%I EECS Department, University of California, Berkeley
%D 2020
%8 December 17
%@ UCB/EECS-2020-214
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-214.html
%F Srinivasan:EECS-2020-214