LoopNeRF: Exploring Temporal Compression for 3D Video Textures
Alexander Kristoffersen
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2023-118
May 11, 2023
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-118.pdf
Neural Radiance Fields (NeRFs) have been shown to be effective representations for static scenes, with the newest methods able to generate photo-realistic renderings of novel views from casual captures. Dynamic view synthesis (DVS) via NeRF representations, however, has not seen similar results, with many state-of-the-art methods requiring complicated capture setups such as multi-view, synchronized captures. In this paper, we explore a related problem: given a trained dynamic NeRF, can we distill the dynamism of the scene into a more compact representation? To this end, we present LoopNeRF, a method to compress a dynamic NeRF trained on a long monocular video capture by significantly reducing its time-dependent parameters. The resulting model can then be used to generate infinitely looping dynamic renderings from novel views: 3D Video Textures.
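As a rough illustration of the looping playback the abstract describes, the sketch below wraps the time coordinate of a time-conditioned model so that rendering can continue indefinitely. The model class, its render() method, and all names here are hypothetical placeholders for a trained dynamic-NeRF backbone, not LoopNeRF's actual interface.

# Minimal, hypothetical sketch of "3D Video Texture" playback: an endless
# loop falls out of wrapping the time coordinate of a time-conditioned model.
# DummyDynamicNeRF and render() are illustrative stand-ins, not the paper's API.

def looped_time(frame_idx: int, loop_frames: int) -> float:
    """Map an unbounded frame index onto a normalized time in [0, 1)."""
    return (frame_idx % loop_frames) / loop_frames

class DummyDynamicNeRF:
    """Stand-in for a trained time-conditioned radiance field."""
    def render(self, camera_pose, t: float) -> str:
        # A real model would volume-render the scene at time t from the
        # given pose; here we only report what would be rendered.
        return f"rendered pose={camera_pose} at t={t:.3f}"

model = DummyDynamicNeRF()
pose = "novel_view_0"  # placeholder for a 4x4 camera-to-world matrix
for k in range(8):     # k may grow without bound; t cycles with period 4
    print(model.render(pose, looped_time(k, loop_frames=4)))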
Advisor: Joseph Gonzalez
BibTeX citation:
@mastersthesis{Kristoffersen:EECS-2023-118,
  Author   = {Kristoffersen, Alexander},
  Title    = {LoopNeRF: Exploring Temporal Compression for 3D Video Textures},
  School   = {EECS Department, University of California, Berkeley},
  Year     = {2023},
  Month    = {May},
  Url      = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-118.html},
  Number   = {UCB/EECS-2023-118},
  Abstract = {Neural Radiance Fields (NeRFs) have been shown to be effective representations for static scenes, with the newest methods able to generate photo-realistic renderings of novel views from casual captures. Dynamic view synthesis (DVS) via NeRF representations, however, has not seen similar results, with many state-of-the-art methods requiring complicated capture setups such as multi-view, synchronized captures. In this paper, we explore a related problem: given a trained dynamic NeRF, can we distill the dynamism of the scene into a more compact representation? To this end, we present LoopNeRF, a method to compress a dynamic NeRF trained on a long monocular video capture by significantly reducing its time-dependent parameters. The resulting model can then be used to generate infinitely looping dynamic renderings from novel views: 3D Video Textures.},
}
EndNote citation:
%0 Thesis
%A Kristoffersen, Alexander
%T LoopNeRF: Exploring Temporal Compression for 3D Video Textures
%I EECS Department, University of California, Berkeley
%D 2023
%8 May 11
%@ UCB/EECS-2023-118
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-118.html
%F Kristoffersen:EECS-2023-118