Generative Models for Image and Long Video Synthesis
Tim Brooks
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2023-100
May 11, 2023
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-100.pdf
In this thesis, I present essential ingredients for making image and video generative models useful for general visual content creation through three contributions. First, I will present research on long video generation. This work proposes a network architecture and training paradigm that enables learning long-term temporal patterns from videos, a key challenge to advancing video generation from short clips to longer-form coherent videos. Next, I will present research on generating images of scenes conditioned on human poses. This work showcases the ability of generative models to represent relationships between humans and their environments, and emphasizes the importance of learning from large and complex datasets of daily human activity. Lastly, I will present a method for teaching generative models to follow image editing instructions by combining the abilities of large language models and text-to-image models to create supervised training data. Following instructions is an important step that will allow generative models of visual data to become more helpful to people. Together these works advance the capabilities of generative models for synthesizing images and long videos.
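The instruction-following editing work is described here only at a high level. As a rough illustration (not the thesis implementation), the sketch below shows one way a large language model and a text-to-image model could be combined to produce a single supervised editing example, as the abstract describes. The llm and t2i callables, the prompt wording, and the returned fields are assumptions made for this example.

    from typing import Any, Callable

    def make_editing_example(caption: str,
                             llm: Callable[[str], str],
                             t2i: Callable[[str], Any]) -> dict:
        """Build one supervised editing example from an image caption.

        llm(prompt) -> str and t2i(caption) -> image are hypothetical
        callables supplied by the user; any LLM or text-to-image model
        with these interfaces could be plugged in.
        """
        reply = llm(
            "Given the image caption below, write an edit instruction and the "
            "caption of the edited image, one per line.\n"
            f"Caption: {caption}"
        )
        # Assumes the model answers with the instruction on the first line
        # and the edited caption on the second.
        instruction, edited_caption = [line.strip() for line in reply.splitlines()[:2]]

        # Render "before" and "after" images, yielding a supervised
        # (input image, instruction, target image) triplet.
        return {
            "input_image": t2i(caption),
            "instruction": instruction,
            "edited_image": t2i(edited_caption),
        }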
Advisor: Alexei (Alyosha) Efros
BibTeX citation:
@phdthesis{Brooks:EECS-2023-100,
    Author = {Brooks, Tim},
    Title = {Generative Models for Image and Long Video Synthesis},
    School = {EECS Department, University of California, Berkeley},
    Year = {2023},
    Month = {May},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-100.html},
    Number = {UCB/EECS-2023-100},
    Abstract = {In this thesis, I present essential ingredients for making image and video generative models useful for general visual content creation through three contributions. First, I will present research on long video generation. This work proposes a network architecture and training paradigm that enables learning long-term temporal patterns from videos, a key challenge to advancing video generation from short clips to longer-form coherent videos. Next, I will present research on generating images of scenes conditioned on human poses. This work showcases the ability of generative models to represent relationships between humans and their environments, and emphasizes the importance of learning from large and complex datasets of daily human activity. Lastly, I will present a method for teaching generative models to follow image editing instructions by combining the abilities of large language models and text-to-image models to create supervised training data. Following instructions is an important step that will allow generative models of visual data to become more helpful to people. Together these works advance the capabilities of generative models for synthesizing images and long videos.}
}
EndNote citation:
%0 Thesis
%A Brooks, Tim
%T Generative Models for Image and Long Video Synthesis
%I EECS Department, University of California, Berkeley
%D 2023
%8 May 11
%@ UCB/EECS-2023-100
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-100.html
%F Brooks:EECS-2023-100