Towards A Machine Capable of Learning And Discovering Everything
Hao Liu
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2024-54
May 7, 2024
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-54.pdf
Large generative models have led to remarkable results and revolutionized artificial intelligence. In this dissertation, I will discuss my research on advancing the foundations of these models, centered on addressing the bottlenecks of learning from any existing data and the challenges of discovery beyond existing knowledge. First, I will describe our efforts to remove the context size limitations of the transformer architecture. Our modeling and training methodologies, including BlockwiseTransformer and RingAttention, allow near-infinite context sizes while maintaining scalability. I will then discuss applications of large contexts to learning world models and decision-making. This includes Large World Model, the world’s first AI with a million-token context for modeling text, images, and hour-long video at the same time. Next, I will introduce my research on discovery, which allows AI to discover data and learn from it. I will discuss our work on learning skills in gameplay without humans specifying domain knowledge, paving the road toward learning that goes beyond imitating existing data. Finally, I will envision the next generation of large generative models we should build, focusing on advances in efficient scaling, reasoning, and discovery in general domains.
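The abstract's claim that blockwise computation removes the context-size bottleneck rests on a well-known idea: attention can be accumulated one key/value block at a time with an online softmax, so the full context-length-squared score matrix is never materialized. The sketch below illustrates that idea only; it is a minimal NumPy toy, not the dissertation's BlockwiseTransformer or RingAttention implementation, and the function name and structure are illustrative assumptions.

```python
import numpy as np

def blockwise_attention(q, k, v, block_size):
    """Compute softmax(q @ k.T / sqrt(d)) @ v one key/value block at a time.

    An online softmax keeps per-query running statistics, so peak memory
    scales with block_size rather than the full sequence length.
    """
    n, d = q.shape
    scale = 1.0 / np.sqrt(d)
    m = np.full((n, 1), -np.inf)          # running max of scores per query
    l = np.zeros((n, 1))                  # running sum of exp(score - m)
    acc = np.zeros(v.shape[1] * n).reshape(n, v.shape[1])  # running weighted value sum
    for start in range(0, k.shape[0], block_size):
        kb = k[start:start + block_size]
        vb = v[start:start + block_size]
        s = q @ kb.T * scale              # scores for this block only
        m_new = np.maximum(m, s.max(axis=1, keepdims=True))
        correction = np.exp(m - m_new)    # rescale previous accumulators
        p = np.exp(s - m_new)
        l = l * correction + p.sum(axis=1, keepdims=True)
        acc = acc * correction + p @ vb
        m = m_new
    return acc / l
```

RingAttention extends this same blockwise accumulation across devices: each device holds one block of keys and values and passes it around a ring, overlapping communication with computation, which is what allows context to grow with the number of devices.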
Advisor: Pieter Abbeel
BibTeX citation:
@phdthesis{Liu:EECS-2024-54,
    Author = {Liu, Hao},
    Title = {Towards A Machine Capable of Learning And Discovering Everything},
    School = {EECS Department, University of California, Berkeley},
    Year = {2024},
    Month = {May},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-54.html},
    Number = {UCB/EECS-2024-54},
    Abstract = {Large generative models have led to remarkable results and revolutionized artificial intelligence. In this dissertation, I will discuss my research on advancing the foundations of these models, centered on addressing the bottlenecks of learning from any existing data and the challenges of discovery beyond existing knowledge. First, I will describe our efforts to remove the context size limitations of the transformer architecture. Our modeling and training methodologies, including BlockwiseTransformer and RingAttention, allow near-infinite context sizes while maintaining scalability. I will then discuss applications of large contexts to learning world models and decision-making. This includes Large World Model, the world’s first AI with a million-token context for modeling text, images, and hour-long video at the same time. Next, I will introduce my research on discovery, which allows AI to discover data and learn from it. I will discuss our work on learning skills in gameplay without humans specifying domain knowledge, paving the road toward learning that goes beyond imitating existing data. Finally, I will envision the next generation of large generative models we should build, focusing on advances in efficient scaling, reasoning, and discovery in general domains.},
}
EndNote citation:
%0 Thesis
%A Liu, Hao
%T Towards A Machine Capable of Learning And Discovering Everything
%I EECS Department, University of California, Berkeley
%D 2024
%8 May 7
%@ UCB/EECS-2024-54
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-54.html
%F Liu:EECS-2024-54