Ph.D. Dissertations - Sergey Levine

Deep Generative Models for Decision-Making and Control
Michael Janner [2023]

Neural Software Abstractions
Michael Chang [2023]

Offline Data-Driven Optimization: Benchmarks, Algorithms and Applications
Xinyang Geng [2023]

Reinforcement Learning from Static Datasets: Algorithms, Analysis, and Applications
Aviral Kumar [2023]

Building Assistive Sensorimotor Interfaces through Human-in-the-Loop Machine Learning
Siddharth Reddy [2022]

Large-Scale Real-World Robotic Manipulation Using Diverse Data
Frederik Ebert [2022]

Scalable Robot Learning
Ashvin Nair [2022]

Towards Adaptive, Continual Embodied Agents
Kelvin Xu [2022]

Acquiring Motor Skills Through Motion Imitation and Reinforcement Learning
Xue Bin Peng [2021]

Adaptation Based Approaches to Distribution Shift Problems
Marvin Zhang [2021]

Building Reinforcement Learning Algorithms that Generalize: From Latent Dynamics Models to Meta-Learning
JD Co-Reyes [2021]

Goal-Directed Exploration and Skill Reuse
Vitchyr Pong [2021]

How to Train Your Robot: Techniques for Enabling Robotic Learning in the Real World
Abhishek Gupta [2021]

Offline Learning for Scalable Decision Making
Justin Fu [2021]

Real World Robot Learning: Learned Rewards, Offline Datasets and Skill Re-Use
Avi Singh [2021]

Compositionality and Modularity for Robot Learning
Coline Devin [2020]

Learning and Analyzing Representations for Meta-Learning and Control
Kate Rakelly [2020]

Mobile Robot Learning
Gregory Kahn [2020]

Model-based Deep Reinforcement Learning for Robotic Systems
Anusha Nagabandi [2020]

Visual Dynamics Models for Robotic Planning and Control
Alex Lee [2019]

Acquiring Diverse Robot Skills via Maximum Entropy Deep Reinforcement Learning
Tuomas Haarnoja [2018]

Learning to Learn with Gradients
Chelsea Finn [2018]