CS 285. Deep Reinforcement Learning, Decision Making, and Control

Catalog Description: Intersection of control, reinforcement learning, and deep learning. Deep learning methods, which train large parametric function approximators, achieve excellent results on problems that require reasoning about unstructured real-world situations (e.g., computer vision, speech recognition, NLP). Advanced treatment of the reinforcement learning formalism, the most critical model-free reinforcement learning algorithms (policy gradients, value function and Q-function learning, and actor-critic), a discussion of model-based reinforcement learning algorithms, an overview of imitation learning, and a range of advanced topics (e.g., exploration, model-based learning with video prediction, transfer learning, multi-task learning, and meta-learning).

Units: 3.0

Student Learning Outcomes: Provide students with foundational knowledge to understand deep reinforcement learning algorithms; provide an opportunity to embark on a research-level final project with support from course staff; provide hands-on experience with several commonly used RL algorithms; provide students with an overview of advanced deep reinforcement learning topics, including current research trends.

Prerequisites: COMPSCI 189 or COMPSCI 289A or equivalent.

Formats:
Fall: 3.0 hours of lecture per week
Spring: 3.0 hours of lecture per week

Grading basis: letter

Final exam status: No final exam


Class Schedule (Fall 2019):
MoWe 10:00AM - 11:29AM, Soda 306 – Anusha Nagabandi, Avi Singh, Frederik Ebert, Kelvin Xu, Sergey Levine

Class homepage on inst.eecs

General Catalog listing