Eric Liang

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2021-48

May 11, 2021

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-48.pdf

The past few years have seen the growth of deep reinforcement learning (RL) as a new and powerful optimization technique. Similar to deep supervised learning, deep RL has demonstrated the ability to solve problems previously thought unapproachable with machine learning (e.g., fine motor control in robotics, sports, and video games), and to deliver substantial improvements over heuristic solutions to existing problems (e.g., in systems optimization, e-trading, advertising, and robotic control). Like deep learning, deep reinforcement learning is inherently computationally intensive. Because of this, researchers and practitioners in the field of deep RL frequently leverage parallel computation, which has led to a plethora of new algorithms and systems.

This thesis looks at deep RL from the systems perspective in two ways: how to design systems that scale the computationally demanding algorithms used by researchers and practitioners, and conversely, how to apply reinforcement learning to advance the state of the art in systems. We study the distributed primitives needed to support the emerging range of large-scale RL workloads in a flexible and high-performance way, as well as programming models that enable RL researchers and practitioners to easily compose distributed RL algorithms without in-depth systems knowledge. We synthesize the lessons learned in RLlib, a widely adopted open source library for scalable reinforcement learning. Finally, we investigate applications of RL and ML to improving systems themselves, specifically improving the speed of network packet classifiers and database cardinality estimators.
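To make the programming-model point concrete, below is a minimal sketch of how a distributed RL algorithm can be launched through RLlib's high-level API. The entry points shown reflect Ray/RLlib as released around the time of this report (circa Ray 1.x); exact config keys and APIs vary across versions, and the environment, worker count, and stopping criterion here are illustrative choices, not prescriptions from the thesis.

# Minimal sketch: training distributed PPO with RLlib via Ray Tune.
# API names reflect Ray/RLlib circa 2021 and may differ in later releases.
import ray
from ray import tune

ray.init()

tune.run(
    "PPO",                              # RLlib's distributed PPO implementation
    config={
        "env": "CartPole-v0",           # any registered Gym environment ID
        "num_workers": 4,               # parallel rollout workers
        "framework": "torch",           # use the PyTorch policy implementation
    },
    stop={"episode_reward_mean": 150},  # stop once the policy is good enough
)

The user declares what to run (the algorithm, the environment, the degree of parallelism); the placement of rollout workers, the broadcasting of policy weights, and the aggregation of sampled experience are handled by the library. This division of labor is the kind of programming model the thesis advocates.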

Advisor: Ion Stoica


BibTeX citation:

@phdthesis{Liang:EECS-2021-48,
    Author= {Liang, Eric},
    Title= {Scalable Reinforcement Learning Systems and their Applications},
    School= {EECS Department, University of California, Berkeley},
    Year= {2021},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-48.html},
    Number= {UCB/EECS-2021-48},
    Abstract= {The past few years have seen the growth of deep reinforcement learning (RL) as a new and powerful optimization technique. Similar to deep supervised learning, deep RL has demonstrated the ability to solve problems previously thought unapproachable with machine learning (e.g., fine motor control in robotics, sports, and video games), and to deliver substantial improvements over heuristic solutions to existing problems (e.g., in systems optimization, e-trading, advertising, and robotic control). Like deep learning, deep reinforcement learning is inherently computationally intensive. Because of this, researchers and practitioners in the field of deep RL frequently leverage parallel computation, which has led to a plethora of new algorithms and systems.

This thesis looks at deep RL from the systems perspective in two ways: how to design systems that scale the computationally demanding algorithms used by researchers and practitioners, and conversely, how to apply reinforcement learning to advance the state of the art in systems. We study the distributed primitives needed to support the emerging range of large-scale RL workloads in a flexible and high-performance way, as well as programming models that enable RL researchers and practitioners to easily compose distributed RL algorithms without in-depth systems knowledge. We synthesize the lessons learned in RLlib, a widely adopted open source library for scalable reinforcement learning. Finally, we investigate applications of RL and ML to improving systems themselves, specifically improving the speed of network packet classifiers and database cardinality estimators.},
}

EndNote citation:

%0 Thesis
%A Liang, Eric 
%T Scalable Reinforcement Learning Systems and their Applications
%I EECS Department, University of California, Berkeley
%D 2021
%8 May 11
%@ UCB/EECS-2021-48
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-48.html
%F Liang:EECS-2021-48