Finite-time analysis of approximate policy iteration for the linear quadratic regulator
Karl Krauth and Stephen Tu and Benjamin Recht
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2021-7
March 9, 2021
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-7.pdf
We study the sample complexity of approximate policy iteration (PI) for the Linear Quadratic Regulator (LQR), building on a recent line of work using LQR as a testbed to understand the limits of reinforcement learning (RL) algorithms on continuous control tasks. Our analysis quantifies the tension between policy improvement and policy evaluation, and suggests that policy evaluation is the dominant factor in terms of sample complexity. Specifically, we show that to obtain a controller that is within ε of the optimal LQR controller, each step of policy evaluation requires at most (n+d)^3/ε^2 samples, where n is the dimension of the state vector and d is the dimension of the input vector. On the other hand, only log(1/ε) policy improvement steps suffice, resulting in an overall sample complexity of log(1/ε)(n+d)^3/ε^2. We furthermore build on our analysis and construct a simple adaptive procedure based on ε-greedy exploration which relies on approximate PI as a sub-routine and obtains T^(2/3) regret, improving upon a recent result of Abbasi-Yadkori et al.
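For context, the following is a minimal sketch of exact policy iteration for LQR (Hewer's algorithm) in Python with NumPy/SciPy. The function names and the use of exact Lyapunov solves are illustrative assumptions; the report analyzes an approximate, sample-based variant in which the policy-evaluation step is replaced by a least-squares estimate computed from trajectory data, and it is that step which drives the (n+d)^3/ε^2 sample complexity.

    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    def policy_evaluation(A, B, Q, R, K):
        """Evaluate the gain K (with u_t = K x_t): solve the Lyapunov equation
        P = Q + K^T R K + (A + B K)^T P (A + B K) for the value matrix P."""
        A_cl = A + B @ K
        return solve_discrete_lyapunov(A_cl.T, Q + K.T @ R @ K)

    def policy_improvement(A, B, R, P):
        """Return the greedy gain with respect to the current value matrix P."""
        return -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

    def policy_iteration(A, B, Q, R, K0, num_iters=20):
        """Exact policy iteration starting from a stabilizing gain K0.
        Only about log(1/eps) improvement steps are needed for eps accuracy."""
        K = K0
        for _ in range(num_iters):
            P = policy_evaluation(A, B, Q, R, K)   # evaluation (sample-based in the report)
            K = policy_improvement(A, B, R, P)     # improvement (cheap, exact)
        return K, P

This sketch is only meant to make the evaluation/improvement split in the abstract concrete: in the approximate setting each call to policy_evaluation would be replaced by a noisy estimate of P from data, while policy_improvement remains a closed-form update.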
Advisors: Michael Jordan and Jonathan Ragan-Kelley
BibTeX citation:
@mastersthesis{Krauth:EECS-2021-7,
    Author = {Krauth, Karl and Tu, Stephen and Recht, Benjamin},
    Title = {Finite-time analysis of approximate policy iteration for the linear quadratic regulator},
    School = {EECS Department, University of California, Berkeley},
    Year = {2021},
    Month = {Mar},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-7.html},
    Number = {UCB/EECS-2021-7},
    Abstract = {We study the sample complexity of approximate policy iteration (PI) for the Linear Quadratic Regulator (LQR), building on a recent line of work using LQR as a testbed to understand the limits of reinforcement learning (RL) algorithms on continuous control tasks. Our analysis quantifies the tension between policy improvement and policy evaluation, and suggests that policy evaluation is the dominant factor in terms of sample complexity. Specifically, we show that to obtain a controller that is within ε of the optimal LQR controller, each step of policy evaluation requires at most (n+d)^3/ε^2 samples, where n is the dimension of the state vector and d is the dimension of the input vector. On the other hand, only log(1/ε) policy improvement steps suffice, resulting in an overall sample complexity of log(1/ε)(n+d)^3/ε^2. We furthermore build on our analysis and construct a simple adaptive procedure based on ε-greedy exploration which relies on approximate PI as a sub-routine and obtains T^(2/3) regret, improving upon a recent result of Abbasi-Yadkori et al.}
}
EndNote citation:
%0 Thesis
%A Krauth, Karl
%A Tu, Stephen
%A Recht, Benjamin
%T Finite-time analysis of approximate policy iteration for the linear quadratic regulator
%I EECS Department, University of California, Berkeley
%D 2021
%8 March 9
%@ UCB/EECS-2021-7
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-7.html
%F Krauth:EECS-2021-7