Improving Gradient Estimation by Incorporating Sensor Data

Gregory Donnell Lawrence and Stuart J. Russell

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2006-58

May 15, 2006

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-58.pdf

A key step in many policy search algorithms is estimating the gradient of the objective function with respect to a given policy's parameters. The gradient is typically estimated from a set of policy trials. Each trial can be very expensive, so we prefer to minimize the total number of trials required to achieve a desired level of performance. In this paper we show that by viewing the task of estimating the gradient as a structured probabilistic inference problem, we can improve learning performance. We argue that in many instances, reasoning about sensory data obtained during policy execution is beneficial. In other words, in addition to knowing how well it performed during each policy run, it is helpful for an agent to learn "how it feels" to perform a particular task well. This knowledge is especially useful if we can incorporate prior knowledge specific to the given control task, for example by specifying the conditional independencies among sensor variables and choosing the types of their conditional probability distributions. In addition, hierarchical Bayes methods allow us to efficiently reuse old data from trials of other policies. We demonstrate the effectiveness of this approach by showing improved learning performance on a toy cannon problem and a dart-throwing task.
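As context for the abstract, the following is a minimal sketch (in Python/NumPy) of the standard likelihood-ratio (REINFORCE-style) gradient estimator that trial-based policy search builds on, i.e., the baseline whose sample efficiency the report aims to improve. Everything here is illustrative rather than taken from the report: the Gaussian policy, the quadratic toy reward standing in for an expensive trial, and all names and parameters are assumptions. The report's contribution, modeling sensor data with a structured probabilistic model to sharpen the estimate, is not reproduced here.

import numpy as np

def sample_trial(theta, sigma, rng):
    # One policy trial: a Gaussian policy with mean theta emits an action,
    # and we observe only a scalar return. The quadratic reward below is a
    # toy stand-in for an expensive real-world trial (e.g., one dart throw).
    action = rng.normal(loc=theta, scale=sigma)
    reward = -np.sum((action - 1.0) ** 2)  # illustrative objective, peak at [1, 1]
    return action, reward

def estimate_gradient(theta, n_trials=20, sigma=0.1, seed=0):
    # Likelihood-ratio estimate of d E[reward] / d theta:
    #   grad = E[ reward * grad_theta log pi(action | theta) ],
    # where, for a Gaussian policy, grad_theta log pi = (action - theta) / sigma**2.
    rng = np.random.default_rng(seed)
    grads = []
    for _ in range(n_trials):
        action, reward = sample_trial(theta, sigma, rng)
        grads.append(reward * (action - theta) / sigma**2)
    return np.mean(grads, axis=0)

theta = np.zeros(2)
for step in range(200):
    theta += 0.01 * estimate_gradient(theta, seed=step)
print(theta)  # moves toward the optimum [1, 1]

Note that each call to estimate_gradient consumes n_trials policy runs and uses only the scalar return from each; this per-trial cost is exactly what the report targets by extracting additional gradient information from the sensor readings gathered during each run.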


BibTeX citation:

@techreport{Lawrence:EECS-2006-58,
    Author= {Lawrence, Gregory Donnell and Russell, Stuart J.},
    Title= {Improving Gradient Estimation by Incorporating Sensor Data},
    Institution= {EECS Department, University of California, Berkeley},
    Year= {2006},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-58.html},
    Number= {UCB/EECS-2006-58},
    Abstract= {A key step in many policy search algorithms is estimating the gradient of the objective function with respect to a given policy's parameters.  The gradient is typically estimated from a set of policy trials.  Each of these trials can be very expensive and so we prefer to minimize the total number of trials required to achieve a desired level of performance.  In this paper we show that by viewing the task of estimating the gradient as a structured probabilistic inference problem, we can improve the learning performance.  We argue that in many instances, reasoning about sensory data obtained during policy execution is beneficial.  In other words, in addition to an agent knowing how well it performed during each policy run, it is helpful for it to learn ``how it feels'' to perform a particular task well.  This knowledge is especially useful if we are able to incorporate prior knowledge specific to the given control task.  Examples of using prior knowledge include setting the conditional independencies between various sensor variables and choosing the types of conditional probability distributions.  In addition, by using hierarchical Bayes methods we are able to efficiently reuse old data from trials of other policies.  We demonstrate the effectiveness of this approach by showing an improvement in learning performance on a toy cannon problem and a dart throwing task.},
}

EndNote citation:

%0 Report
%A Lawrence, Gregory Donnell 
%A Russell, Stuart J. 
%T Improving Gradient Estimation by Incorporating Sensor Data
%I EECS Department, University of California, Berkeley
%D 2006
%8 May 15
%@ UCB/EECS-2006-58
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-58.html
%F Lawrence:EECS-2006-58