A Learning Based Approach to Control Synthesis of Markov Decision Processes for Linear Temporal Logic Specifications

Dorsa Sadigh, Eric Kim, Samuel Coogan, S. Shankar Sastry and Sanjit A. Seshia

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2014-166
September 20, 2014

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-166.pdf

We propose to synthesize a control policy for a Markov decision process (MDP) such that the resulting traces of the MDP satisfy a linear temporal logic (LTL) property. We construct a product MDP that incorporates a deterministic Rabin automaton generated from the desired LTL property. The reward function of the product MDP is defined from the acceptance condition of the Rabin automaton. This construction allows us to apply techniques from learning theory to the problem of synthesis for LTL specifications even when the transition probabilities are not known a priori. We prove that our method is guaranteed to find a controller that satisfies the LTL property with probability one if such a policy exists, and we suggest empirically with a case study in traffic control that our method produces reasonable control strategies even when the LTL property cannot be satisfied with probability one.
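The construction described in the abstract can be illustrated with a small sketch: a toy two-state MDP, a deterministic Rabin automaton for the LTL property "infinitely often a" (GF a), their product, and a reward derived from the Rabin acceptance pair. All names, states, and probabilities below are illustrative assumptions, not the authors' implementation.

```python
from itertools import product

# Toy MDP: transition probabilities P[s][a] = {s': prob}.
# These states, actions, and probabilities are hypothetical examples.
mdp_states = ["s0", "s1"]
actions = ["stay", "go"]
P = {
    "s0": {"stay": {"s0": 1.0}, "go": {"s1": 0.9, "s0": 0.1}},
    "s1": {"stay": {"s1": 1.0}, "go": {"s0": 0.9, "s1": 0.1}},
}
# Labeling function: which atomic propositions hold at each MDP state.
label = {"s0": set(), "s1": {"a"}}

# Deterministic Rabin automaton for GF a ("infinitely often a"):
# q1 is visited exactly when 'a' was just seen.
dra_states = ["q0", "q1"]

def dra_delta(q, props):
    return "q1" if "a" in props else "q0"

# A Rabin pair (Fin, Inf): accept runs that visit Fin finitely often
# and Inf infinitely often. For GF a, Fin is empty and Inf = {q1}.
rabin_pair = (set(), {"q1"})

# Product MDP over states (s, q): the automaton component steps on the
# label of the successor MDP state.
prod_states = list(product(mdp_states, dra_states))

def prod_transitions(state, a):
    s, q = state
    return {(s2, dra_delta(q, label[s2])): p for s2, p in P[s][a].items()}

# Reward defined from the acceptance condition: reward visits to Inf
# states, penalize Fin states (empty here), so a learning algorithm is
# steered toward accepting runs.
def reward(state):
    _, q = state
    fin, inf = rabin_pair
    if q in fin:
        return -1.0
    return 1.0 if q in inf else 0.0
```

With this product in hand, a standard learner (e.g. Q-learning) can be run on `prod_transitions` and `reward` without knowing `P` a priori, which is the point of the paper's reduction; the reward shape above is one plausible choice, not necessarily the one used in the report.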


BibTeX citation:

@techreport{Sadigh:EECS-2014-166,
    Author = {Sadigh, Dorsa and Kim, Eric and Coogan, Samuel and Sastry, S. Shankar and Seshia, Sanjit A.},
    Title = {A Learning Based Approach to Control Synthesis of Markov Decision Processes for Linear Temporal Logic Specifications},
    Institution = {EECS Department, University of California, Berkeley},
    Year = {2014},
    Month = {Sep},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-166.html},
    Number = {UCB/EECS-2014-166},
    Abstract = {We propose to synthesize a control policy for a Markov decision process (MDP) such that the resulting traces of the MDP satisfy a linear temporal logic (LTL) property. We construct a product MDP that incorporates a deterministic Rabin automaton generated from the desired LTL property. The reward function of the product MDP is defined from the acceptance condition of the Rabin automaton. This construction allows us to apply techniques from learning theory to the problem of synthesis for LTL specifications even when the transition probabilities are not known a priori. We prove that our method is guaranteed to find a controller that satisfies the LTL property with probability one if such a policy exists, and we suggest empirically with a case study in traffic control that our method produces reasonable control strategies even when the LTL property cannot be satisfied with probability one.}
}

EndNote citation:

%0 Report
%A Sadigh, Dorsa
%A Kim, Eric
%A Coogan, Samuel
%A Sastry, S. Shankar
%A Seshia, Sanjit A.
%T A Learning Based Approach to Control Synthesis of Markov Decision Processes for Linear Temporal Logic Specifications
%I EECS Department, University of California, Berkeley
%D 2014
%8 September 20
%@ UCB/EECS-2014-166
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-166.html
%F Sadigh:EECS-2014-166