### Optimal Probabilistic and Decision-Theoretic Planning using Markovian Decision Theory

Sven Koenig

EECS Department

University of California, Berkeley

Technical Report No. UCB/CSD-92-685

May 1992

http://www2.eecs.berkeley.edu/Pubs/TechRpts/1992/CSD-92-685.pdf

In this report, we describe a simple probabilistic and decision-theoretic planning problem. We show how algorithms developed in the field of Markovian decision theory, a subfield of stochastic dynamic programming (operations research), can be used to construct optimal plans for this planning problem, and we present some of the known complexity results. Although their computational complexity allows only small problems to be solved optimally, the methods presented here are helpful as a theoretical framework. They allow one to make statements about the structure of an optimal plan, to guide the development of heuristic planning methods, and to evaluate their performance. We show the connection between this normative theory and universal planning, reinforcement learning, and anytime algorithms.
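To make the Markovian-decision-theory framework concrete, here is a minimal sketch of value iteration, the standard dynamic-programming algorithm for computing optimal state values in an MDP. The states, actions, transition probabilities, and rewards below are invented for illustration and do not come from the report.

```python
# Hypothetical example (not from the report): value iteration on a tiny MDP.

def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-6):
    """P[s][a] is a list of (next_state, prob); R[s][a] is the immediate reward.

    Repeatedly applies the Bellman optimality update until the largest
    change in any state value falls below eps.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                       for a in actions(s))
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

# Two-state example: from s0, "go" reaches the goal s1 with probability 0.8;
# s1 is absorbing and yields reward 1 per step.
states = ["s0", "s1"]
actions = lambda s: ["go", "stay"]
P = {"s0": {"go": [("s1", 0.8), ("s0", 0.2)], "stay": [("s0", 1.0)]},
     "s1": {"go": [("s1", 1.0)], "stay": [("s1", 1.0)]}}
R = {"s0": {"go": 0.0, "stay": 0.0},
     "s1": {"go": 1.0, "stay": 1.0}}
V = value_iteration(states, actions, P, R)
```

An optimal plan (policy) then picks, in each state, the action maximizing the one-step lookahead value; the exponential size of the state space is what limits this exact approach to small problems.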

One can easily construct a one-step planner by using a Markovian decision algorithm and a random assignment of actions to states as the initial plan. In many planning domains, it is easy for human problem solvers to construct a working plan, although it is difficult for them to find the optimal (or a close-to-optimal) plan. Therefore, we propose a two-step planning method: During the first planning phase, a working (i.e., ergodic) but not necessarily optimal plan is constructed. Sometimes, a domain-specific planning method might be available for this task. We show that such a planning method can be obtained even in probabilistic domains by taking advantage of deterministic or quasi-deterministic actions. Thus, traditional (deterministic) planners can be useful in probabilistic domains. We also state a general greedy algorithm that accomplishes this task if no domain-specific method is available. During the second planning phase, a Markovian decision algorithm is used to incrementally refine the initial plan and derive increasingly better plans, until the optimal plan is finally found. Since this algorithm is an anytime algorithm, we can trade off planning time against the execution reward of the plan.
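The second phase described above is essentially policy iteration: starting from an initial plan, each sweep evaluates the current plan and then greedily improves it, and every intermediate plan is executable, which is what gives the refinement its anytime character. The following sketch is a generic illustration with an invented two-state domain, not the report's algorithm or code.

```python
# Hypothetical sketch: policy iteration starting from a working but
# suboptimal initial plan, refined until no improvement is possible.

def evaluate(policy, states, P, R, gamma=0.9, sweeps=200):
    """Iterative policy evaluation: expected discounted reward of `policy`."""
    V = {s: 0.0 for s in states}
    for _ in range(sweeps):
        for s in states:
            a = policy[s]
            V[s] = R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
    return V

def improve(policy, states, actions, P, R, gamma=0.9):
    """One greedy improvement step; returns (new_policy, changed?)."""
    V = evaluate(policy, states, P, R, gamma)
    new = {s: max(actions(s),
                  key=lambda a: R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a]))
           for s in states}
    return new, new != policy

# Invented domain: both actions eventually reach the goal s1 (the initial
# plan is "working"), but "run" gets there with higher probability.
states = ["s0", "s1"]
actions = lambda s: ["walk", "run"]
P = {"s0": {"walk": [("s1", 0.5), ("s0", 0.5)], "run": [("s1", 0.9), ("s0", 0.1)]},
     "s1": {"walk": [("s1", 1.0)], "run": [("s1", 1.0)]}}
R = {"s0": {"walk": 0.0, "run": 0.0},
     "s1": {"walk": 1.0, "run": 1.0}}

policy = {"s0": "walk", "s1": "walk"}  # working initial plan
changed = True
while changed:
    policy, changed = improve(policy, states, actions, P, R)
```

Stopping the loop early still yields a valid plan whose expected reward improves monotonically with each iteration, which is the planning-time/execution-reward trade-off the abstract mentions.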

Finally, we briefly present a software package that implements our ideas. The probabilistic domain can be modeled using an augmented STRIPS notation, which is automatically translated into a Markovian decision problem, which is then solved.
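A probabilistic augmentation of STRIPS typically attaches probabilities to alternative outcomes of an operator, each with its own add and delete sets; enumerating all reachable states then yields the transition function of the Markovian decision problem. The operator format and names below are invented for illustration; the report's actual notation and translator differ.

```python
# Hypothetical sketch of a probabilistic STRIPS-style operator and the
# transition step a translator might derive from it.

# An operator: preconditions, plus outcomes as (prob, add-set, delete-set).
GRASP = {
    "pre": frozenset({"hand-empty", "block-on-table"}),
    "outcomes": [
        (0.8, frozenset({"holding-block"}),
              frozenset({"hand-empty", "block-on-table"})),
        (0.2, frozenset(), frozenset()),  # gripper slips: state unchanged
    ],
}

def apply_op(state, op):
    """Return [(next_state, prob)] if preconditions hold, else []."""
    if not op["pre"] <= state:
        return []
    return [((state - delete) | add, p) for p, add, delete in op["outcomes"]]

start = frozenset({"hand-empty", "block-on-table"})
successors = apply_op(start, GRASP)
```

Repeating this expansion from the start state over all operators produces the MDP's state space and transition probabilities, which a Markovian decision algorithm can then solve.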

BibTeX citation:

@techreport{Koenig:CSD-92-685,
    Author = {Koenig, Sven},
    Title = {Optimal Probabilistic and Decision-Theoretic Planning using Markovian Decision Theory},
    Institution = {EECS Department, University of California, Berkeley},
    Year = {1992},
    Month = {May},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/1992/6244.html},
    Number = {UCB/CSD-92-685}
}

EndNote citation:

%0 Report
%A Koenig, Sven
%T Optimal Probabilistic and Decision-Theoretic Planning using Markovian Decision Theory
%I EECS Department, University of California, Berkeley
%D 1992
%@ UCB/CSD-92-685
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/1992/6244.html
%F Koenig:CSD-92-685