### Analysis of Goal-directed Human Actions using Optimal Control Models

### Sumitra Ganesh

EECS Department

University of California, Berkeley

Technical Report No. UCB/EECS-2009-87

May 29, 2009

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-87.pdf

In this thesis, we address the problem of analyzing goal-directed human actions, using the optimal control framework to model them. In this framework, the goals of the action are specified as a cost function whose terms represent the different, often competing, objectives that must be realized in the course of the action. The relative weight given to each term determines how these objectives are traded off when the human sensorimotor system minimizes the cost function. The cost functions corresponding to different actions are the basic building blocks of our representation. We view the human motor system as a hybrid nonlinear system that switches between different cost functions in response to changing goals and preferences.
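As a concrete illustration of such a weighted cost (a hypothetical two-term example, not the thesis's actual formulation), consider a reaching cost that trades terminal accuracy against control effort:

```python
import numpy as np

# Hypothetical two-term reaching cost: terminal accuracy vs. control
# effort, traded off by the weights (w_acc, w_eff). The terms and names
# are illustrative, not the thesis's actual cost function.
def reach_cost(xs, us, goal, w_acc, w_eff):
    """xs: (T+1, d) state trajectory; us: (T, d) control sequence."""
    accuracy = np.sum((xs[-1] - goal) ** 2)  # squared distance of final state to goal
    effort = np.sum(us ** 2)                 # accumulated control effort
    return w_acc * accuracy + w_eff * effort
```

A large `w_eff` relative to `w_acc` favors low-effort motions that may stop short of the goal; the reverse favors precise but effortful motions — the trade-off is fixed entirely by the weights.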

In the context of this model, we address two problems. The first problem is the estimation of the unknown weighting parameters of a cost function from a segmented and labeled data set for an action. We show that the estimation of these parameters can be cast as a least squares optimization problem and present results for arm motions such as reaching and punching using motion capture data collected from different subjects.
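A minimal sketch of how such a least-squares estimate can be set up (with hypothetical cost-term gradients standing in for the thesis's actual features): if the observed motion minimizes J(x) = Σᵢ wᵢ cᵢ(x), then stationarity requires Σᵢ wᵢ ∇cᵢ(xₜ) ≈ 0 at each sample, which determines w up to scale as a homogeneous least-squares problem.

```python
import numpy as np

# Stacking the per-term gradients into A (rows: samples, cols: terms)
# turns weight estimation into the homogeneous least-squares problem
#   min_w ||A w||  subject to  ||w|| = 1,
# solved by the right singular vector with the smallest singular value.
def estimate_weights(grad_terms):
    """grad_terms: (n_samples, n_terms) array of cost-term gradients."""
    _, _, vt = np.linalg.svd(grad_terms)
    w = vt[-1]
    return w * np.sign(w[np.argmax(np.abs(w))])  # fix the arbitrary sign

# Synthetic check: build gradient samples whose null space is spanned by
# a known weight vector, then recover its direction.
rng = np.random.default_rng(0)
w_true = np.array([0.5, 0.3, 0.2])
V = np.linalg.svd(w_true.reshape(1, -1))[2][1:]  # orthogonal complement of w_true
A = rng.normal(size=(200, 2)) @ V                # every row is orthogonal to w_true
w_hat = estimate_weights(A)                      # proportional to w_true
```

Since the cost is only defined up to a positive scale factor, only the direction of w is identifiable, which is why the constraint ||w|| = 1 is imposed.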

The second problem is action recognition: segmenting a stream of data into different actions, where the set of actions to be identified is predetermined. We show that action recognition is closely related to mode estimation in a hybrid system and can be solved with a particle filter if a receding-horizon formulation of the optimal controller is adopted. We use the proposed approach to recognize different reaching actions from subjects' 3D hand trajectories.

**Advisor:** Ruzena Bajcsy

BibTeX citation:

@phdthesis{Ganesh:EECS-2009-87,
    Author = {Ganesh, Sumitra},
    Title = {Analysis of Goal-directed Human Actions using Optimal Control Models},
    School = {EECS Department, University of California, Berkeley},
    Year = {2009},
    Month = {May},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-87.html},
    Number = {UCB/EECS-2009-87},
    Abstract = {In this thesis, we address the problem of analyzing goal-directed human actions using the optimal control framework to model these actions. In an optimal control framework, the goals of the action are specified as a cost function whose terms represent the different, often competing, objectives that need to be realized in the course of the action. The relative weight given to the different terms will determine how these objectives are traded off when the human sensorimotor system minimizes the cost function. The cost functions corresponding to different actions are the basic building blocks in our representation. We view the human motor system as a hybrid nonlinear system that switches between different cost functions in response to changing goals and preferences. In the context of this model, we address two problems. The first problem is the estimation of the unknown weighting parameters of a cost function from a segmented and labeled data set for an action. We show that the estimation of these parameters can be cast as a least squares optimization problem and present results for arm motions such as reaching and punching using motion capture data collected from different subjects. The second problem is that of action recognition in which a stream of data is segmented into different actions, where the set of actions to be identified is pre-determined. We show that the problem of action recognition is similar to that of mode estimation in a hybrid system and can be solved using a particle filter if a receding horizon formulation of the optimal controller is adopted. We use the proposed approach to recognize different reaching actions from the 3D hand trajectory of subjects.}
}

EndNote citation:

%0 Thesis %A Ganesh, Sumitra %T Analysis of Goal-directed Human Actions using Optimal Control Models %I EECS Department, University of California, Berkeley %D 2009 %8 May 29 %@ UCB/EECS-2009-87 %U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-87.html %F Ganesh:EECS-2009-87