Imputing a Variational Inequality Function or a Convex Objective Function: a Robust Approach
Jerome Thai
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2017-3
January 17, 2017
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-3.pdf
To impute the function of a variational inequality or the objective of a convex optimization problem from observations of (nearly) optimal decisions, previous approaches constructed inverse programming methods based on solving a convex optimization problem [17, 7]. However, we show that, in addition to requiring complete observations, these approaches are not robust to measurement errors, while in many applications the outputs of decision processes are noisy and only partially observable, owing to, e.g., limitations in the sensing infrastructure. To deal with noisy and missing data, we formulate our inverse problem as the minimization of a weighted sum of two objectives: 1) a duality gap or Karush-Kuhn-Tucker (KKT) residual, and 2) a distance from the observations that is robust to measurement errors. In addition, we show that our method encompasses previous ones by generating a sequence of Pareto optimal points (with respect to the two objectives) converging to an optimal solution of previous formulations. To compare duality gaps and KKT residuals, we also derive new sub-optimality results defined by KKT residuals. Finally, an implementation framework is proposed, with applications to delay function inference on the road network of Los Angeles and consumer utility estimation in oligopolies.
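The weighted-sum idea in the abstract can be illustrated on a toy instance. The sketch below is not from the thesis: it assumes an unconstrained quadratic program min_x (1/2) x'Qx - c'x with Q known, so the KKT (stationarity) residual is simply ||Qx - c||, and it uses a plain squared distance to the observations rather than the robust distance the thesis develops. Estimating c and the denoised decisions x_i jointly by minimizing `sum_i ||Q x_i - c||^2 + w * sum_i ||x_i - y_i||^2` then reduces to alternating least squares; the function name `impute_linear_term` is hypothetical.

```python
import numpy as np

def impute_linear_term(Q, Y, w=1.0, iters=50):
    """Toy inverse problem: jointly estimate the linear term c of the
    (assumed) QP  min_x 0.5 x'Qx - c'x  and denoised decisions X from
    noisy observations Y (one row per observed decision).

    Minimizes  sum_i ||Q x_i - c||^2 + w * sum_i ||x_i - y_i||^2
    by alternating minimization; both subproblems are least squares.
    """
    n = Q.shape[0]
    X = Y.copy()                         # initialize at the observations
    A = Q.T @ Q + w * np.eye(n)          # normal-equation matrix for x-update
    for _ in range(iters):
        # c-update: minimizer of sum_i ||Q x_i - c||^2 is the mean of Q x_i
        c = (X @ Q.T).mean(axis=0)
        # x-update: (Q'Q + wI) x_i = Q'c + w y_i, solved for all i at once
        X = np.linalg.solve(A, (Q.T @ c)[:, None] + w * Y.T).T
    return c, X
```

The weight w plays the role of the trade-off parameter in the abstract: large w trusts the observations (small distance term), small w enforces optimality of the imputed decisions (small KKT residual). Sweeping w traces out Pareto optimal points with respect to the two objectives.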
Advisor: Alexandre Bayen
BibTeX citation:
@mastersthesis{Thai:EECS-2017-3,
    Author = {Thai, Jerome},
    Title = {Imputing a Variational Inequality Function or a Convex Objective Function: a Robust Approach},
    School = {EECS Department, University of California, Berkeley},
    Year = {2017},
    Month = {Jan},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-3.html},
    Number = {UCB/EECS-2017-3},
    Abstract = {To impute the function of a variational inequality and the objective of a convex optimization problem from observations of (nearly) optimal decisions, previous approaches constructed inverse programming methods based on solving a convex optimization problem [17, 7]. However, we show that, in addition to requiring complete observations, these approaches are not robust to measurement errors, while in many applications, the outputs of decision processes are noisy and only partially observable from, e.g., limitations in the sensing infrastructure. To deal with noisy and missing data, we formulate our inverse problem as the minimization of a weighted sum of two objectives: 1) a duality gap or Karush-Kuhn-Tucker (KKT) residual, and 2) a distance from the observations robust to measurement errors. In addition, we show that our method encompasses previous ones by generating a sequence of Pareto optimal points (with respect to the two objectives) converging to an optimal solution of previous formulations. To compare duality gaps and KKT residuals, we also derive new sub-optimality results defined by KKT residuals. Finally, an implementation framework is proposed with applications to delay function inference on the road network of Los Angeles, and consumer utility estimation in oligopolies.},
}
EndNote citation:
%0 Thesis
%A Thai, Jerome
%T Imputing a Variational Inequality Function or a Convex Objective Function: a Robust Approach
%I EECS Department, University of California, Berkeley
%D 2017
%8 January 17
%@ UCB/EECS-2017-3
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-3.html
%F Thai:EECS-2017-3