Statistics, computation, and adaptation in high dimensions

Ashwin Pananjady Martin

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2020-159

August 13, 2020

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-159.pdf

With a focus on designing flexible, tractable, and adaptive methodology for some canonical machine learning tasks, we establish several results for the classes of permutation-based models, index models, and Markov reward processes. First, we study permutation-based models in the vector, matrix, and tensor settings, which provide robust representations in "broken-sample" problems and of human-generated data. We design tractable and adaptive methodological solutions for fitting these models that, among other things, narrow statistical-computational gaps conjectured in the literature. Second, we study a subclass of index models---widely used in dimensionality reduction and exploratory data analysis---through a computational lens, focusing on avoiding the (statistical) curse of dimensionality and on achieving automatic adaptation to the noise level in the problem. Our perspective yields efficient algorithms for solving these non-convex fitting problems that come with provable guarantees of sample efficiency and adaptation. Finally, we turn to studying some statistical questions in reinforcement learning, focusing in particular on instance-dependent guarantees for the policy evaluation problem. We show that while some algorithms attain the optimal, "local" performance for this problem, other popular methods fall short and must be modified in order to achieve the desired levels of adaptation.
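To make the policy evaluation problem mentioned above concrete, the following is a minimal, generic sketch of tabular TD(0) policy evaluation for a Markov reward process, written in Python. It is purely illustrative and is not the estimator analyzed in the thesis; the transition matrix P, reward vector r, discount factor gamma, the 1/t step-size schedule, and the function name td0_policy_evaluation are all hypothetical choices made for this sketch.

# Minimal, generic sketch of tabular TD(0) policy evaluation for a Markov
# reward process (MRP). Illustrative only; not the algorithm studied in the
# thesis. The MRP (P, r, gamma) and the 1/t step size are hypothetical.
import numpy as np

def td0_policy_evaluation(P, r, gamma, num_steps=100_000, seed=0):
    """Stochastically estimate the value function solving v = r + gamma * P v."""
    rng = np.random.default_rng(seed)
    n = len(r)
    v = np.zeros(n)          # current value-function estimate
    state = 0                # arbitrary starting state
    for t in range(1, num_steps + 1):
        next_state = rng.choice(n, p=P[state])    # sample one transition
        td_error = r[state] + gamma * v[next_state] - v[state]
        v[state] += td_error / t                  # 1/t step size (one of many schedules)
        state = next_state
    return v

# Tiny two-state example; the exact value function is v* = (I - gamma*P)^{-1} r.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
r = np.array([1.0, 0.0])
gamma = 0.9
print(td0_policy_evaluation(P, r, gamma))         # stochastic estimate
print(np.linalg.solve(np.eye(2) - gamma * P, r))  # exact value function

The closing linear solve returns the exact value function of the toy chain, so the stochastic estimate can be checked against it directly.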

Advisors: Martin Wainwright and Thomas Courtade


BibTeX citation:

@phdthesis{PananjadyMartin:EECS-2020-159,
    Author= {Pananjady Martin, Ashwin},
    Title= {Statistics, computation, and adaptation in high dimensions},
    School= {EECS Department, University of California, Berkeley},
    Year= {2020},
    Month= {Aug},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-159.html},
    Number= {UCB/EECS-2020-159},
    Abstract= {With a focus on designing flexible, tractable, and adaptive methodology for some canonical machine learning tasks, we establish several results for the classes of permutation-based models, index models, and Markov reward processes. First, we study permutation-based models in the vector, matrix, and tensor settings, which provide robust representations in "broken-sample" problems and of human-generated data. We design tractable and adaptive methodological solutions for fitting these models that, among other things, narrow statistical-computational gaps conjectured in the literature. Second, we study a subclass of index models---widely used in dimensionality reduction and exploratory data analysis---through a computational lens, focusing on avoiding the (statistical) curse of dimensionality and on achieving automatic adaptation to the noise level in the problem. Our perspective yields efficient algorithms for solving these non-convex fitting problems that come with provable guarantees of sample efficiency and adaptation. Finally, we turn to studying some statistical questions in reinforcement learning, focusing in particular on instance-dependent guarantees for the policy evaluation problem. We show that while some algorithms attain the optimal, "local" performance for this problem, other popular methods fall short and must be modified in order to achieve the desired levels of adaptation.},
}

EndNote citation:

%0 Thesis
%A Pananjady Martin, Ashwin 
%T Statistics, computation, and adaptation in high dimensions
%I EECS Department, University of California, Berkeley
%D 2020
%8 August 13
%@ UCB/EECS-2020-159
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-159.html
%F Pananjady Martin:EECS-2020-159