Provable and Efficient Algorithms for Federated, Batch and Reinforcement Learning

Avishek Ghosh, Kannan Ramchandran and Aditya Guntuboyina

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2021-89
May 14, 2021

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-89.pdf

We propose and analyze iterative algorithms that are computationally efficient, statistically sound, and, in some settings, adaptive. We consider three frameworks in which data is presented to the learner. First, we consider the Federated (Distributed) Learning (FL) setup, where data is available only at the edge, and a central machine learns models by iteratively interacting with the edge nodes. Second, we study the canonical setting of supervised batch learning, where all data and label pairs are available to the learner from the start. Third, we examine the framework of online learning, where data arrives in a streaming fashion; here we focus on specific settings such as bandits and Reinforcement Learning (RL).

In the Federated Learning (FL) framework, we address the canonical problems of device heterogeneity, communication bottlenecks, and adversarial robustness for large-scale, high-dimensional problems. We propose efficient and provable first- and second-order algorithms, using ideas such as quantization of information and several robust aggregation schemes to address these problems while simultaneously retaining optimal statistical rates. For the supervised batch learning framework, we use an efficient and statistically sound algorithm, Alternating Minimization (AM), to address the problem of max-affine regression: a non-convex problem that generalizes classical phase retrieval and closely resembles convex regression. We give convergence guarantees for AM with near-optimal statistical rates. Finally, in the online learning setup, we address the problem of adaptation (model selection) for contextual bandits (linear and beyond), and extend these techniques to Reinforcement Learning (RL). Our algorithms here are efficient, provable, and, importantly, adaptive to the problem complexity.
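To make the batch-learning contribution concrete, the following is a minimal sketch (not the thesis's implementation) of Alternating Minimization for max-affine regression, where one models y ≈ max_j (⟨w_j, x⟩ + b_j) and alternates between assigning each sample to the affine piece attaining the max and refitting each piece by least squares. The function name, initialization, and stopping rule here are illustrative assumptions; the thesis's analysis concerns the statistical and convergence properties of this style of update, not this particular code.

```python
import numpy as np

def am_max_affine(X, y, k, n_iters=50, rng=None):
    """Illustrative Alternating Minimization for max-affine regression.

    Model assumption: y ~ max_j (X @ w_j + b_j). Alternates between
    (1) assigning each sample to the argmax affine piece under the
    current estimates, and (2) refitting each piece by least squares
    on the samples assigned to it.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    W = rng.normal(size=(k, d))   # slope estimates (random init; in theory a
    b = rng.normal(size=k)        # spectral or careful init is used)
    for _ in range(n_iters):
        # Assignment step: which piece attains the max for each sample.
        scores = X @ W.T + b          # shape (n, k)
        assign = scores.argmax(axis=1)
        # Refit step: least squares restricted to each piece's samples.
        for j in range(k):
            idx = assign == j
            if idx.sum() < d + 1:     # skip (nearly) empty pieces
                continue
            A = np.hstack([X[idx], np.ones((idx.sum(), 1))])
            sol, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
            W[j], b[j] = sol[:d], sol[d]
    return W, b
```

Note the structural similarity to Lloyd's algorithm for k-means and to alternating updates for phase retrieval: each step solves a tractable subproblem exactly, even though the joint objective is non-convex.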

Advisors: Kannan Ramchandran and Aditya Guntuboyina


BibTeX citation:

@phdthesis{Ghosh:EECS-2021-89,
    Author = {Ghosh, Avishek and Ramchandran, Kannan and Guntuboyina, Aditya},
    Title = {Provable and Efficient Algorithms for Federated, Batch and Reinforcement Learning},
    School = {EECS Department, University of California, Berkeley},
    Year = {2021},
    Month = {May},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-89.html},
    Number = {UCB/EECS-2021-89},
    Abstract = {We propose and analyze iterative algorithms that are computationally efficient, statistically sound and adaptive (in some settings). We consider three different frameworks in which data is presented to the learner. First, we consider the Federated (Distributed) Learning (FL) setup, where data is only available at the edge, and a center machine learns various models via iteratively interacting with the edge nodes. Second, we study the canonical setting of supervised batch learning, where all the data and label pairs are available to the learner at the beginning. Third, we examine the framework of online learning, where data is presented in a streaming fashion. In particular, we focus on specific settings like Bandit and Reinforcement Learning (RL). 


In the Federated Learning (FL) framework, we address the canonical problems of device heterogeneity, communication bottleneck and adversarial robustness for large scale high dimensional problems. We propose efficient and provable first and second order algorithms, and use ideas like quantization of information and apply several robust aggregation schemes to address the above-mentioned problems, while retaining the optimal statistical rates simultaneously. For the (supervised) batch learning framework, we use an efficient and statistically sound algorithm, namely Alternating Minimization (AM) and address the problem of max-affine regression; a non convex problem that generalizes the classical phase retrieval and closely resembles convex regression. We give convergence guarantees of AM, with near optimal statistical rate. Finally, in the online learning setup, we address the problem of adaptation (model selection) for contextual bandits (linear and beyond) and later extend these techniques to Reinforcement Learning (RL). Our algorithms here are efficient, provable and more importantly adaptive to the problem complexity.}
}

EndNote citation:

%0 Thesis
%A Ghosh, Avishek
%A Ramchandran, Kannan
%A Guntuboyina, Aditya
%T Provable and Efficient Algorithms for Federated, Batch and Reinforcement Learning
%I EECS Department, University of California, Berkeley
%D 2021
%8 May 14
%@ UCB/EECS-2021-89
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-89.html
%F Ghosh:EECS-2021-89