Mariel Werner

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2024-143

June 26, 2024

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-143.pdf

When multiple clients collaborate to train a shared model, incentive problems can arise. The clients may have different learning objectives and application domains, or they may be competitors whose participation in the learning system could reduce their competitive advantage. While collaborative learning is a powerful framework that leverages vast networks of compute and data to generate a better model for all, participants may defect from collaboration if their incentives are misaligned with the guarantees of the system. In this dissertation, I examine three areas where accounting for incentives is critical in designing an effective collaborative learning system.

I. When clients in the system have heterogeneous data distributions and divergent learning tasks, full collaboration can result in a global model that performs poorly for individual clients. Personalizing the global model to clusters of clients with similar learning objectives addresses this problem. We propose a personalization method that has optimal convergence guarantees and is provably robust to malicious attackers.

II. Clients who are competitors may not want to participate in a collaborative learning system if their contributions will benefit their competitors and disadvantage themselves. We design a collaborative learning scheme which guarantees that clients lose no utility by participating. Additionally, we show that even as clients focus on increasing their own revenues, their model qualities converge to the Nash bargaining solution, thus optimizing for joint surplus.

III. Finally, privacy concerns are a major deterrent to joining collaborative learning systems. In the final chapter, we look at privacy dynamics in systems of learning agents more broadly. Specifically, we study a repeated-interaction game between potentially antagonistic learning agents -- a buyer and a price-discriminating seller -- and show that privacy-protecting behavior endogenously arises at equilibrium.
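
For context on the Nash bargaining solution referenced in part II, here is its standard two-player statement; the notation is generic game-theoretic convention, not drawn from the dissertation. Given a feasible utility set $\mathcal{F}$ and a disagreement point $(d_1, d_2)$ (each client's utility if bargaining fails), the Nash bargaining solution is

$$(u_1^*, u_2^*) = \arg\max_{(u_1, u_2) \in \mathcal{F},\; u_i \geq d_i} \;(u_1 - d_1)(u_2 - d_2),$$

i.e., the feasible outcome maximizing the product of both clients' utility gains over their outside options, rather than either client's gain alone.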

Advisor: Michael Jordan


BibTeX citation:

@phdthesis{Werner:EECS-2024-143,
    Author= {Werner, Mariel},
    Title= {Collaborative Learning: Aligning Goals and Outcomes},
    School= {EECS Department, University of California, Berkeley},
    Year= {2024},
    Month= {Jun},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-143.html},
    Number= {UCB/EECS-2024-143},
    Abstract= {When multiple clients collaborate to train a shared model, incentive problems can arise. The clients may have different learning objectives and application domains, or they may be competitors whose participation in the learning system could reduce their competitive advantage. While collaborative learning is a powerful framework that leverages vast networks of compute and data to generate a better model for all, participants may defect from collaboration if their incentives are misaligned with the guarantees of the system. In this dissertation, I examine three areas where accounting for incentives is critical in designing an effective collaborative learning system. I. When clients in the system have heterogeneous data distributions and divergent learning tasks, full collaboration can result in a global model that performs poorly for individual clients. Personalizing the global model to clusters of clients with similar learning objectives addresses this problem. We propose a personalization method that has optimal convergence guarantees and is provably robust to malicious attackers. II. Clients who are competitors may not want to participate in a collaborative learning system if their contributions will benefit their competitors and disadvantage themselves. We design a collaborative learning scheme which guarantees that clients lose no utility by participating. Additionally, we show that even as clients focus on increasing their own revenues, their model qualities converge to the Nash bargaining solution, thus optimizing for joint surplus. III. Finally, privacy concerns are a major deterrent to joining collaborative learning systems. In the final chapter, we look at privacy dynamics in systems of learning agents more broadly. Specifically, we study a repeated-interaction game between potentially antagonistic learning agents -- a buyer and a price-discriminating seller -- and show that privacy-protecting behavior endogenously arises at equilibrium.},
}

EndNote citation:

%0 Thesis
%A Werner, Mariel 
%T Collaborative Learning: Aligning Goals and Outcomes
%I EECS Department, University of California, Berkeley
%D 2024
%8 June 26
%@ UCB/EECS-2024-143
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-143.html
%F Werner:EECS-2024-143