Bridging Gaps Between Metrics and Social Outcomes in Multi-Stakeholder Machine Learning

Serena Lutong Wang

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2024-67

May 9, 2024

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-67.pdf

With the rise of machine learning (ML), society has become increasingly driven by metrics and algorithms. Unfortunately, even well-intended metrics often do not align with desired social outcomes. For example, in healthcare, mandated reporting of hospital mortality rate metrics actually led to worsened health outcomes for severely ill patients. Such misalignments present a fundamental challenge to understanding and improving the societal impacts of ML, where a growing literature relies on formulating broad notions such as fairness as metrics for either evaluation or optimization.

A core challenge driving the misalignments between metrics and social outcomes is the fact that ML systems are also multi-stakeholder systems. Some of the highest-stakes deployments of ML also have many diverse stakeholders with asymmetric information, power, and values. For example, in healthcare, stakeholders include doctors, patients, hospitals, insurers, and many more. In these multi-stakeholder settings, misalignments between metrics and social outcomes challenge both policymakers seeking to audit ML systems and engineers and researchers formulating ML problems.

To bridge the gaps between the technical formulations of ML and its societal impacts, this thesis addresses two complementary challenges. The first part concerns the implementation of socially relevant desiderata of fairness, robustness, and interpretability in ML, which become metrics in the form of objectives or constraints. Specifically, we will consider how to algorithmically build these notions into modern ML systems under noisy data and evolving large-scale training protocols. The second part will zoom out from these particular notions to consider the wider role of metrics in multi-stakeholder systems. This part brings in ideas from economics to achieve a better understanding of the interdependence between metrics and the surrounding ecosystem of stakeholders with asymmetric information, power, and values. In reimagining the ML development process with stakeholder involvement, we ask, “Who has information on how to improve metrics, and when would they share it?”

Advisor: Michael Jordan


BibTeX citation:

@phdthesis{LutongWang:EECS-2024-67,
    Author= {Lutong Wang, Serena},
    Title= {Bridging Gaps Between Metrics and Social Outcomes in Multi-Stakeholder Machine Learning},
    School= {EECS Department, University of California, Berkeley},
    Year= {2024},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-67.html},
    Number= {UCB/EECS-2024-67},
    Abstract= {With the rise of machine learning (ML), society has become increasingly driven by metrics and algorithms. Unfortunately, even well-intended metrics often do not align with desired social outcomes. For example, in healthcare, mandated reporting of hospital mortality rate metrics actually led to worsened health outcomes for severely ill patients. Such misalignments present a fundamental challenge to understanding and improving the societal impacts of ML, where a growing literature relies on formulating broad notions such as fairness as metrics for either evaluation or optimization.

A core challenge driving the misalignments between metrics and social outcomes is the fact that ML systems are also multi-stakeholder systems. Some of the highest-stakes deployments of ML also have many diverse stakeholders with asymmetric information, power, and values. For example, in healthcare, stakeholders include doctors, patients, hospitals, insurers, and many more. In these multi-stakeholder settings, misalignments between metrics and social outcomes challenge both policymakers seeking to audit ML systems and engineers and researchers formulating ML problems.

To bridge the gaps between the technical formulations of ML and its societal impacts, this thesis addresses two complementary challenges. The first part concerns the implementation of socially relevant desiderata of fairness, robustness, and interpretability in ML, which become metrics in the form of objectives or constraints. Specifically, we will consider how to algorithmically build these notions into modern ML systems under noisy data and evolving large-scale training protocols. The second part will zoom out from these particular notions to consider the wider role of metrics in multi-stakeholder systems. This part brings in ideas from economics to achieve a better understanding of the interdependence between metrics and the surrounding ecosystem of stakeholders with asymmetric information, power, and values. In reimagining the ML development process with stakeholder involvement, we ask, “Who has information on how to improve metrics, and when would they share it?”},
}

EndNote citation:

%0 Thesis
%A Lutong Wang, Serena
%T Bridging Gaps Between Metrics and Social Outcomes in Multi-Stakeholder Machine Learning
%I EECS Department, University of California, Berkeley
%D 2024
%8 May 9
%@ UCB/EECS-2024-67
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-67.html
%F Lutong Wang:EECS-2024-67