Prediction, Allocation, and Alignment: Individual Preferences and Group Objectives
Ali Shirali
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2025-220
December 19, 2025
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2025/EECS-2025-220.pdf
Many algorithmic systems rely on information about people’s preferences and needs, from personalization to policy design. A common assumption is that more individual-level data will yield better outcomes. We challenge this assumption by asking how much we need to learn about individuals, and in what ways, to improve a group objective.
In the first part, we study settings where individuals bear the cost of communicating their preferences and steering the system. The challenge here is not limited data, but the difficulty of expressing what matters to users. We show that overly simplistic preference models cause alignment methods to fail when preferences deviate from their assumptions: they aggregate heterogeneous preferences in undesirable ways and misinterpret revealed preferences from individuals with inconsistent preferences. This motivates additional channels for preference expression, such as annotator information during training or costly signaling during inference.
In the second part, we study settings where a planner bears the cost of learning about individuals to serve a group objective. In applications such as resource allocation, we show that the value of individual-level learning is highly context-dependent: inequality, budgets, and existing mechanisms determine when individual prediction helps, when aggregate information suffices, and when waiting for more accurate predictions can worsen outcomes.
Taken together, these results highlight that the value of additional individual-level information depends on context. Improving group objectives requires identifying which aspects of individual preferences matter and when learning them meaningfully improves decision-making.
Advisors: Moritz Hardt and Rediet Abebe
BibTeX citation:
@phdthesis{Shirali:EECS-2025-220,
Author= {Shirali, Ali},
Title= {Prediction, Allocation, and Alignment: Individual Preferences and Group Objectives},
School= {EECS Department, University of California, Berkeley},
Year= {2025},
Month= {Dec},
Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2025/EECS-2025-220.html},
Number= {UCB/EECS-2025-220},
Abstract= {Many algorithmic systems rely on information about people’s preferences and needs, from personalization to policy design. A common assumption is that more individual-level data will yield better outcomes. We challenge this assumption by asking how much we need to learn about individuals, and in what ways, to improve a group objective.
In the first part, we study settings where individuals bear the cost of communicating their preferences and steering the system. The challenge here is not limited data, but the difficulty of expressing what matters to users. We show that overly simplistic preference models cause alignment methods to fail when preferences deviate from their assumptions: they aggregate heterogeneous preferences in undesirable ways and misinterpret revealed preferences from individuals with inconsistent preferences. This motivates additional channels for preference expression, such as annotator information during training or costly signaling during inference.
In the second part, we study settings where a planner bears the cost of learning about individuals to serve a group objective. In applications such as resource allocation, we show that the value of individual-level learning is highly context-dependent: inequality, budgets, and existing mechanisms determine when individual prediction helps, when aggregate information suffices, and when waiting for more accurate predictions can worsen outcomes.
Taken together, these results highlight that the value of additional individual-level information depends on context. Improving group objectives requires identifying which aspects of individual preferences matter and when learning them meaningfully improves decision-making.},
}
EndNote citation:
%0 Thesis %A Shirali, Ali %T Prediction, Allocation, and Alignment: Individual Preferences and Group Objectives %I EECS Department, University of California, Berkeley %D 2025 %8 December 19 %@ UCB/EECS-2025-220 %U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2025/EECS-2025-220.html %F Shirali:EECS-2025-220