YooJung Choi
PhD Candidate, University of California, Los Angeles
  • Artificial Intelligence
  • Probabilistic Reasoning for Robust and Fair Decision Making

Automated decision-making systems are increasingly deployed in areas with personal and societal impact, from personalized ads to medical diagnosis and criminal justice. Despite their significant impact, these systems are often used without much reasoning about their behavior. For instance, while models are defined over a set of features, they must often make decisions with missing features, as observing a feature frequently has a cost in real-world scenarios such as medical diagnosis. Thus, different subsets of features may be observed for different individuals, in which case one may wonder how robust a decision is against the possible outcomes of the unobserved features. In addition, automated systems can perpetuate social biases from historical data, or even introduce new ones, making decisions that put certain groups at an unfair disadvantage. Hence, both robustness and fairness are crucial in building trustworthy decision-making systems. Moreover, both involve reasoning about the behavior of a model under uncertainty, i.e., with respect to an underlying distribution. My research seeks to develop probabilistic reasoning algorithms for quantifying and verifying robustness and fairness, and in turn to use them to learn decision-making systems that offer guarantees. In particular, as these reasoning tasks are often computationally hard, I focus on tractable probabilistic models (TPMs): expressive models that allow efficient inference.

YooJung Choi is a Ph.D. student in the Computer Science Department at UCLA, where she is a member of the Statistical and Relational Artificial Intelligence Lab, advised by Professor Guy Van den Broeck. Her research focuses on probabilistic reasoning with tractable probabilistic models, especially with applications in verifying and learning robust and fair decision-making systems.