Rising Stars 2020

Aditi Raghunathan

PhD Candidate

Stanford University


Areas of Interest

  • Artificial Intelligence

Poster

Surprises in Robust Machine Learning

Abstract

Standard machine learning produces models that are highly accurate on average but degrade dramatically when the test distribution deviates from the training distribution. While one can train robust models, this often comes at the expense of standard accuracy (on the training distribution). We study this tradeoff in two settings: adversarial training to be robust to perturbations, and upweighting minority groups to be robust to subpopulation shifts. We create simple examples that highlight generalization issues as a major source of this tradeoff. For adversarial examples, we show that even augmenting with correctly annotated data to promote robustness can produce less accurate models, but we develop a simple method, robust self-training, that mitigates this tradeoff using unlabeled data. For minority groups, we show that overparametrization of models can hurt accuracy on minority groups even as it improves standard accuracy. These results suggest that the "more data" and "bigger models" strategies that work well in the standard setting, where train and test distributions are close, need not work in out-of-domain settings.
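
As a rough illustration of the robust self-training method mentioned in the abstract, the sketch below first pseudo-labels unlabeled data with a standard model and then adversarially trains a second model on the combined data. This is a minimal PyTorch sketch under assumed conventions: the PGD attack, the function names (pgd_attack, robust_self_training), and all hyperparameters are illustrative stand-ins, not the poster's exact procedure.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.1, alpha=0.02, steps=5):
    # Projected gradient ascent: find an L-infinity perturbation of x
    # (within radius eps) that increases the classification loss.
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back
        x_adv.requires_grad_(True)
    return x_adv.detach()

def robust_self_training(model_std, model_rob, x_lab, y_lab, x_unl,
                         epochs=20, lr=1e-3):
    # Step 1: pseudo-label the unlabeled inputs with the standard model.
    with torch.no_grad():
        y_pseudo = model_std(x_unl).argmax(dim=1)
    x_all = torch.cat([x_lab, x_unl])
    y_all = torch.cat([y_lab, y_pseudo])
    # Step 2: adversarially train on labeled + pseudo-labeled data
    # (full-batch updates here purely for brevity).
    opt = torch.optim.Adam(model_rob.parameters(), lr=lr)
    for _ in range(epochs):
        x_adv = pgd_attack(model_rob, x_all, y_all)
        loss = F.cross_entropy(model_rob(x_adv), y_all)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model_rob

# Illustrative usage with random data and tiny linear models.
torch.manual_seed(0)
model_std = torch.nn.Linear(10, 2)  # assumed already trained normally
model_rob = torch.nn.Linear(10, 2)
x_lab, y_lab = torch.randn(64, 10), torch.randint(0, 2, (64,))
x_unl = torch.randn(256, 10)
robust_self_training(model_std, model_rob, x_lab, y_lab, x_unl)

The design point, per the abstract, is that the unlabeled data is what mitigates the robustness-accuracy tradeoff; the standard model merely supplies labels for it.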

Bio

Aditi Raghunathan is a fifth-year PhD student at Stanford University, advised by Percy Liang. She is interested in building robust machine learning systems that can be deployed in the wild. She is a recipient of the Open Philanthropy AI Fellowship and the Google PhD Fellowship in Machine Learning.
