A Study of Generalization Metrics for Natural Language Processing: Correlational Analysis and a Simpson's Paradox

Raguvir Kunani

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2022-71

May 11, 2022

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-71.pdf

A predictive model's utility lies in its ability to generalize to data it has not seen. Unfortunately, it is difficult to reliably measure a model's ability to generalize to unseen data, since doing so requires reasoning about the model's interactions with unknown environments. Generalization of deep learning models has been studied extensively for years, but there has recently been increased interest in generalization metrics that predict the generalization of deep learning models.

While prior work on generalization metrics has been dominated by computer vision, in this work we conduct one of the first analyses of generalization metrics in natural language processing (NLP). We study 36 generalization metrics spanning a range of motivations and theories, with the goal of understanding how well each metric predicts the generalization of models common in NLP. We focus in particular on shape metrics (generalization metrics derived from the shape of the empirical distribution of eigenvalues of weight correlation matrices) and are among the first to consider out-of-distribution generalization when evaluating the effectiveness of generalization metrics.
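
To make the notion of a shape metric concrete, the sketch below (in Python with NumPy) computes the empirical spectral distribution (ESD) of a layer's weight correlation matrix and fits a simple power-law tail exponent to it. The random weight matrix and the Hill-style estimator are illustrative assumptions; they are not necessarily among the 36 metrics studied in this report.

# Minimal sketch of one possible shape metric: a power-law tail exponent
# fitted to the empirical spectral distribution (ESD) of a layer's weight
# correlation matrix. Illustrative only; the report's metrics may differ.
import numpy as np

def esd(weight):
    """Eigenvalues of the weight correlation matrix X = W^T W / N."""
    n = weight.shape[0]
    correlation = weight.T @ weight / n
    return np.linalg.eigvalsh(correlation)

def power_law_alpha(eigenvalues, xmin_quantile=0.5):
    """Hill-style estimate of the power-law exponent of the ESD's tail."""
    eigenvalues = np.sort(eigenvalues[eigenvalues > 0])
    xmin = np.quantile(eigenvalues, xmin_quantile)
    tail = eigenvalues[eigenvalues >= xmin]
    return 1.0 + len(tail) / np.sum(np.log(tail / xmin))

# A random Gaussian matrix stands in for a trained layer's weights here,
# only so the sketch runs on its own.
rng = np.random.default_rng(0)
W = rng.normal(size=(768, 768)) / np.sqrt(768)
print(power_law_alpha(esd(W)))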

We find that shape metrics are a promising category of generalization metrics: they are the best of the metrics we consider at predicting generalization performance throughout training, and they show characteristics of being "ideal" generalization metrics. Interestingly, many of the generalization metrics we consider exhibit behavior reminiscent of Simpson's paradox when related to generalization performance. Moreover, the generalization metrics we consider are generally robust to changes in data distribution, although there are signs that this robustness is limited.
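
As a purely hypothetical illustration of the Simpson's-paradox-like behavior mentioned above, the sketch below builds synthetic data in which a metric correlates positively with the generalization gap within each group of models (e.g., models of the same depth) but negatively once all groups are pooled. The grouping variable and all numbers are assumptions for illustration, not results from the report.

# Synthetic illustration of a Simpson's-paradox-like reversal in the
# correlation between a generalization metric and the generalization gap.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
groups = []
for offset in (0.0, 2.0, 4.0):  # e.g., three hypothetical model depths
    x = rng.uniform(0.0, 1.0, 50)
    metric = x + offset                               # metric shifts up across groups
    gap = 0.8 * x - offset + rng.normal(0, 0.05, 50)  # gap rises within a group, falls across groups
    groups.append((metric, gap))

# Within each group, the metric and the gap are positively correlated ...
for i, (m, g) in enumerate(groups):
    rho, _ = spearmanr(m, g)
    print(f"group {i}: rho = {rho:+.2f}")

# ... but pooling all groups reverses the sign of the correlation.
m_all = np.concatenate([m for m, _ in groups])
g_all = np.concatenate([g for _, g in groups])
rho_pooled, _ = spearmanr(m_all, g_all)
print(f"pooled : rho = {rho_pooled:+.2f}")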

Advisor: Joseph Gonzalez


BibTeX citation:

@mastersthesis{Kunani:EECS-2022-71,
    Author= {Kunani, Raguvir},
    Title= {A Study of Generalization Metrics for Natural Language Processing: Correlational Analysis and a Simpson's Paradox},
    School= {EECS Department, University of California, Berkeley},
    Year= {2022},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-71.html},
    Number= {UCB/EECS-2022-71},
    Abstract= {A predictive model's utility lies in its ability to generalize to data it has not seen. Unfortunately, it is difficult to reliably measure a model's ability to generalize to unseen data, since doing so requires reasoning about the model's interactions with unknown environments. Generalization of deep learning models has been studied extensively for years, but there has recently been increased interest in generalization metrics that predict the generalization of deep learning models.

While prior work on generalization metrics has been dominated by computer vision, in this work we conduct one of the first analyses of generalization metrics in natural language processing (NLP). We study 36 generalization metrics spanning a range of motivations and theories, with the goal of understanding how well each metric predicts the generalization of models common in NLP. We focus in particular on shape metrics (generalization metrics derived from the shape of the empirical distribution of eigenvalues of weight correlation matrices) and are among the first to consider out-of-distribution generalization when evaluating the effectiveness of generalization metrics.

We find that shape metrics are a promising category of generalization metrics: they are the best of the metrics we consider at predicting generalization performance throughout training, and they show characteristics of being "ideal" generalization metrics. Interestingly, many of the generalization metrics we consider exhibit behavior reminiscent of Simpson's paradox when related to generalization performance. Moreover, the generalization metrics we consider are generally robust to changes in data distribution, although there are signs that this robustness is limited.},
}

EndNote citation:

%0 Thesis
%A Kunani, Raguvir 
%T A Study of Generalization Metrics for Natural Language Processing: Correlational Analysis and a Simpson's Paradox
%I EECS Department, University of California, Berkeley
%D 2022
%8 May 11
%@ UCB/EECS-2022-71
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-71.html
%F Kunani:EECS-2022-71