Diana Cai
PhD Candidate
Princeton University
Artificial Intelligence
Machine Learning and Statistics
Finite mixture models do not reliably learn the number of components
Scientists and engineers are often interested in learning the number of subpopulations (or components) present in a data set. A common suggestion is to use a finite mixture model (FMM) with a prior on the number of components. Past work has shown the resulting FMM component-count posterior is consistent; that is, the posterior concentrates on the true generating number of components. But existing results crucially depend on the assumption that the component likelihoods are perfectly specified. In practice, this assumption is unrealistic, and empirical evidence suggests that the FMM posterior on the number of components is sensitive to the likelihood choice. In this paper, we add rigor to data-analysis folk wisdom by proving that under even the slightest model misspecification, the FMM component-count posterior diverges: the posterior probability of any particular finite number of latent components converges to 0 in the limit of infinite data. We illustrate practical consequences of our theory on simulated and real data sets.
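The phenomenon in the abstract can be previewed with a minimal simulation sketch. This is not the paper's method: it uses scikit-learn's GaussianMixture with BIC-based selection of K as a rough frequentist stand-in for the Bayesian component-count posterior, and a heavy-tailed (Student-t) two-component generating process as the misspecification. All names and parameters below are illustrative assumptions. Under these assumptions, the selected number of components tends to grow with the sample size, mirroring the divergence result.

    # Minimal sketch (assumed setup, not the paper's experiment):
    # data come from a 2-component location mixture with t_5 noise, so no
    # finite Gaussian mixture matches it exactly; BIC-selected K is used
    # as a crude proxy for the component-count posterior mode.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)

    def sample_misspecified(n):
        # True generating process: 2 components at -2 and +2,
        # but with heavy-tailed Student-t(5) noise instead of Gaussian.
        z = rng.integers(0, 2, size=n)
        means = np.array([-2.0, 2.0])
        return (means[z] + rng.standard_t(df=5, size=n)).reshape(-1, 1)

    for n in [200, 2000, 20000]:
        x = sample_misspecified(n)
        bics = [GaussianMixture(n_components=k, random_state=0).fit(x).bic(x)
                for k in range(1, 11)]
        print(f"n = {n:6d}: selected K = {int(np.argmin(bics)) + 1}")

As the abstract's theory predicts for the posterior itself, no fixed finite K remains the preferred fit as n grows: the extra components are recruited to patch the mismatch between the Gaussian component likelihood and the heavy-tailed data.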
Diana Cai is a Ph.D. candidate in computer science at Princeton University. Her research spans the areas of machine learning and statistics, and focuses on developing robust and scalable methods for probabilistic modeling and inference, with an emphasis on flexible, interpretable, and nonparametric machine learning methods. Previously, Diana obtained an A.B. in computer science and statistics from Harvard University, an M.S. in statistics from the University of Chicago, and an M.A. in computer science from Princeton University. Her research is supported in part by a Google PhD Fellowship in Machine Learning.
Personal home page