Singularity, misspecification and the convergence rate of EM

From MaRDI portal
Publication:1996764




Abstract: A line of recent work has analyzed the behavior of the Expectation-Maximization (EM) algorithm in the well-specified setting, in which the population likelihood is locally strongly concave around its maximizing argument. Examples include suitably separated Gaussian mixture models and mixtures of linear regressions. We consider over-specified settings, in which the number of fitted components is larger than the number of components in the true distribution. Such misspecified settings can lead to singularity in the Fisher information matrix, and moreover, the maximum likelihood estimator based on n i.i.d. samples in d dimensions can have a non-standard O((d/n)^{1/4}) rate of convergence. Focusing on the simple setting of two-component mixtures fit to a d-dimensional Gaussian distribution, we study the behavior of the EM algorithm both when the mixture weights are different (unbalanced case) and when they are equal (balanced case). Our analysis reveals a sharp distinction between these two cases: in the former, the EM algorithm converges geometrically to a point at Euclidean distance O((d/n)^{1/2}) from the true parameter, whereas in the latter case, the convergence rate is exponentially slower, and the fixed point has a much lower O((d/n)^{1/4}) accuracy. Analysis of this singular case requires the introduction of some novel techniques: in particular, we make use of a careful form of localization in the associated empirical process, and develop a recursive argument to progressively sharpen the statistical rate.
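The balanced singular case described above can be illustrated with a minimal sketch: EM for the over-specified symmetric mixture 0.5 N(theta, I) + 0.5 N(-theta, I), fit to data whose true distribution is N(0, I), so the true parameter is theta = 0. The function name and the symmetric location-only parameterization are illustrative choices, not the authors' code; the E-step responsibility and M-step update follow from the standard EM recursion for this model.

```python
import numpy as np

def em_symmetric_mixture(X, theta0, n_iters=200):
    """EM for the over-specified model 0.5*N(theta, I) + 0.5*N(-theta, I).

    In the balanced singular case the data X come from N(0, I_d), so the
    true parameter is theta = 0 and the EM fixed point is expected to sit
    at distance on the order of (d/n)^{1/4} from it (illustrative sketch).
    """
    theta = theta0.astype(float).copy()
    for _ in range(n_iters):
        # E-step: posterior responsibility of the +theta component.
        # With identity covariances and equal weights,
        # p(z = + | x) = sigmoid(2 * x . theta).
        w = 1.0 / (1.0 + np.exp(-2.0 * (X @ theta)))
        # M-step: responsibility-weighted average under the +/- symmetry,
        # theta <- (1/n) * sum_i (2 w_i - 1) x_i.
        theta = X.T @ (2.0 * w - 1.0) / X.shape[0]
    return theta

# Demo: fit the over-specified mixture to samples from N(0, I_2).
rng = np.random.default_rng(0)
d, n = 2, 2000
X = rng.standard_normal((n, d))
theta_hat = em_symmetric_mixture(X, np.ones(d))
# theta_hat is expected to be small but nonzero, stalling near the
# (d/n)^{1/4} noise floor rather than converging geometrically to 0.
```

Starting from a point of unit-scale norm, the iterates shrink toward zero only sublinearly, which is the exponentially slower convergence the abstract contrasts with the unbalanced case.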



