Variable selection using MM algorithms

From MaRDI portal
Publication:2583414

DOI: 10.1214/009053605000000200
zbMATH Open: 1078.62028
arXiv: math/0508278
OpenAlex: W3101767848
Wikidata: Q43138735 (Scholia: Q43138735)
MaRDI QID: Q2583414 (FDO: Q2583414)


Authors: David R. Hunter, Runze Li


Publication date: 16 January 2006

Published in: The Annals of Statistics

Abstract: Variable selection is fundamental to high-dimensional statistical modeling. Many variable selection techniques may be implemented by maximum penalized likelihood using various penalty functions. Optimizing the penalized likelihood function is often challenging because it may be nondifferentiable and/or nonconcave. This article proposes a new class of algorithms for finding a maximizer of the penalized likelihood for a broad class of penalty functions. These algorithms operate by perturbing the penalty function slightly to render it differentiable, then optimizing this differentiable function using a minorize-maximize (MM) algorithm. MM algorithms are useful extensions of the well-known class of EM algorithms, a fact that allows us to analyze the local and global convergence of the proposed algorithm using some of the techniques employed for EM algorithms. In particular, we prove that when our MM algorithms converge, they must converge to a desirable point; we also discuss conditions under which this convergence may be guaranteed. We exploit the Newton-Raphson-like aspect of these algorithms to propose a sandwich estimator for the standard errors of the estimators. Our method performs well in numerical tests.
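The MM iteration described in the abstract can be sketched for the simplest case, penalized least squares with an L1 penalty: the nondifferentiable term |b_j| is majorized by a quadratic whose denominator is perturbed by a small eps, yielding a sequence of reweighted ridge solves. This is a minimal illustration under those assumptions, not the paper's full algorithm; the function name `mm_lasso`, the fixed `eps`, and the stopping rule are choices made here for the sketch.

```python
import numpy as np

def mm_lasso(X, y, lam, eps=1e-6, n_iter=500):
    """MM sketch for 0.5*||y - X b||^2 + lam * sum(|b_j|).

    Uses the quadratic majorizer |b| <= (b^2 + b0^2) / (2*|b0|),
    with |b0| replaced by |b0| + eps -- a small perturbation that
    keeps the surrogate differentiable and the update well defined.
    Each iteration reduces to a ridge-type linear solve.
    """
    XtX, Xty = X.T @ X, X.T @ y
    b = np.linalg.lstsq(X, y, rcond=None)[0]  # start at OLS
    for _ in range(n_iter):
        # Reweighted ridge penalty from the current iterate.
        D = np.diag(lam / (np.abs(b) + eps))
        b_new = np.linalg.solve(XtX + D, Xty)
        if np.max(np.abs(b_new - b)) < 1e-10:
            return b_new
        b = b_new
    return b

def objective(X, y, b, lam):
    """The (unperturbed) penalized least-squares objective."""
    return 0.5 * np.sum((y - X @ b) ** 2) + lam * np.sum(np.abs(b))
```

By the MM (majorize-minimize) property, each iteration drives the perturbed objective downhill, which is the monotonicity underlying the convergence analysis mentioned in the abstract; coefficients belonging to irrelevant predictors are shrunk toward zero rather than set exactly to zero, one consequence of the eps-perturbation.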


Full work available at URL: https://arxiv.org/abs/math/0508278
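The abstract also mentions a sandwich estimator for standard errors, exploiting the Newton-Raphson-like form of the update. For the penalized least-squares sketch above, one common form of such an estimator brackets the model curvature X'X between two inverses of the penalized curvature; the helper below is an assumed illustration of that structure for the L1 case, not the paper's exact formula.

```python
import numpy as np

def sandwich_se(X, y, b, lam, eps=1e-6):
    """Sandwich-type standard errors for penalized least squares.

    Assumed form (illustrative):
        cov = sigma^2 * (X'X + S)^{-1} X'X (X'X + S)^{-1}
    where S = diag(lam / (|b_j| + eps)) is the curvature contributed
    by the perturbed L1 penalty at the estimate. Components shrunk
    to (near) zero get a huge S_jj and hence a negligible SE.
    """
    n, p = X.shape
    resid = y - X @ b
    sigma2 = resid @ resid / (n - p)  # residual variance estimate
    XtX = X.T @ X
    S = np.diag(lam / (np.abs(b) + eps))
    A_inv = np.linalg.inv(XtX + S)
    cov = sigma2 * A_inv @ XtX @ A_inv
    return np.sqrt(np.diag(cov))
```

A sanity check on the design: with lam = 0 the penalty curvature S vanishes and the formula collapses to the classical OLS covariance sigma^2 * (X'X)^{-1}.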






This page was built for publication: Variable selection using MM algorithms
