Support recovery without incoherence: a case for nonconvex regularization (Q682289)
From MaRDI portal
scientific article
Language | Label | Description | Also known as
---|---|---|---
English | Support recovery without incoherence: a case for nonconvex regularization | scientific article |
Statements
Support recovery without incoherence: a case for nonconvex regularization (English)
14 February 2018
A new primal-dual witness proof framework is given that can be used to establish variable selection consistency and \(\ell_\infty\)-bounds for sparse regression problems, even when the loss function and the regularizer are nonconvex. The analysis applies to regularized \(M\)-estimators; a sketch of the formulation is given after this paragraph. From a statistical perspective, the purpose of solving such a problem is to estimate the vector that minimizes the expected loss; this population-level minimizer is assumed to be unique and independent of the sample size. Conditions are developed under which stationary points of the regularized \(M\)-estimation problem are consistent estimators of this population-level minimizer. In particular, it is proved that for certain nonconvex regularizers whose derivative vanishes away from the origin, any stationary point can be used to recover the support without the incoherence conditions typically required by \(\ell_1\)-based methods. Numerical examples are given to support these results.
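As a minimal sketch with assumed notation (not quoted from this record), the regularized \(M\)-estimation problem referred to above takes the form
\[
  \widehat{\beta} \in \arg\min_{\|\beta\|_1 \le R} \bigl\{ \mathcal{L}_n(\beta) + \rho_\lambda(\beta) \bigr\},
\]
where \(\mathcal{L}_n\) is the empirical loss over \(n\) samples, \(\rho_\lambda\) is a (possibly nonconvex) regularizer, and \(R\) is the radius of a side constraint that keeps the program well defined despite nonconvexity; the statistical target is the population-level minimizer \(\beta^* = \arg\min_\beta \mathbb{E}[\mathcal{L}(\beta)]\). A standard example of a regularizer whose derivative vanishes away from the origin is the MCP penalty, with \(\rho_\lambda'(t) = \operatorname{sign}(t)\,\lambda\bigl(1 - |t|/(\gamma\lambda)\bigr)_+\), which is identically zero for \(|t| \ge \gamma\lambda\).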
\(M\)-estimator
Lasso
sparsity
nonconvex regularizer
high-dimensional statistics
variable selection