Best subset selection via a modern optimization lens
DOI: 10.1214/15-AOS1388 · zbMath: 1335.62115 · arXiv: 1507.03133 · OpenAlex: W2963351303 · MaRDI QID: Q282479
Dimitris J. Bertsimas, Angela King, Rahul Mazumder
Publication date: 12 May 2016
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1507.03133
Keywords: algorithms; global optimization; mixed integer programming; least absolute deviation; discrete optimization; lasso; \(\ell_{0}\)-constrained minimization; best subset selection; sparse linear regression
MSC classifications: Ridge regression; shrinkage estimators (Lasso) (62J07) · Nonparametric robustness (62G35) · Linear regression; mixed models (62J05) · Mixed integer programming (90C11) · Nonconvex programming, global optimization (90C26) · Combinatorial optimization (90C27)
Related Items (only showing first 100 items)
Uses Software
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- A unified approach to model selection and sparse recovery using regularized least squares
- Smooth minimization of non-smooth functions
- The Adaptive Lasso and Its Oracle Properties
- Best subset selection via a modern optimization lens
- Gradient methods for minimizing composite functions
- On constrained and regularized high-dimensional regression
- Least quantile regression via modern optimization
- Statistics for high-dimensional data. Methods, theory and applications.
- Iterative hard thresholding for compressed sensing
- Iterative thresholding for sparse approximations
- Enhancing sparsity by reweighted \(\ell _{1}\) minimization
- Near-ideal model selection by \(\ell _{1}\) minimization
- Algorithm for cardinality-constrained quadratic optimization
- Best subset selection, persistence in high-dimensional statistical learning and optimization under \(l_1\) constraint
- The restricted isometry property and its implications for compressed sensing
- One-step sparse estimates in nonconcave penalized likelihood models
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Introductory lectures on convex optimization. A basic course.
- Persistence in high-dimensional linear predictor-selection and the virtue of overparametrization
- Computational study of a family of mixed-integer quadratic programming problems
- Graph puzzles, homotopy, and the alternating group
- Asymptotics for Lasso-type estimators.
- Least angle regression. (With discussion)
- A brief history of linear and mixed-integer programming computation
- The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
- Simultaneous analysis of Lasso and Dantzig selector
- Aggregation for Gaussian regression
- Pathwise coordinate optimization
- High-dimensional graphs and variable selection with the Lasso
- Asymptotic Equivalence of Regularization Methods in Thresholded Parameter Space
- Certifying the Restricted Isometry Property is Hard
- SparseNet: Coordinate Descent With Nonconvex Penalties
- Just relax: convex programming methods for identifying sparse signals in noise
- Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
- Atomic Decomposition by Basis Pursuit
- Ideal spatial adaptation by wavelet shrinkage
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Uncertainty principles and ideal atomic decomposition
- A Statistical View of Some Chemometrics Regression Tools
- Sparse Approximate Solutions to Linear Systems
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
- Nonconcave Penalized Likelihood With NP-Dimensionality
- Optimally sparse representation in general (nonorthogonal) dictionaries via \(\ell_1\) minimization
- For most large underdetermined systems of linear equations the minimal 𝓁1‐norm solution is also the sparsest solution
- A general theory of concave regularization for high-dimensional sparse estimation problems