Scalable Algorithms for the Sparse Ridge Regression
Publication: 5148400
DOI: 10.1137/19M1245414 · zbMath: 1458.90489 · arXiv: 1806.03756 · MaRDI QID: Q5148400
Publication date: 4 February 2021
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1806.03756
Classification (MSC): Ridge regression; shrinkage estimators (Lasso) (62J07) · Mixed integer programming (90C11) · Stochastic programming (90C15)
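For context, a brief sketch of the problem the publication addresses. Assuming the standard sparse ridge regression formulation (best-subset selection with ridge shrinkage), which matches the MSC codes above but is not quoted verbatim from the paper:

\[
\min_{\beta \in \mathbb{R}^p} \; \|y - X\beta\|_2^2 + \frac{1}{\gamma}\,\|\beta\|_2^2 \quad \text{subject to} \quad \|\beta\|_0 \le k,
\]

where \(\|\beta\|_0\) counts the nonzero entries of \(\beta\), \(k\) is the sparsity budget, and \(\gamma > 0\) controls the ridge shrinkage. The cardinality constraint is what connects this problem to the mixed-integer (conic) programming literature listed under Related Items and Cites Work below.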
Related Items (16)
- The backbone method for ultra-high dimensional sparse machine learning
- Grouped variable selection with discrete optimization: computational and statistical perspectives
- HARFE: hard-ridge random feature expansion
- A new perspective on low-rank optimization
- A graph-based decomposition method for convex quadratic optimization with indicators
- Comparing solution paths of sparse quadratic minimization with a Stieltjes matrix
- Unnamed Item
- Supermodularity and valid inequalities for quadratic optimization with indicators
- Subset Selection and the Cone of Factor-Width-k Matrices
- Discussion of "Best subset, forward stepwise or Lasso? Analysis and recommendations based on extensive comparisons"
- A Unified Approach to Mixed-Integer Optimization Problems With Logical Constraints
- Strong formulations for conic quadratic optimization with indicator variables
- A Mixed-Integer Fractional Optimization Approach to Best Subset Selection
- Outlier Detection in Time Series via Mixed-Integer Conic Quadratic Optimization
- Sparse regression at scale: branch-and-bound rooted in first-order optimization
- Ideal formulations for constrained convex optimization problems with indicator variables
Uses Software
Cites Work
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Nearly unbiased variable selection under minimax concave penalty
- Best subset selection via a modern optimization lens
- Mixed integer second-order cone programming formulations for variable selection in linear regression
- A branch-and-cut decomposition algorithm for solving chance-constrained mathematical programs with finite support
- Nonanticipative duality, relaxations, and formulations for chance-constrained stochastic programs
- Statistics for high-dimensional data. Methods, theory and applications.
- User-friendly tail bounds for sums of random matrices
- Sparse regression using mixed norms
- Computing exact \(D\)-optimal designs by mixed integer second-order cone programming
- The restricted isometry property and its implications for compressed sensing
- SCAD-penalized regression in high-dimensional partially linear models
- Elastic-net regularization in learning theory
- Asymptotic optimality of the fast randomized versions of GCV and \(C_L\) in ridge regression and regularization
- Computational study of a family of mixed-integer quadratic programming problems
- Convex programming for disjunctive convex optimization
- Sparse high-dimensional regression: exact scalable algorithms and phase transitions
- Sparse learning via Boolean relaxations
- Asymptotic properties of bridge estimators in sparse high-dimensional regression models
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Perspective cuts for a class of convex 0-1 mixed integer programs
- Lectures on Modern Convex Optimization
- Perspective Reformulation and Applications
- Chance-Constrained Binary Packing Problems
- Covering Linear Programming with Violations
- A Constrained \(\ell_1\) Minimization Approach to Sparse Precision Matrix Estimation
- Ridge Regression and James-Stein Estimation: Review and Comments
- Ridge Regression in Practice
- Sparse Approximate Solutions to Linear Systems
- cmenet: A New Method for Bi-Level Variable Selection of Conditional Main Effects
- Robust Wasserstein profile inference and applications to machine learning
- Nonconcave Penalized Likelihood With NP-Dimensionality
- Adaptive Forward-Backward Greedy Algorithm for Learning Sparse Representations
- Regularization and Variable Selection Via the Elastic Net
- The Discrete Dantzig Selector: Estimating Sparse Linear Models via Mixed Integer Linear Optimization
- Convex Approximations of Chance Constrained Programs
- Adjustment of an Inverse Matrix Corresponding to a Change in One Element of a Given Matrix
- A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations
This page was built for publication: Scalable Algorithms for the Sparse Ridge Regression