scientific article; zbMATH DE number 7415078
From MaRDI portal
Antoine Dedieu, Rahul Mazumder, Hussein Hazimeh
Publication date: 27 October 2021
Full work available at URL: https://arxiv.org/abs/2001.06471
Title: Learning sparse classifiers: continuous and mixed integer optimization perspectives
Related Items
- Grouped variable selection with discrete optimization: computational and statistical perspectives
- Sparse optimization via vector \(k\)-norm and DC programming with an application to feature selection for support vector machines
- \(2 \times 2\)-convexifications for convex quadratic optimization with indicator variables
- Sparse quantile regression
- An automated exact solution framework towards solving the logistic regression best subset selection problem
- Sparse classification: a scalable discrete optimization perspective
- Sparse regression at scale: branch-and-bound rooted in first-order optimization
- Ideal formulations for constrained convex optimization problems with indicator variables
Uses Software
Cites Work
- Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection
- Nearly unbiased variable selection under minimax concave penalty
- Best subset selection via a modern optimization lens
- Feature subset selection for logistic regression via mixed integer optimization
- Gradient methods for minimizing composite functions
- Iterative hard thresholding methods for \(l_0\) regularized convex cone programming
- Supersparse linear integer models for optimized medical scoring systems
- Statistics for high-dimensional data. Methods, theory and applications.
- Mixed integer nonlinear programming. Selected papers based on the presentations at the IMA workshop mixed-integer nonlinear optimization: Algorithmic advances and applications, Minneapolis, MN, USA, November 17--21, 2008
- Iterative hard thresholding for compressed sensing
- Best subset selection, persistence in high-dimensional statistical learning and optimization under \(l_1\) constraint
- High-dimensional Ising model selection using \(\ell _{1}\)-regularized logistic regression
- Logistic regression: from art to science
- How well can we estimate a sparse vector?
- An extended Newton-type algorithm for \(\ell_2\)-regularized sparse logistic regression and its efficiency for classifying large-scale datasets
- Sparse high-dimensional regression: exact scalable algorithms and phase transitions
- Sparse regression: scalable algorithms and empirical performance
- Best subset, forward stepwise or Lasso? Analysis and recommendations based on extensive comparisons
- Coordinate descent algorithms
- Simultaneous analysis of Lasso and Dantzig selector
- High-dimensional generalized linear models and the lasso
- \(\ell_1\)-penalized quantile regression in high-dimensional sparse models
- Aggregation for Gaussian regression
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Classifiers of support vector machine type with \(\ell_1\) complexity regularization
- Sparsity Constrained Nonlinear Optimization: Optimality Conditions and Algorithms
- Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
- Random coordinate descent methods for \(\ell_0\) regularized convex optimization
- Robust 1-bit Compressed Sensing and Sparse Logistic Regression: A Convex Programming Approach
- SparseNet: Coordinate Descent With Nonconvex Penalties
- Sparse Approximate Solutions to Linear Systems
- Fast Best Subset Selection: Coordinate Descent and Local Combinatorial Optimization Algorithms
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
- Greedy Sparsity-Constrained Optimization
- On the Convergence of Block Coordinate Descent Type Methods
- Mixed-integer nonlinear optimization
- Variable Selection for Support Vector Machines in Moderately High Dimensions
- The elements of statistical learning. Data mining, inference, and prediction
- Convergence of a block coordinate descent method for nondifferentiable minimization
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers