A second-order method for convex \(\ell_1\)-regularized optimization with active-set prediction
DOI: 10.1080/10556788.2016.1138222
zbMATH Open: 1341.49039
arXiv: 1505.04315
OpenAlex: W279731301
MaRDI QID: Q2815550 (FDO: Q2815550)
Authors: Nitish Shirish Keskar, Jorge Nocedal, Figen Öztoprak, Andreas Wächter
Publication date: 29 June 2016
Published in: Optimization Methods \& Software
Full work available at URL: https://arxiv.org/abs/1505.04315
Recommendations
- A family of second-order methods for convex \(\ell _1\)-regularized optimization
- Gradient-based method with active set strategy for \(\ell _1\) optimization
- An algorithm for quadratic \(\ell_1\)-regularized optimization with a flexible active-set strategy
- A reduced-space algorithm for minimizing \(\ell_1\)-regularized convex functions
- On the convergence of an active-set method for \(\ell_1\) minimization
Keywords: active-set prediction; \(\ell_1\)-minimization; second-order method; active-set correction; subspace optimization
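The keywords above concern active-set prediction for \(\ell_1\)-regularized problems. As an illustrative sketch only (not the algorithm of this publication), the idea can be demonstrated with a proximal-gradient (ISTA) step on a lasso problem, where the coordinates sent to zero by soft-thresholding form the predicted active set. All names (`A`, `b`, `lam`, `step`) are hypothetical.

```python
# Illustrative sketch, not the method of the cited paper: one ISTA step for
#   min_x 0.5 * ||A x - b||^2 + lam * ||x||_1,
# with a simple active-set prediction: the coordinates that soft-thresholding
# maps to zero are predicted to be active (i.e. zero at the solution).
import numpy as np

def soft_threshold(v, t):
    """Prox of t*||.||_1: shrink each entry of v toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_step_with_active_set(A, b, x, lam, step):
    """One proximal-gradient step; returns new iterate and predicted active set."""
    grad = A.T @ (A @ x - b)                  # gradient of the smooth term
    x_new = soft_threshold(x - step * grad, step * lam)
    active = np.flatnonzero(x_new == 0.0)     # coordinates predicted at zero
    return x_new, active

# Small synthetic example with a sparse ground truth.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, 0.0, -2.0, 0.0])
b = A @ x_true

x = np.zeros(5)
step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of grad
for _ in range(200):
    x, active = ista_step_with_active_set(A, b, x, lam=0.5, step=step)
```

Second-order methods of the kind surveyed on this page use such a predicted active set to restrict a (quasi-)Newton or subspace step to the complementary free variables, which is where the speedup over first-order iterations comes from.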
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A fast algorithm for sparse reconstruction based on shrinkage, subspace optimization, and continuation
- Numerical Optimization
- Sample size selection in optimization methods for machine learning
- A coordinate gradient descent method for nonsmooth separable minimization
- De-noising by soft-thresholding
- Sparse Reconstruction by Separable Approximation
- Representations of quasi-Newton matrices and their use in limited memory methods
- Optimization with sparsity-inducing penalties
- On the convergence of an active-set method for \(\ell_1\) minimization
- An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- Efficiency of coordinate descent methods on huge-scale optimization problems
- Matrix-free interior point method for compressed sensing problems
- A feasible active set method for strictly convex quadratic problems with simple bounds
- Proximal Newton-type methods for minimizing composite functions
Cited In (22)
- A second-order method for strongly convex \(\ell _1\)-regularization problems
- A family of second-order methods for convex \(\ell _1\)-regularized optimization
- A highly efficient semismooth Newton augmented Lagrangian method for solving lasso problems
- Second-order orthant-based methods with enriched Hessian information for sparse \(\ell _1\)-optimization
- An efficient proximal block coordinate homotopy method for large-scale sparse least squares problems
- A limited-memory quasi-Newton algorithm for bound-constrained non-smooth optimization
- An active-set proximal quasi-Newton algorithm for \(\ell_1\)-regularized minimization over a sphere constraint
- On the convergence of an active-set method for \(\ell_1\) minimization
- A dimension reduction technique for large-scale structured sparse optimization problems with application to convex clustering
- An inexact quasi-Newton algorithm for large-scale \(\ell_1\) optimization with box constraints
- Identifying active manifolds in regularization problems
- An active set Newton-CG method for \(\ell_1\) optimization
- An algorithm for quadratic \(\ell_1\)-regularized optimization with a flexible active-set strategy
- A decomposition method for Lasso problems with zero-sum constraint
- Gradient-based method with active set strategy for \(\ell _1\) optimization
- Minimization over the \(\ell_1\)-ball using an active-set non-monotone projected gradient
- A fast conjugate gradient algorithm with active set prediction for \(\ell_1\) optimization
- A Subspace Acceleration Method for Minimization Involving a Group Sparsity-Inducing Regularizer
- An active-set proximal-Newton algorithm for \(\ell_1\) regularized optimization problems with box constraints
- A reduced-space algorithm for minimizing \(\ell_1\)-regularized convex functions
- FaRSA for \(\ell_1\)-regularized convex optimization: local convergence and numerical experience
- A subspace-accelerated split Bregman method for sparse data recovery with joint \(\ell_1\)-type regularizers