An inexact successive quadratic approximation method for L-1 regularized optimization
Abstract
We study a Newton-like method for the minimization of an objective function that is the sum of a smooth convex function and an \(\ell_1\) regularization term. This method, which is sometimes referred to in the literature as a proximal Newton method, computes a step by minimizing a piecewise quadratic model of the objective function. In order to make this approach efficient in practice, it is imperative to perform this inner minimization inexactly. In this paper, we give inexactness conditions that guarantee global convergence and that can be used to control the local rate of convergence of the iteration. Our inexactness conditions are based on a semi-smooth function that represents a (continuous) measure of the optimality conditions of the problem, and that embodies the soft-thresholding iteration. We give careful consideration to the algorithm employed for the inner minimization, and report numerical results on two test sets originating in machine learning.
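The abstract describes the method only in prose; the sketch below is a hand-written illustration (not the authors' code) of the two ingredients it names: the soft-thresholding operator, whose fixed-point residual serves as the continuous optimality measure, and an inexact proximal Newton outer loop whose piecewise quadratic subproblem is solved by ISTA until that residual falls below a forcing tolerance. The toy data, step sizes, forcing constant 0.1, and all function names are illustrative assumptions, not values from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def optimality_measure(x, grad, lam, alpha=1.0):
    """Norm of the ISTA fixed-point residual
    ||x - S(x - alpha*grad, alpha*lam)||; it is zero exactly at a minimizer
    and is continuous in x, mirroring the measure described in the abstract."""
    return np.linalg.norm(x - soft_threshold(x - alpha * grad, alpha * lam))

def inexact_prox_newton(grad_g, hess_g, lam, x0, tol=1e-8, max_outer=50):
    """Minimize g(x) + lam*||x||_1. The piecewise quadratic model
    grad'd + 0.5*d'H d + lam*||x + d||_1 is minimized inexactly by ISTA,
    stopped once its own residual drops below a fraction of the outer
    residual -- a stand-in for the paper's inexactness conditions."""
    x = x0.copy()
    for _ in range(max_outer):
        g, H = grad_g(x), hess_g(x)
        res = optimality_measure(x, g, lam)
        if res <= tol:
            break
        eta = 0.1 * res                   # illustrative forcing term
        L = np.linalg.eigvalsh(H)[-1]     # Lipschitz constant of the model gradient
        z = x.copy()
        for _ in range(1000):             # inner ISTA iterations on the model
            model_grad = g + H @ (z - x)
            z = soft_threshold(z - model_grad / L, lam / L)
            if optimality_measure(z, g + H @ (z - x), lam) <= eta:
                break
        x = z                             # unit step; a line search belongs here in practice
    return x

# Toy usage on a lasso-type problem g(x) = 0.5*||A x - b||^2 (illustrative data).
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
x_hat = inexact_prox_newton(lambda x: A.T @ (A @ x - b),
                            lambda x: A.T @ A, lam=0.5, x0=np.zeros(5))
print("approximate minimizer:", np.round(x_hat, 4))
```

The inner stopping rule ties the model residual to the outer residual in the spirit of inexact Newton forcing sequences; the paper's actual conditions are more refined and are also designed to control the local rate of convergence.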
Recommendations
- Inexact successive quadratic approximation for regularized optimization
- Semismooth Newton and quasi-Newton methods in weighted \(\ell^1\)-regularization
- An algorithm for quadratic \(\ell_1\)-regularized optimization with a flexible active-set strategy
- A reduced-space algorithm for minimizing \(\ell_1\)-regularized convex functions
- A family of second-order methods for convex \(\ell_1\)-regularized optimization
Cites work
- scientific article; zbMATH DE number 3381785
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A comparison of optimization methods and software for large-scale L1-regularized linear classification
- A family of second-order methods for convex \(\ell_1\)-regularized optimization
- A semismooth Newton method with multidimensional filter globalization for \(l_1\)-optimization
- An improved GLMNET for L1-regularized logistic regression
- An inexact interior point method for \(L_{1}\)-regularized sparse covariance selection
- Convergence of inexact Newton methods for generalized equations
- Cost Approximation: A Unified Framework of Descent Algorithms for Nonlinear Programs
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Inexact Newton Methods
- Inexact and accelerated proximal point algorithms
- Inexact coordinate descent: complexity and preconditioning
- Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data
- Nonlinear programming and variational inequality problems. A unified approach
- Numerical Optimization
- Representations of quasi-Newton matrices and their use in limited memory methods
- Sample size selection in optimization methods for machine learning
- Templates for convex cone problems with applications to sparse signal recovery
Cited in (49)
- Efficient proximal subproblem solvers for a nonsmooth trust-region method
- Inexact successive quadratic approximation for regularized optimization
- Descentwise inexact proximal algorithms for smooth optimization
- Accelerating inexact successive quadratic approximation for regularized optimization through manifold identification
- An inexact successive quadratic approximation method for a class of difference-of-convex optimization problems
- An efficient proximal block coordinate homotopy method for large-scale sparse least squares problems
- An inexact coordinate descent method for the weighted \(l_{1}\)-regularized convex optimization problem
- Second order semi-smooth proximal Newton methods in Hilbert spaces
- Inexact proximal stochastic second-order methods for nonconvex composite optimization
- A globally convergent proximal Newton-type method in nonsmooth convex optimization
- An inexact quasi-Newton algorithm for large-scale \(\ell_1\) optimization with box constraints
- A proximal stochastic quasi-Newton algorithm with dynamical sampling and stochastic line search
- An active set Newton-CG method for \(\ell_1\) optimization
- Global complexity analysis of inexact successive quadratic approximation methods for regularized optimization under mild assumptions
- A unified adaptive tensor approximation scheme to accelerate composite convex optimization
- An active-set proximal quasi-Newton algorithm for \(\ell_1\)-regularized minimization over a sphere constraint
- A highly efficient semismooth Newton augmented Lagrangian method for solving lasso problems
- An iterative reduction FISTA algorithm for large-scale LASSO
- A family of second-order methods for convex \(\ell_1\)-regularized optimization
- Optimization methods for large-scale machine learning
- Globalized inexact proximal Newton-type methods for nonconvex composite functions
- Local convergence analysis of an inexact trust-region method for nonsmooth optimization
- An inexact variable metric proximal point algorithm for generic quasi-Newton acceleration
- Semismooth Newton and quasi-Newton methods in weighted \(\ell^1\)-regularization
- Inexact proximal Newton methods for self-concordant functions
- Linear convergence of inexact descent method and inexact proximal gradient algorithms for lower-order regularization problems
- An active-set proximal-Newton algorithm for \(\ell_1\) regularized optimization problems with box constraints
- Inexact proximal Newton methods in Hilbert spaces
- A reduced-space algorithm for minimizing \(\ell_1\)-regularized convex functions
- Stochastic proximal quasi-Newton methods for non-convex composite optimization
- FaRSA for \(\ell_1\)-regularized convex optimization: local convergence and numerical experience
- Fused multiple graphical lasso
- IMRO: A proximal quasi-Newton method for solving \(\ell_1\)-regularized least squares problems
- Stochastic proximal gradient method for \(\ell_1\) regularized optimization over a sphere
- An inexact regularized proximal Newton method without line search
- Proximal quasi-Newton methods for regularized convex optimization with linear and accelerated sublinear convergence rates
- Performance of first- and second-order methods for \(\ell_1\)-regularized least squares problems
- Concave Likelihood-Based Regression with Finite-Support Response Variables
- Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria
- Adaptive quadratically regularized Newton method for Riemannian optimization
- A family of inexact SQA methods for non-smooth convex minimization with provable convergence guarantees based on the Luo-Tseng error bound property
- Inexact proximal DC Newton-type method for nonconvex composite functions
- Sub-sampled Newton methods
- A VMiPG method for composite optimization with nonsmooth term having no closed-form proximal mapping
- Efficient evaluation of scaled proximal operators
- A flexible coordinate descent method
- An inexact semismooth Newton method on Riemannian manifolds with application to duality-based total variation denoising
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- Practical inexact proximal quasi-Newton method with global complexity analysis