A second-order method for strongly convex \(\ell_1\)-regularization problems
Abstract: In this paper a robust second-order method is developed for the solution of strongly convex \(\ell_1\)-regularized problems. The main aim is to keep the proposed method as inexpensive as possible while still solving difficult problems efficiently. The proposed approach is a primal-dual Newton conjugate gradients (pdNCG) method. Convergence properties of pdNCG are studied and a worst-case iteration complexity is established. Numerical results are presented on synthetic sparse least-squares problems and real-world machine learning problems.
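The abstract describes pdNCG only at a high level. As a rough illustration of the underlying idea, the sketch below smooths the \(\ell_1\) term with a pseudo-Huber function and applies an inexact Newton method whose linear systems are solved by conjugate gradients. This is a simplified, assumption-laden sketch, not the authors' full primal-dual scheme; the function name `newton_cg_l1` and all parameter choices (`tau`, `mu`, step-size rule) are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def newton_cg_l1(A, b, tau=0.1, mu=1e-3, tol=1e-6, max_iter=50):
    """Sketch: solve min_x 0.5*||A x - b||^2 + tau*||x||_1 after
    replacing ||x||_1 with the pseudo-Huber smoothing
        psi_mu(x) = sum_i (sqrt(mu^2 + x_i^2) - mu),
    using damped Newton steps computed inexactly by CG."""
    m, n = A.shape
    x = np.zeros(n)

    def f(z):  # smoothed objective value
        return (0.5 * np.linalg.norm(A @ z - b) ** 2
                + tau * np.sum(np.sqrt(mu ** 2 + z ** 2) - mu))

    for _ in range(max_iter):
        s = np.sqrt(mu ** 2 + x ** 2)
        grad = A.T @ (A @ x - b) + tau * (x / s)
        if np.linalg.norm(grad) < tol:
            break
        d2 = tau * mu ** 2 / s ** 3  # diagonal Hessian of the smoothed term
        # Hessian-vector product H v = A^T A v + diag(d2) v, kept matrix-free
        H = LinearOperator((n, n), matvec=lambda v: A.T @ (A @ v) + d2 * v)
        p, _ = cg(H, -grad)          # inexact Newton direction via CG
        t, f0, slope = 1.0, f(x), grad @ p
        while f(x + t * p) > f0 + 1e-4 * t * slope:  # Armijo backtracking
            t *= 0.5
        x = x + t * p
    return x
```

A minimal usage example on a synthetic sparse least-squares instance, in the spirit of the experiments mentioned in the abstract:

```python
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(200)
x_hat = newton_cg_l1(A, b)
```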
Recommendations
- A family of second-order methods for convex \(\ell _1\)-regularized optimization
- The method of iterative second order regularization for convex problems of conditional minimization
- scientific article; zbMATH DE number 2208669
- Second-order optimality conditions and improved convergence results for regularization methods for cardinality-constrained optimization problems
- A second-order gradient method for convex minimization
- scientific article; zbMATH DE number 17725
- A second-order method for convex \(\ell_1\)-regularized optimization with active-set prediction
- A regularization smoothing method for second-order cone complementarity problem
- Performance of first- and second-order methods for \(\ell_1\)-regularized least squares problems
- scientific article; zbMATH DE number 653038
Cites work
- scientific article; zbMATH DE number 2107836
- scientific article; zbMATH DE number 781821
- scientific article; zbMATH DE number 845714
- scientific article; zbMATH DE number 6253925
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A Nonlinear Primal-Dual Method for Total Variation-Based Image Restoration
- A coordinate gradient descent method for nonsmooth separable minimization
- A mathematical view of interior-point methods in convex optimization
- A modified finite Newton method for fast solution of large scale linear SVMs
- Accelerated block-coordinate relaxation for regularized optimization
- Analysis of bounded variation penalty methods for ill-posed problems
- CoSaMP: Iterative signal recovery from incomplete and inaccurate samples
- Convergence of a block coordinate descent method for nondifferentiable minimization
- Coordinate descent algorithms for lasso penalized regression
- Coordinate descent method for large-scale L2-loss linear support vector machines
- Efficiency of coordinate descent methods on huge-scale optimization problems
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- Multiple View Geometry in Computer Vision
- NESTA: A fast and accurate first-order method for sparse recovery
- Parallel coordinate descent methods for big data optimization
- Templates for convex cone problems with applications to sparse signal recovery
Cited in (27)
- A robust computational framework for variational data assimilation of mean flows with sparse measurements corrupted by strong outliers
- Mathematical optimization in classification and regression trees
- A family of second-order methods for convex \(\ell _1\)-regularized optimization
- On sparse ensemble methods: an application to short-term predictions of the evolution of COVID-19
- A preconditioner for a primal-dual Newton conjugate gradient method for compressed sensing problems
- Two approaches for solving \(l_1\)-regularized least squares with application to truss topology design
- Linesearch Newton-CG methods for convex optimization with noise
- Optimal randomized classification trees
- Generalized conjugate gradient methods for \(\ell_1\) regularized convex quadratic programming with finite convergence
- A fast active set block coordinate descent algorithm for \(\ell_1\)-regularized least squares
- On the convergence rate of scaled gradient projection method
- Second-order orthant-based methods with enriched Hessian information for sparse \(\ell _1\)-optimization
- Visualizing data as objects by DC (difference of convex) optimization
- A flexible coordinate descent method
- Sparse approximations with interior point methods
- Visualizing proportions and dissimilarities by space-filling maps: a large neighborhood search approach
- Interior-point solver for convex separable block-angular problems
- Performance of first- and second-order methods for \(\ell_1\)-regularized least squares problems
- An active set Newton-CG method for \(\ell_1\) optimization
- Cubic regularization methods with second-order complexity guarantee based on a new subproblem reformulation
- A multilevel method for self-concordant minimization
- An inexact dual logarithmic barrier method for solving sparse semidefinite programs
- Gradient-based method with active set strategy for \(\ell _1\) optimization
- On partial Cholesky factorization and a variant of quasi-Newton preconditioners for symmetric positive definite matrices
- scientific article; zbMATH DE number 3964749
- Continuation methods for approximate large scale object sequencing
- IMRO: A proximal quasi-Newton method for solving \(\ell_1\)-regularized least squares problems