Approximation accuracy, gradient methods, and error bound for structured convex optimization


DOI: 10.1007/s10107-010-0394-2
zbMath: 1207.65084
OpenAlex: W2101868363
MaRDI QID: Q607498

Paul Tseng

Publication date: 22 November 2010

Published in: Mathematical Programming. Series A. Series B

Full work available at URL: https://doi.org/10.1007/s10107-010-0394-2



Related Items

A proximal gradient splitting method for solving convex vector optimization problems
Further properties of the forward-backward envelope with applications to difference-of-convex programming
An Accelerated Level-Set Method for Inverse Scattering Problems
Optimized first-order methods for smooth convex minimization
New results on subgradient methods for strongly convex optimization problems with a unified analysis
A simplified view of first order methods for optimization
Augmented Lagrangian methods for convex matrix optimization problems
When only global optimization matters
Block coordinate proximal gradient methods with variable Bregman functions for nonsmooth separable optimization
On globally Q-linear convergence of a splitting method for group Lasso
A First-Order Optimization Algorithm for Statistical Learning with Hierarchical Sparsity Structure
Convex optimization approach to signals with fast varying instantaneous frequency
A unified approach to error bounds for structured convex optimization problems
A unified penalized method for sparse additive quantile models: an RKHS approach
Bregman Proximal Point Algorithm Revisited: A New Inexact Version and Its Inertial Variant
An alternating direction method for finding Dantzig selectors
Computing proximal points of convex functions with inexact subgradients
A new convergence analysis and perturbation resilience of some accelerated proximal forward–backward algorithms with errors
Quadratic Growth Conditions for Convex Matrix Optimization Problems Associated with Spectral Functions
An extrapolated iteratively reweighted \(\ell_1\) method with complexity analysis
Proximal gradient methods for multiobjective optimization and their applications
Iteratively reweighted \(\ell_1\) algorithms with extrapolation
A simple convergence analysis of Bregman proximal gradient algorithm
Augmented Lagrangian method with alternating constraints for nonlinear optimization problems
Feature-aware regularization for sparse online learning
Smooth over-parameterized solvers for non-smooth structured optimization
Convergence rates of gradient methods for convex optimization in the space of measures
Linear Convergence of Proximal Gradient Algorithm with Extrapolation for a Class of Nonconvex Nonsmooth Minimization Problems
A singular value shrinkage thresholding algorithm for folded concave penalized low-rank matrix optimization problems
A globally convergent proximal Newton-type method in nonsmooth convex optimization
Convergence of an asynchronous block-coordinate forward-backward algorithm for convex composite optimization
The augmented Lagrangian method based on the APG strategy for an inverse damped gyroscopic eigenvalue problem
A modified proximal gradient method for a family of nonsmooth convex optimization problems
On the computational efficiency of subgradient methods: a case study with Lagrangian bounds
Proximal gradient method with extrapolation and line search for a class of non-convex and non-smooth problems
A very simple SQCQP method for a class of smooth convex constrained minimization problems with nice convergence results
On proximal gradient method for the convex problems regularized with the group reproducing kernel norm
Robust least square semidefinite programming with applications
Proximal methods for the latent group lasso penalty
On the linear convergence of a proximal gradient method for a class of nonsmooth convex minimization problems
Efficient random coordinate descent algorithms for large-scale structured nonconvex optimization
On the linear convergence of the approximate proximal splitting method for non-smooth convex optimization
A family of inexact SQA methods for non-smooth convex minimization with provable convergence guarantees based on the Luo-Tseng error bound property
Level-set methods for convex optimization
An alternating direction method of multipliers with the BFGS update for structured convex quadratic optimization
On the linear convergence of the alternating direction method of multipliers
Iteration complexity analysis of block coordinate descent methods
SOR- and Jacobi-type iterative methods for solving \(\ell_1 - \ell_2\) problems by way of Fenchel duality
On the convergence of the iterates of proximal gradient algorithm with extrapolation for convex nonsmooth minimization problems
Linear convergence of inexact descent method and inexact proximal gradient algorithms for lower-order regularization problems
On convex envelopes and regularization of non-convex functionals without moving global minima
A sequential partial linearization algorithm for the symmetric eigenvalue complementarity problem
Accelerated Residual Methods for the Iterative Solution of Systems of Equations
Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods
A dual reformulation and solution framework for regularized convex clustering problems
Bounds for the tracking error of first-order online optimization methods
Faster subgradient methods for functions with Hölderian growth
Iteration complexity analysis of dual first-order methods for conic convex programming
Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods
A Block Successive Upper-Bound Minimization Method of Multipliers for Linearly Constrained Convex Optimization
Linear Convergence of Descent Methods for the Unconstrained Minimization of Restricted Strongly Convex Functions
Iterative Proportional Scaling Revisited: A Modern Optimization Perspective
An accelerated smoothing gradient method for nonconvex nonsmooth minimization in image processing
The modified second APG method for DC optimization problems
A family of subgradient-based methods for convex optimization problems in a unifying framework
Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems
Adaptive FISTA for Nonconvex Optimization
Generalized Conditional Gradient for Sparse Estimation
Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions
A telescopic Bregmanian proximal gradient method without the global Lipschitz continuity assumption
Some modified fast iterative shrinkage thresholding algorithms with a new adaptive non-monotone stepsize strategy for nonsmooth and convex minimization problems
Iteration Complexity of a Block Coordinate Gradient Descent Method for Convex Optimization
A control-theoretic perspective on optimal high-order optimization
Generalized affine scaling algorithms for linear programming problems
On Degenerate Doubly Nonnegative Projection Problems
An inexact accelerated stochastic ADMM for separable convex optimization
An inexact dual fast gradient-projection method for separable convex optimization with linear coupled constraints
Linear convergence of prox-SVRG method for separable non-smooth convex optimization problems under bounded metric subregularity
Perturbation techniques for convergence analysis of proximal gradient method and other first-order algorithms via variational analysis

