On Convergence Rates of Linearized Proximal Algorithms for Convex Composite Optimization with Applications

From MaRDI portal

Publication:2810547

DOI: 10.1137/140993090
zbMath: 1338.65159
OpenAlex: W2398068802
MaRDI QID: Q2810547

Chong Li, Yao-Hua Hu, Xiao Qi Yang

Publication date: 3 June 2016

Published in: SIAM Journal on Optimization

Full work available at URL: https://semanticscholar.org/paper/f93105648b3ea56d337c196c94b3862c956bea27




Related Items (30)

Stochastic quasi-subgradient method for stochastic quasi-convex feasibility problems
Linear regularity and linear convergence of projection-based methods for solving convex feasibility problems
Convergence analysis of new inertial method for the split common null point problem
Interior quasi-subgradient method with non-Euclidean distances for constrained quasi-convex optimization problems in Hilbert spaces
Iterative positive thresholding algorithm for non-negative sparse optimization
Two iterative processes generated by regular vector fields in Banach spaces
Weak Sharp Minima for Convex Infinite Optimization Problems in Normed Linear Spaces
Multiple-sets split quasi-convex feasibility problems: Adaptive subgradient methods with convergence guarantee
A dynamical system method for solving the split convex feasibility problem
On the superlinear convergence of Newton's method on Riemannian manifolds
The equivalence of three types of error bounds for weakly and approximately convex functions
Damped Newton's method on Riemannian manifolds
Riemannian linearized proximal algorithms for nonnegative inverse eigenvalue problem
A successive centralized circumcentered-reflection method for the convex feasibility problem
A modified inexact Levenberg-Marquardt method with the descent property for solving nonlinear equations
Convergence rate of the relaxed CQ algorithm under Hölderian type error bound property
Optimality conditions for composite DC infinite programming problems
Linearized proximal algorithms with adaptive stepsizes for convex composite optimization with applications
Descent methods with computational errors in Banach spaces
Modified inexact Levenberg-Marquardt methods for solving nonlinear least squares problems
Incremental quasi-subgradient methods for minimizing the sum of quasi-convex functions
Linear convergence of inexact descent method and inexact proximal gradient algorithms for lower-order regularization problems
A family of projection gradient methods for solving the multiple-sets split feasibility problem
Quantitative Analysis for Perturbed Abstract Inequality Systems in Banach Spaces
An adaptive fixed-point proximity algorithm for solving total variation denoising models
Unnamed Item
Strong Metric (Sub)regularity of Karush–Kuhn–Tucker Mappings for Piecewise Linear-Quadratic Convex-Composite Optimization and the Quadratic Convergence of Newton’s Method
Abstract convergence theorem for quasi-convex optimization problems with applications
Quasi-convex feasibility problems: subgradient methods and convergence rates
Convergence rates of subgradient methods for quasi-convex optimization problems


Uses Software



Cites Work




This page was built for publication: On Convergence Rates of Linearized Proximal Algorithms for Convex Composite Optimization with Applications