Convergence analysis of iterative methods for nonsmooth convex optimization over fixed point sets of quasi-nonexpansive mappings
DOI: 10.1007/s10107-015-0967-1 · zbMath: 1351.65035 · arXiv: 1510.06148 · OpenAlex: W1791808123 · MaRDI QID: Q312694
Publication date: 16 September 2016
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1510.06148
Keywords: convergence; fixed point; Hilbert space; quasi-nonexpansive mapping; networked system; nonsmooth convex optimization; incremental subgradient method; parallel subgradient method
MSC classifications: Programming involving graphs or networks (90C35); Numerical mathematical programming methods (65K05); Convex programming (90C25); Applications of mathematical programming (90C90); Contraction-type mappings, nonexpansive mappings, (A)-proper mappings, etc. (47H09)
Cites Work
- Distributed multi-agent optimization with state-dependent communication
- Random algorithms for convex minimization problems
- The viscosity approximation process for quasi-nonexpansive mappings in Hilbert spaces
- Subgradient methods for saddle-point problems
- On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators
- Error stability properties of generalized gradient-type algorithms
- Ill-posed problems with a priori information
- Acceleration method for convex optimization over the fixed point set of a nonexpansive mapping
- Incremental Subgradient Methods for Nondifferentiable Optimization
- A Class of Randomized Primal-Dual Algorithms for Distributed Optimization
- PARALLEL OPTIMIZATION ALGORITHM FOR SMOOTH CONVEX OPTIMIZATION OVER FIXED POINT SETS OF QUASI-NONEXPANSIVE MAPPINGS
- Proximal Splitting Methods in Signal Processing
- Minimizing the Moreau Envelope of Nonsmooth Convex Functions over the Fixed Point Set of Certain Quasi-Nonexpansive Mappings
- A proximal decomposition method for solving convex variational inverse problems
- A Randomized Incremental Subgradient Method for Distributed Optimization in Networked Systems
- Incremental Subgradients for Constrained Convex Optimization: A Unified Framework and New Methods
- A Use of Conjugate Gradient Direction for the Convex Optimization Problem over the Fixed Point Set of a Nonexpansive Mapping
- Splitting Algorithms for the Sum of Two Nonlinear Operators
- Convergence of Approximate and Incremental Subgradient Methods for Convex Optimization
- On Distributed Averaging Algorithms and Quantization Effects
- Acceleration Method Combining Broadcast and Incremental Distributed Optimization Algorithms
- Fixed Point Optimization Algorithms for Distributed Optimization in Networked Systems
- A block-iterative surrogate constraint splitting method for quadratic signal recovery
- A projection method for approximating fixed points of quasi nonexpansive mappings without the usual demiclosedness condition
- A Convergent Incremental Gradient Method with a Constant Step Size
- Weak convergence of the sequence of successive approximations for nonexpansive mappings
- Convex Analysis
- A Weak-to-Strong Convergence Principle for Fejér-Monotone Methods in Hilbert Spaces
- Convex analysis and monotone operator theory in Hilbert spaces