Parallel computing subgradient method for nonsmooth convex optimization over the intersection of fixed point sets of nonexpansive mappings
DOI: 10.1186/s13663-015-0319-0 · zbMath: 1338.65161 · OpenAlex: W2157566431 · Wikidata: Q59404220 · Scholia: Q59404220 · MaRDI QID: Q288530
Publication date: 26 May 2016
Published in: Fixed Point Theory and Applications
Full work available at URL: https://doi.org/10.1186/s13663-015-0319-0
Keywords: fixed point · nonexpansive mapping · parallel algorithm · subgradient · Krasnosel'skiĭ-Mann algorithm · nonsmooth convex optimization
MSC: Numerical mathematical programming methods (65K05) · Convex programming (90C25) · Applications of mathematical programming (90C90)
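The title refers to a parallel subgradient scheme over the intersection of fixed point sets of nonexpansive mappings. As a rough illustration only (not the authors' exact scheme), the following sketch combines a diminishing-step subgradient step with a parallel average of nonexpansive mappings, here metric projections onto closed balls, whose fixed point sets are the balls themselves; all function names, step sizes, and problem data are hypothetical choices for this example.

```python
import numpy as np

def proj_ball(center, radius):
    """Metric projection onto the closed ball B(center, radius);
    this mapping is nonexpansive and its fixed point set is the ball."""
    def T(x):
        d = x - center
        n = np.linalg.norm(d)
        return x if n <= radius else center + radius * d / n
    return T

def parallel_subgradient(subgrad, mappings, x0, steps=2000):
    """Illustrative iteration: x_{k+1} = (1/m) * sum_i T_i(x_k - lam_k g_k),
    with g_k a subgradient of the objective and lam_k a diminishing step."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, steps + 1):
        lam = 1.0 / k                                   # diminishing step size
        y = x - lam * subgrad(x)                        # subgradient step
        x = np.mean([T(y) for T in mappings], axis=0)   # parallel averaging
    return x

# Hypothetical problem data: minimize the nonsmooth convex f(x) = ||x - b||_1
# over the intersection of two unit balls.
b = np.array([2.0, 2.0])
f = lambda x: np.sum(np.abs(x - b))
subgrad = lambda x: np.sign(x - b)                      # a subgradient of f
Ts = [proj_ball(np.array([0.0, 0.0]), 1.0),
      proj_ball(np.array([0.5, 0.5]), 1.0)]
x_star = parallel_subgradient(subgrad, Ts, np.zeros(2))
```

For this toy problem the constrained minimizer lies on the boundary of the unit ball at the origin, near (0.7071, 0.7071); the averaged iterate approaches it as the step size vanishes. This is only a sketch of the general iteration pattern, under stated assumptions, not a statement of the paper's algorithm or its convergence conditions.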
Related Items (3)
Cites Work
- Algorithms of common solutions for variational inclusions, mixed equilibrium problems and fixed point problems
- Iterative algorithm for solving triple-hierarchical constrained optimization problem
- Distributed multi-agent optimization with state-dependent communication
- Projected subgradient techniques and viscosity methods for optimization with variational inequality constraints
- On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators
- VI-constrained hemivariational inequalities: distributed algorithms and power control in ad-hoc networks
- A viscosity method with no spectral radius requirements for the split common fixed point problem
- Acceleration method for convex optimization over the fixed point set of a nonexpansive mapping
- Hard-constrained inconsistent signal feasibility problems
- Proximal Splitting Methods in Signal Processing
- Minimizing the Moreau Envelope of Nonsmooth Convex Functions over the Fixed Point Set of Certain Quasi-Nonexpansive Mappings
- Computational Method for Solving a Stochastic Linear-Quadratic Control Problem Given an Unsolvable Stochastic Algebraic Riccati Equation
- A proximal decomposition method for solving convex variational inverse problems
- A Use of Conjugate Gradient Direction for the Convex Optimization Problem over the Fixed Point Set of a Nonexpansive Mapping
- Splitting Algorithms for the Sum of Two Nonlinear Operators
- On Projection Algorithms for Solving Convex Feasibility Problems
- Iterative Algorithm for Triple-Hierarchical Constrained Nonconvex Optimization Problem and Its Application to Network Bandwidth Allocation
- Distributed Subgradient Methods for Multi-Agent Optimization
- On Distributed Averaging Algorithms and Quantization Effects
- Acceleration Method Combining Broadcast and Incremental Distributed Optimization Algorithms
- Fixed Point Optimization Algorithms for Distributed Optimization in Networked Systems
- A block-iterative surrogate constraint splitting method for quadratic signal recovery
- Weak convergence of the sequence of successive approximations for nonexpansive mappings
- Convex Analysis
- Mean Value Methods in Iteration
- Convex analysis and monotone operator theory in Hilbert spaces