Incremental subgradient method for nonsmooth convex optimization with fixed point constraints
From MaRDI portal
Publication: 2829569
DOI: 10.1080/10556788.2016.1175002
zbMath: 1354.65124
OpenAlex: W2346898445
MaRDI QID: Q2829569
Publication date: 8 November 2016
Published in: Optimization Methods and Software
Full work available at URL: https://doi.org/10.1080/10556788.2016.1175002
Keywords: convergence; numerical example; fixed point; nonexpansive mapping; Hilbert space; subdifferential; Krasnosel'skiĭ-Mann algorithm; nonsmooth convex optimization; incremental subgradient method
Mathematics Subject Classification: Numerical mathematical programming methods (65K05); Convex programming (90C25); Applications of mathematical programming (90C90)
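The method named in the title combines incremental subgradient steps (cycling through the components of a nonsmooth convex sum) with a nonexpansive mapping whose fixed point set encodes the constraint. As a minimal illustrative sketch only (not the paper's algorithm or step-size rules), the following assumes the nonexpansive mapping is the metric projection onto a Euclidean ball, whose fixed point set is the ball itself:

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Metric projection onto the Euclidean ball: a nonexpansive mapping
    whose fixed point set is the ball (the feasible set here)."""
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def incremental_subgradient(subgrads, x0, T, passes=200):
    """Sketch of an incremental subgradient iteration: cycle through the
    component subgradients, applying the mapping T after each step."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, passes + 1):
        lam = 1.0 / k              # diminishing step size (illustrative choice)
        for g in subgrads:         # one incremental pass over the components
            x = T(x - lam * g(x))
    return x

# Example: minimize f(x) = |x1 - 2| + |x2| over the unit ball.
# Each lambda returns a subgradient of one component of the sum.
subgrads = [lambda x: np.array([np.sign(x[0] - 2.0), 0.0]),
            lambda x: np.array([0.0, np.sign(x[1])])]
x = incremental_subgradient(subgrads, np.zeros(2), project_ball)
# x approaches the constrained minimizer (1, 0) on the ball's boundary
```

The projection here stands in for the general nonexpansive (e.g. Krasnosel'skiĭ-Mann-type) mapping of the paper; in the fixed-point-constraint setting the feasible set need not admit an easily computable projection, which is the motivation for working with a mapping whose fixed point set is the constraint set.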
Related Items (5)
- Proximal point algorithms for nonsmooth convex optimization with fixed point constraints
- Two stochastic optimization algorithms for convex optimization with fixed point constraints
- Fixed point quasiconvex subgradient method
- Path-based incremental target level algorithm on Riemannian manifolds
- Iterative methods for parallel convex optimization with fixed point constraints
Cites Work
- Convergence analysis of iterative methods for nonsmooth convex optimization over fixed point sets of quasi-nonexpansive mappings
- Iterative algorithm for solving triple-hierarchical constrained optimization problem
- The viscosity approximation process for quasi-nonexpansive mappings in Hilbert spaces
- On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators
- VI-constrained hemivariational inequalities: distributed algorithms and power control in ad-hoc networks
- Acceleration method for convex optimization over the fixed point set of a nonexpansive mapping
- Incremental Subgradient Methods for Nondifferentiable Optimization
- Computational Method for Solving a Stochastic Linear-Quadratic Control Problem Given an Unsolvable Stochastic Algebraic Riccati Equation
- A proximal decomposition method for solving convex variational inverse problems
- A Randomized Incremental Subgradient Method for Distributed Optimization in Networked Systems
- A Use of Conjugate Gradient Direction for the Convex Optimization Problem over the Fixed Point Set of a Nonexpansive Mapping
- Splitting Algorithms for the Sum of Two Nonlinear Operators
- Convergence of Approximate and Incremental Subgradient Methods for Convex Optimization
- On Projection Algorithms for Solving Convex Feasibility Problems
- Iterative Algorithm for Triple-Hierarchical Constrained Nonconvex Optimization Problem and Its Application to Network Bandwidth Allocation
- Acceleration Method Combining Broadcast and Incremental Distributed Optimization Algorithms
- Fixed Point Optimization Algorithms for Distributed Optimization in Networked Systems
- A block-iterative surrogate constraint splitting method for quadratic signal recovery
- A Convergent Incremental Gradient Method with a Constant Step Size
- Weak convergence of the sequence of successive approximations for nonexpansive mappings
- Convex analysis and monotone operator theory in Hilbert spaces