Convergence of a distributed method for minimizing sum of convex functions with fixed point constraints
DOI: 10.1186/s13660-021-02734-4 · zbMATH Open: 1490.90220 · OpenAlex: W4200275590 · MaRDI QID: Q2073009 · FDO: Q2073009
Authors: Nawarat Ekkarntrong, Tipsuda Arunrat, Nimit Nimana
Publication date: 26 January 2022
Published in: Journal of Inequalities and Applications
Full work available at URL: https://doi.org/10.1186/s13660-021-02734-4
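For orientation, a hedged sketch of the optimization problem the title refers to, assuming the standard formulation used in this line of work (the symbols \(f_i\), \(T_i\), and \(H\) below are illustrative, not quoted from the paper): minimize a finite sum of convex functions over the common fixed point set of a family of mappings,
\[
  \min_{x \in H} \; \sum_{i=1}^{m} f_i(x)
  \quad \text{subject to} \quad
  x \in \bigcap_{i=1}^{m} \operatorname{Fix}(T_i),
\]
where each \(f_i : H \to \mathbb{R}\) is convex, each \(T_i : H \to H\) is a nonexpansive-type mapping on a real Hilbert space \(H\), and \(\operatorname{Fix}(T_i) = \{ x \in H : T_i x = x \}\).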
Recommendations
- Iterative methods for parallel convex optimization with fixed point constraints
- Incremental subgradient method for nonsmooth convex optimization with fixed point constraints
- Acceleration Method Combining Broadcast and Incremental Distributed Optimization Algorithms
- Incremental proximal method for nonsmooth convex optimization with fixed point constraints of quasi-nonexpansive mappings
- Convergence analysis of iterative methods for nonsmooth convex optimization over fixed point sets of quasi-nonexpansive mappings
MSC classification:
- Numerical optimization and variational techniques (65K10)
- Convex programming (90C25)
- Numerical methods based on nonlinear programming (49M37)
- Monotone operators and generalizations (47H05)
Cites Work
- First-order methods in optimization
- The Gradient Projection Method for Nonlinear Programming. Part II. Nonlinear Constraints
- The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings
- Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization
- Convergence analysis for proximal split feasibility problems and fixed point problems
- Iterative methods for fixed point problems in Hilbert spaces
- Iterative algorithm for triple-hierarchical constrained nonconvex optimization problem and its application to network bandwidth allocation
- Fixed Point Optimization Algorithms for Distributed Optimization in Networked Systems
- Title not available
- Robust Wideband Beamforming by the Hybrid Steepest Descent Method
- Minimizing the Moreau envelope of nonsmooth convex functions over the fixed point set of certain quasi-nonexpansive mappings
- Computational method for solving a stochastic linear-quadratic control problem given an unsolvable stochastic algebraic Riccati equation
- Acceleration Method Combining Broadcast and Incremental Distributed Optimization Algorithms
- Convex analysis and monotone operator theory in Hilbert spaces
- Proximal point algorithms for nonsmooth convex optimization with fixed point constraints
- Extension of Fenchel's duality theorem for convex functions
- An efficient iterative method for finding common fixed point and variational inequalities in Hilbert spaces
- Distributed Optimization for Network Resource Allocation With Nonsmooth Utility Functions
- A descent SQP alternating direction method for minimizing the sum of three convex functions
- Extrapolated sequential constraint method for variational inequality over the intersection of fixed-point sets
- Hierarchical Convex Optimization With Primal-Dual Splitting
- Iterative methods for parallel convex optimization with fixed point constraints
Cited In (4)
- A dynamic distributed conjugate gradient method for variational inequality problem over the common fixed-point constraints
- A Distributed Algorithm for Computing a Common Fixed Point of a Finite Family of Paracontractions
- An asynchronous subgradient-proximal method for solving additive convex optimization problems
- EFIX: exact fixed point methods for distributed optimization