Distributed decision-coupled constrained optimization via proximal-tracking
Publication: 2059328
DOI: 10.1016/j.automatica.2021.109938
zbMath: 1479.49076
OpenAlex: W3205968963
MaRDI QID: Q2059328
Maria Prandini, Alessandro Falsone
Publication date: 14 December 2021
Published in: Automatica
Full work available at URL: https://doi.org/10.1016/j.automatica.2021.109938
Related Items (2)
- Augmented Lagrangian tracking for distributed optimization with equality and inequality coupling constraints
- Online distributed optimization with strongly pseudoconvex-sum cost functions and coupled inequality constraints
Cites Work
- Distributed strategies for generating weight-balanced and doubly stochastic digraphs
- Discrete-time dynamic average consensus
- Tracking-ADMM for distributed constraint-coupled optimization
- Explicit Convergence Rate of a Distributed Alternating Direction Method of Multipliers
- Newton-Raphson Consensus for Distributed Convex Optimization
- Linear Convergence Rate of a Class of Distributed Augmented Lagrangian Algorithms
- Distributed Optimization Over Time-Varying Directed Graphs
- Convergence Rate of Distributed ADMM Over Networks
- Convergence of Asynchronous Distributed Gradient Methods Over Stochastic Networks
- D-ADMM: A Communication-Efficient Distributed Algorithm for Separable Optimization
- Decentralized Dynamic Optimization Through the Alternating Direction Method of Multipliers
- On the Linear Convergence of the ADMM in Decentralized Consensus Optimization
- A Proximal Gradient Algorithm for Decentralized Composite Optimization
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Harnessing Smoothness to Accelerate Distributed Optimization
- Distributed Subgradient Methods for Multi-Agent Optimization
- Constrained Consensus and Optimization in Multi-Agent Networks
- Decentralized Proximal Gradient Algorithms With Linear Convergence Rates
- Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms
- Multiagent Newton–Raphson Optimization Over Lossy Networks
- A Decentralized Proximal-Gradient Method With Network Independent Step-Sizes and Separated Convergence Rates
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- On Distributed Convex Optimization Under Inequality and Equality Constraints
- ADD-OPT: Accelerated Distributed Directed Optimization
- Distributed constrained optimization and consensus in uncertain networks via proximal minimization
- Convex Analysis
This page was built for publication: Distributed decision-coupled constrained optimization via proximal-tracking