Distributed multi-agent optimization with state-dependent communication

From MaRDI portal
Publication:644903

DOI: 10.1007/S10107-011-0467-X
zbMATH Open: 1229.90201
arXiv: 1004.0969
OpenAlex: W2136517470
MaRDI QID: Q644903
FDO: Q644903


Authors: Ilan Lobel, Asuman Ozdaglar, Diego Feijer


Publication date: 7 November 2011

Published in: Mathematical Programming. Series A. Series B

Abstract: We study distributed algorithms for solving global optimization problems in which the objective function is the sum of local objective functions of agents and the constraint set is given by the intersection of local constraint sets of agents. We assume that each agent knows only his own local objective function and constraint set, and exchanges information with the other agents over a randomly varying network topology to update his information state. We assume a state-dependent communication model over this topology: communication is Markovian with respect to the states of the agents, and the probability with which the links are available depends on the states of the agents. In this paper, we study a projected multi-agent subgradient algorithm under state-dependent communication. The algorithm involves each agent performing a local averaging to combine his estimate with the other agents' estimates, taking a subgradient step along his local objective function, and projecting the estimates on his local constraint set. The state-dependence of the communication introduces significant challenges and couples the study of information exchange with the analysis of subgradient steps and projection errors. We first show that the multi-agent subgradient algorithm, when used with a constant stepsize, may cause the agent estimates to diverge with probability one. Under some assumptions on the stepsize sequence, we provide convergence rate bounds on a "disagreement metric" between the agent estimates. Our bounds are time-nonhomogeneous in the sense that they depend on the initial starting time. Despite this, we show that the agent estimates reach an almost sure consensus and converge to the same optimal solution of the global optimization problem with probability one under different assumptions on the local constraint sets and the stepsize sequence.
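The three steps the abstract describes (local averaging over available links, a subgradient step along the local objective, and projection onto the local constraint set) can be sketched in a small simulation. This is an illustrative toy, not the paper's exact algorithm: the link availability here is state-independent (plain random gossip), the local objectives, targets, and box constraints are made-up examples, and the stepsize is a simple diminishing sequence.

```python
import numpy as np

# Toy sketch of a projected multi-agent subgradient method in the spirit
# of the abstract. Each agent i knows only its local objective
# f_i(x) = |x - targets[i]| and a local box constraint [lo, hi].
# Simplification vs. the paper: the communicating pair is chosen
# uniformly at random, independent of the agents' states.

rng = np.random.default_rng(0)
n_agents = 4
targets = np.array([-2.0, 0.5, 1.0, 3.0])  # f_i(x) = |x - targets[i]|
lo, hi = -1.0, 2.0                          # local box constraints (identical here)

x = rng.uniform(lo, hi, size=n_agents)      # initial agent estimates

for k in range(1, 20001):
    step = 1.0 / k                          # diminishing stepsize: sum = inf, sum of squares < inf
    # 1) local averaging over one randomly available link (gossip)
    i, j = rng.choice(n_agents, size=2, replace=False)
    x[i] = x[j] = 0.5 * (x[i] + x[j])
    # 2) subgradient step along each local objective
    g = np.sign(x - targets)                # a subgradient of |x - t_i| at x_i
    # 3) projection onto each agent's local constraint set
    x = np.clip(x - step * g, lo, hi)

# With a diminishing stepsize the estimates end up nearly equal and close
# to the optimal interval [0.5, 1] (the median interval of the targets).
print(x)
```

Note the contrast the abstract draws: with a constant stepsize this scheme can behave badly under state-dependent communication, whereas a suitably diminishing stepsize lets the agents reach consensus on an optimizer.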


Full work available at URL: https://arxiv.org/abs/1004.0969










Cited In (33)





