Distributed multi-agent optimization with state-dependent communication (Q644903)
From MaRDI portal
scientific article
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Distributed multi-agent optimization with state-dependent communication | scientific article | |
Statements
Distributed multi-agent optimization with state-dependent communication (English)
7 November 2011
The topic of this article lies in one of the most challenging and, for future real-world use, most relevant fields of optimization and applied probability: global optimization with the help of distributed algorithms, which requires considerable analytical, topological and numerical care. The authors investigate distributed algorithms for solving global optimization problems in which the objective function is the sum of local objective functions of agents and the feasible set is the intersection of local feasible sets of agents. They assume that every agent knows only its own local objective function and feasible set, and that it exchanges information with the other agents over a randomly varying network topology in order to update its information state. They further assume a state-dependent communication model over this topology: communication is Markovian with respect to the states of the agents, and the probability with which links are available depends on those states. Under this state-dependent communication model, the authors study a projected multi-agent subgradient algorithm. The state-dependence poses significant challenges and couples the analysis of information exchange with the analysis of subgradient steps and projection errors. The authors first demonstrate that this algorithm, when employed with a constant stepsize, can cause the agent estimates to diverge with probability one. Under certain conditions on the stepsize sequence, they establish convergence rate bounds on a ``disagreement metric'' between the agent estimates. These bounds are time-nonhomogeneous, i.e., they depend on the initial starting time. Nevertheless, under different assumptions on the local feasible sets and on the stepsize sequence, they show that the agent estimates reach consensus and converge to the same optimal solution of the global optimization problem with probability one.
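The projected multi-agent subgradient scheme described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the quadratic local objectives, box-shaped local feasible sets, the distance-based link probability, and the diminishing stepsize 1/(k+1) are all hypothetical stand-ins for the paper's state-dependent Markovian communication model and general assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_subgradient(x, c):
    """Gradient of the hypothetical local objective f_i(x) = ||x - c_i||^2."""
    return 2.0 * (x - c)

def project_box(x, lo, hi):
    """Euclidean projection onto the hypothetical local box [lo, hi]."""
    return np.clip(x, lo, hi)

def link_prob(xi, xj):
    """Hypothetical state-dependent link probability: agents whose
    states are closer communicate more reliably."""
    return np.exp(-np.linalg.norm(xi - xj))

def step(states, centers, lo, hi, alpha):
    """One round: average over realized links, subgradient step, local projection."""
    n = len(states)
    new_states = []
    for i in range(n):
        # Links activate at random with state-dependent probability;
        # the agent always "hears" itself.
        neighbors = [i] + [j for j in range(n) if j != i
                           and rng.random() < link_prob(states[i], states[j])]
        avg = np.mean([states[j] for j in neighbors], axis=0)
        # Subgradient step on the local objective, then projection
        # onto the agent's own feasible set.
        y = avg - alpha * local_subgradient(avg, centers[i])
        new_states.append(project_box(y, lo[i], hi[i]))
    return new_states

# Three agents in R^2 with different local targets and a common box [0, 2]^2.
centers = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
lo = [np.array([0.0, 0.0])] * 3
hi = [np.array([2.0, 2.0])] * 3
states = [rng.standard_normal(2) for _ in range(3)]

for k in range(500):
    # Diminishing stepsize, as required for almost sure convergence.
    states = step(states, centers, lo, hi, alpha=1.0 / (k + 1))

# A simple disagreement measure between the agent estimates.
disagreement = max(np.linalg.norm(states[i] - states[0]) for i in range(3))
```

With the diminishing stepsize the averaging term drives the estimates toward consensus while the shrinking subgradient perturbations vanish; with a constant stepsize the perturbations never die out, which is the regime in which the authors exhibit divergence.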
This paper is well written, clearly explained, and well structured into five sections and a short appendix. It is mathematically deep, with theorems, further results and proofs, and it contributes to both theory and methodology. Special cases are addressed and illustrations are given. Further computational experience would certainly be welcome. Future scientific work building on these results, in theory as well as in methods, and future real-world applications seem possible indeed and, in fact, very useful.
nonlinear programming
stochastic network models
distributed multi-agent optimization
state-dependent communication
projected multi-agent subgradient algorithm