Superlinearly Convergent Asynchronous Distributed Network Newton Method

From MaRDI portal
Publication:6286522

arXiv: 1705.03952 · MaRDI QID: Q6286522 · FDO: Q6286522


Authors: Fatemeh Mansoori, Ermin Wei


Publication date: 10 May 2017

Abstract: The problem of minimizing a sum of local convex objective functions over a networked system captures many important applications and has received much attention in the distributed optimization field. Most existing work focuses on the development of fast distributed algorithms in the presence of a central clock. The only known algorithms with convergence guarantees for this problem in an asynchronous setup achieve either a sublinear rate under the totally asynchronous setting or a linear rate under the partially asynchronous setting (with bounded delay). In this work, we build upon the existing literature to develop and analyze an asynchronous Newton-based approach for solving a penalized version of the problem. We show that this algorithm converges almost surely, with a global linear rate and a local superlinear rate in expectation. Numerical studies confirm its superior performance over other existing asynchronous methods.
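
For context, the following is a minimal LaTeX sketch of the problem class described in the abstract: the consensus formulation of minimizing a sum of local convex objectives over a network, and the quadratic-penalty reformulation typically used in Network Newton-type methods. The weight matrix W, penalty parameter alpha, and the notation below are illustrative assumptions, not taken from this page or from the paper itself.

% Assumed setting: n agents on a connected graph, agent i holds a private
% convex objective f_i; W is a symmetric, doubly stochastic weight matrix
% consistent with the graph; alpha > 0 is a penalty parameter.
\begin{align}
  % Original consensus problem: all agents agree on a common decision x.
  \min_{x \in \mathbb{R}^p} \; & \sum_{i=1}^{n} f_i(x) \\
  % Equivalent form with local copies x_i and consensus constraints.
  \min_{x_1,\dots,x_n \in \mathbb{R}^p} \; & \sum_{i=1}^{n} f_i(x_i)
      \quad \text{s.t.} \quad x_i = x_j \ \text{for all edges } (i,j) \\
  % Penalized (unconstrained) version, with y = (y_1; \dots; y_n) the stack
  % of local copies and Z = W \otimes I_p.
  \min_{y \in \mathbb{R}^{np}} \; & F(y) :=
      \tfrac{1}{2}\, y^{\top} (I - Z)\, y \;+\; \alpha \sum_{i=1}^{n} f_i(y_i)
\end{align}

In a penalized formulation of this kind, the Hessian of F splits into a sparse network term (I - Z) plus a block-diagonal term of local Hessians, which is what makes distributed, Newton-type updates tractable; the asynchronous analysis in the paper concerns solving such a penalized problem without a central clock.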

This page was built for publication: Superlinearly Convergent Asynchronous Distributed Network Newton Method
