Graph-dependent implicit regularisation for distributed stochastic subgradient descent


zbMATH Open: 1498.68261 · arXiv: 1809.06958 · MaRDI QID: Q4969072 · FDO: Q4969072

Dominic Richards, Patrick Rebeschini

Publication date: 5 October 2020

Abstract: We propose graph-dependent implicit regularisation strategies for distributed stochastic subgradient descent (Distributed SGD) for convex problems in multi-agent learning. Under the standard assumptions of convexity, Lipschitz continuity, and smoothness, we establish statistical learning rates that retain, up to logarithmic terms, centralised statistical guarantees through implicit regularisation (step-size tuning and early stopping), with appropriate dependence on the graph topology. Our approach avoids the need for explicit regularisation in decentralised learning problems, such as adding constraints to the empirical risk minimisation rule. For distributed methods in particular, implicit regularisation keeps the algorithm simple, requiring neither projections nor dual methods. To prove our results, we establish graph-independent generalisation bounds for Distributed SGD that match the centralised setting (using algorithmic stability), together with graph-dependent optimisation bounds that are of independent interest. We present numerical experiments showing that the qualitative behaviour of the upper bounds we derive is reflected in practice.
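
To make the setting concrete, the following is a minimal Python sketch of Distributed SGD with gossip averaging over a communication graph. The ring topology, the doubly stochastic mixing matrix W, the least-squares loss, and the parameters n_agents, T, and eta are illustrative assumptions, not the paper's exact construction; implicit regularisation enters only through the tuned step size eta and the early-stopping horizon T.

import numpy as np

# Minimal sketch of distributed (decentralised) SGD, assuming:
# - n_agents on a ring graph with a doubly stochastic mixing matrix W,
# - each agent holds local data and a convex least-squares loss,
# - implicit regularisation = tuned step size eta + early stopping after T rounds.
# All names and values here are illustrative, not taken from the paper.

rng = np.random.default_rng(0)
n_agents, d, m = 5, 10, 50          # agents, dimension, samples per agent

# Ring graph: each agent averages with its two neighbours and itself.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n_agents] = 1 / 3
    W[i, (i + 1) % n_agents] = 1 / 3

# Local data: X[i] is agent i's design matrix, y[i] its noisy targets.
w_true = rng.normal(size=d)
X = [rng.normal(size=(m, d)) for _ in range(n_agents)]
y = [X[i] @ w_true + 0.1 * rng.normal(size=m) for i in range(n_agents)]

theta = np.zeros((n_agents, d))     # one iterate per agent
T, eta = 200, 0.01                  # early-stopping horizon, step size

for t in range(T):
    theta = W @ theta               # gossip/consensus step over the graph
    for i in range(n_agents):
        j = rng.integers(m)         # sample one local data point
        g = (X[i][j] @ theta[i] - y[i][j]) * X[i][j]  # stochastic gradient
        theta[i] -= eta * g         # local subgradient step

print("mean estimation error:",
      np.mean([np.linalg.norm(theta[i] - w_true) for i in range(n_agents)]))

The mixing step theta = W @ theta is what makes the analysis graph-dependent: the spectral gap of W governs how quickly the local iterates reach consensus, which is why the topology appears in the optimisation bounds.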


Full work available at URL: https://arxiv.org/abs/1809.06958




