Accelerated Distributed Nesterov Gradient Descent
From MaRDI portal
Publication:5125679
DOI: 10.1109/TAC.2019.2937496
OpenAlex: W3102661755
MaRDI QID: Q5125679
Publication date: 7 October 2020
Published in: IEEE Transactions on Automatic Control
Full work available at URL: https://arxiv.org/abs/1705.07176
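To give a flavor of the topic named in the title, here is a minimal sketch of a decentralized gradient method with Nesterov-style momentum and consensus mixing. This is an illustrative toy, not the Acc-DNGD algorithm of the paper: with a constant step size, this naive scheme only drives the agents into a neighborhood of consensus (their average still matches the true minimizer for the quadratic example below), whereas the paper's method achieves exact accelerated convergence. The mixing matrix `W`, step size, momentum value, and local objectives are all assumptions chosen for the example.

```python
import numpy as np

def distributed_nesterov(targets, W, step=0.1, momentum=0.9, iters=300):
    """Toy decentralized Nesterov-style iteration (not the paper's Acc-DNGD).

    Agent i holds the local objective f_i(x) = (x - targets[i])^2 / 2;
    the network-wide goal is the average of the targets.
    """
    n = len(targets)
    x = np.zeros(n)       # current iterate of each agent
    x_prev = np.zeros(n)  # previous iterate, for the momentum term
    for _ in range(iters):
        # Nesterov look-ahead point, computed locally by each agent
        y = x + momentum * (x - x_prev)
        grad = y - targets              # gradient of f_i at y, per agent
        x_prev = x
        # mix the look-ahead points with neighbors, then step on the gradient
        x = W @ y - step * grad
    return x

# 3-agent network with a symmetric doubly stochastic mixing matrix (assumption)
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
targets = np.array([1.0, 2.0, 6.0])
x = distributed_nesterov(targets, W)
# The agents' average matches the global minimizer (mean of targets = 3.0),
# but individual agents retain an O(step) disagreement.
```

Replacing the plain gradient with a gradient-tracking estimate of the network-average gradient is what lets methods in this literature converge exactly under a constant step size.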
Related Items (15)
Distributed adaptive Newton methods with global superlinear convergence
Blended dynamics approach to distributed optimization: sum convexity and convergence rate
Zeroth-order algorithms for stochastic distributed nonconvex optimization
Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs
Accelerated gradient boosting
Distributed optimization under edge agreements: a continuous-time algorithm
ET-PDA: an event-triggered parameter distributed accelerated algorithm for economic dispatch problems
Predefined-time optimization for distributed resource allocation
Distributed stochastic gradient tracking methods with momentum acceleration for non-convex optimization
Recent theoretical advances in decentralized distributed convex optimization
Hybrid online learning control in networked multiagent systems: A survey
Projected subgradient based distributed convex optimization with transmission noises
An accelerated distributed gradient method with local memory
A dual approach for optimal algorithms in distributed optimization over networks
On the convergence of exact distributed generalisation and acceleration algorithm for convex optimisation
This page was built for publication: Accelerated Distributed Nesterov Gradient Descent