DSA: Decentralized Double Stochastic Averaging Gradient Algorithm
Publication: 2810864
zbMath: 1360.68699 · arXiv: 1506.04216 · MaRDI QID: Q2810864
Alejandro Ribeiro, Aryan Mokhtari
Publication date: 6 June 2016
Full work available at URL: https://arxiv.org/abs/1506.04216
Keywords: stochastic optimization · large-scale optimization · linear convergence · logistic regression · decentralized optimization · stochastic averaging gradient
MSC classification: General nonlinear regression (62J02) · Learning and adaptive systems in artificial intelligence (68T05) · Stochastic programming (90C15)
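This entry is pure metadata; for orientation, below is a minimal NumPy sketch of the DSA recursion as described in the linked arXiv paper: EXTRA-style mixing with a doubly stochastic matrix W and W̃ = (I+W)/2, combined with a SAGA-style stochastic averaging gradient at each node. The toy least-squares problem, ring-graph mixing matrix, step size, and all variable names are illustrative assumptions, not taken from this page or the paper's notation.

```python
# Minimal sketch of DSA (EXTRA-type mixing + SAGA-type stochastic averaging
# gradients), assuming a toy decentralized least-squares problem on a ring.
import numpy as np

rng = np.random.default_rng(0)

N, q, d = 5, 20, 3                      # nodes, samples per node, dimension
A = rng.normal(size=(N, q, d))          # local feature rows a_{i,n}
b = rng.normal(size=(N, q))             # local targets b_{i,n}

def grad(i, n, x):
    """Gradient of the single-sample loss f_{i,n}(x) = 0.5 * (a^T x - b)^2."""
    a = A[i, n]
    return (a @ x - b[i, n]) * a

# Doubly stochastic mixing matrix W for a ring graph; W_tilde = (I + W) / 2.
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = W[i, (i + 1) % N] = 0.25
W_tilde = 0.5 * (np.eye(N) + W)

alpha = 0.05                             # step size (hand-tuned for this toy)
X = np.zeros((N, d))                     # current iterates, one row per node
Y = np.zeros((N, q, d))                  # SAGA table of past iterates y_{i,n}

def saga_grad(i, x):
    """SAGA-type averaging gradient at node i; updates node i's table."""
    n = rng.integers(q)                  # draw one local sample index
    g = grad(i, n, x) - grad(i, n, Y[i, n]) \
        + np.mean([grad(i, m, Y[i, m]) for m in range(q)], axis=0)
    Y[i, n] = x                          # remember the iterate used for n
    return g

# First step: plain mixed stochastic-gradient step, x^1 = W x^0 - alpha g^0.
G_prev = np.array([saga_grad(i, X[i]) for i in range(N)])
X_prev, X = X, W @ X - alpha * G_prev

# DSA recursion: X^{t+1} = (I+W) X^t - W_tilde X^{t-1} - alpha (G^t - G^{t-1}).
for t in range(2000):
    G = np.array([saga_grad(i, X[i]) for i in range(N)])
    X_next = X + W @ X - W_tilde @ X_prev - alpha * (G - G_prev)
    X_prev, X, G_prev = X, X_next, G

print("node disagreement:", np.linalg.norm(X - X.mean(axis=0)))
```

On this strongly convex toy problem the iterates of all nodes should agree and converge linearly, consistent with the linear-convergence keyword above; this sketch is for orientation only, not a reproduction of the authors' implementation.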
Related Items (21)
Optimal Algorithms for Non-Smooth Distributed Optimization in Networks
Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent
Primal-dual algorithm for distributed constrained optimization
Differentially private distributed optimization for multi-agent systems via the augmented Lagrangian algorithm
Linear convergence of primal-dual gradient methods and their performance in distributed optimization
A stochastic averaging gradient algorithm with multi-step communication for distributed optimization
Multi-cluster distributed optimization via random sleep strategy
Decentralized learning over a network with Nyström approximation using SGD
Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
Revisiting EXTRA for Smooth Distributed Optimization
Decentralized Consensus Algorithm with Delayed and Stochastic Gradients
Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate
An Optimal Algorithm for Decentralized Finite-Sum Optimization
Augmented Lagrange algorithms for distributed optimization over multi-agent networks via edge-based method
Fully asynchronous policy evaluation in distributed reinforcement learning over networks
A randomized incremental primal-dual method for decentralized consensus optimization
Dualize, split, randomize: toward fast nonsmooth optimization algorithms
Unnamed Item
Primal-dual stochastic distributed algorithm for constrained convex optimization
On the convergence of exact distributed generalisation and acceleration algorithm for convex optimisation
Fast Decentralized Nonconvex Finite-Sum Optimization with Recursive Variance Reduction