Second-Order Guarantees of Distributed Gradient Algorithms
Publication: 5131964
DOI: 10.1137/18M121784X
zbMath: 1493.90141
arXiv: 1809.08694
OpenAlex: W3093673559
MaRDI QID: Q5131964
Amir Daneshmand, Gesualdo Scutari, Vyacheslav Kungurtsev
Publication date: 9 November 2020
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1809.08694
Programming involving graphs or networks (90C35); Nonconvex programming, global optimization (90C26); Distributed algorithms (68W15)
Related Items
- Decentralized nonconvex optimization with guaranteed privacy and accuracy
- Graph Topology Invariant Gradient and Sampling Complexity for Decentralized and Stochastic Optimization
- Distributed stochastic gradient tracking methods with momentum acceleration for non-convex optimization
Cites Work
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon^{-3/2})\) for nonconvex optimization
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Nonconvergence to unstable points in urn models and stochastic approximations
- On the convergence of the proximal algorithm for nonsmooth functions involving analytic features
- On gradients of functions definable in o-minimal structures
- Introductory lectures on convex optimization. A basic course.
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- Parallel and distributed successive convex approximation methods for big-data optimization
- Perturbed proximal primal-dual algorithm for nonconvex nonsmooth optimization
- First-order methods almost always avoid strict saddle points
- Behavior of accelerated gradient methods near critical points of nonconvex functions
- Distributed nonconvex constrained optimization over time-varying digraphs
- Cubic regularization of Newton method and its global performance
- On the Convergence of Decentralized Gradient Descent
- Distributed Optimization Over Time-Varying Directed Graphs
- Proximal Alternating Minimization and Projection Methods for Nonconvex Problems: An Approach Based on the Kurdyka-Łojasiewicz Inequality
- Computing a Trust Region Step
- On the global convergence of trust region algorithms for unconstrained minimization
- Recursive Stochastic Algorithms for Global Optimization in $\mathbb{R}^d$
- Accelerated Methods for NonConvex Optimization
- Non-Convex Distributed Optimization
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- On Nonconvex Decentralized Gradient Descent
- Harnessing Smoothness to Accelerate Distributed Optimization
- Distributed Subgradient Methods for Multi-Agent Optimization
- Finding approximate local minima faster than gradient descent
- Constrained Consensus and Optimization in Multi-Agent Networks
- Gradient Descent Finds the Cubic-Regularized Nonconvex Newton Step
- Nonconvex Optimization Meets Low-Rank Matrix Factorization: An Overview
- Convergence of a Multi-Agent Projected Stochastic Gradient Algorithm for Non-Convex Optimization
- An Approximate Dual Subgradient Algorithm for Multi-Agent Non-Convex Optimization
- An Extrinsic Look at the Riemannian Hessian