Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs
Publication: 6060563
DOI: 10.1007/s10287-023-00479-7
arXiv: 2307.00392
MaRDI QID: Q6060563
Aleksandr Beznosikov, Georgiy Konin, Andrew Veprikov, Dmitry P. Kovalev, Aleksandr Lobanov, A. V. Gasnikov
Publication date: 3 November 2023
Published in: Computational Management Science
Full work available at URL: https://arxiv.org/abs/2307.00392
Keywords: time-varying graphs; gradient-free algorithms; non-smooth optimization; stochastic accelerated decentralized optimization method
Cites Work
- Accelerated gradient methods and dual decomposition in distributed model predictive control
- Optimal order of accuracy of search algorithms in stochastic optimization
- Distributed average consensus with least-mean-square deviation
- Estimating time-varying networks
- An accelerated directional derivative method for smooth stochastic convex optimization
- Noisy zeroth-order optimization for non-smooth saddle point problems
- Improved exploitation of higher order smoothness in derivative-free optimization
- Coordinate descent algorithms
- Stochastic online optimization. Single-point and multi-point non-linear multi-armed bandits. Convex and strongly-convex case
- Random gradient-free minimization of convex functions
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
- Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
- Average Consensus on Arbitrary Strongly Connected Digraphs With Time-Varying Topologies
- Introduction to Derivative-Free Optimization
- Fast Convergence Rates for Distributed Non-Bayesian Learning
- Derivative-Free and Blackbox Optimization
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Kernel-based methods for bandit convex optimization
- Decentralized Accelerated Gradient Methods With Increasing Penalty Parameters
- Accelerated Distributed Nesterov Gradient Descent
- Optimal Algorithms for Non-Smooth Distributed Optimization in Networks
- An \(O(1/k)\) Gradient Method for Network Resource Allocation Problems
- An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback
- Optimal Distributed Online Prediction using Mini-Batches
- A Stochastic Approximation Method
- Gradient-free federated learning methods with \(l_1\) and \(l_2\)-randomization for non-smooth convex stochastic optimization problems
- Near-Optimal Decentralized Algorithms for Saddle Point Problems over Time-Varying Networks