Gradient-free federated learning methods with \(l_1\) and \(l_2\)-randomization for non-smooth convex stochastic optimization problems
Publication: 6053598
DOI: 10.1134/s0965542523090026
arXiv: 2211.10783
OpenAlex: W4387620276
MaRDI QID: Q6053598
Authors: D. M. Dvinskikh, A. V. Gasnikov, B. A. Alashqar, A. V. Lobanov
Publication date: 19 October 2023
Published in: Computational Mathematics and Mathematical Physics
Full work available at URL: https://arxiv.org/abs/2211.10783
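The entry above records only bibliographic data. The paper itself concerns gradient-free (zero-order) methods in which the gradient of a non-smooth convex objective is replaced by a finite-difference estimate built from an \(l_1\)- or \(l_2\)-randomized direction. As a rough illustration only, the Python sketch below shows a standard two-point estimator with \(l_2\)-randomization (a uniformly random direction on the unit sphere); the function names, step sizes, and toy objective are assumptions chosen for the example and are not taken from the paper.

```python
import numpy as np

def l2_two_point_grad(f, x, gamma=1e-3, rng=None):
    """Two-point zero-order gradient estimate with l2-randomization (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    # Draw a direction uniformly from the unit l2-sphere.
    e = rng.normal(size=d)
    e /= np.linalg.norm(e)
    # Two function evaluations give a finite-difference estimate along e;
    # the factor d makes it an estimator of the gradient of the gamma-smoothed objective.
    return d * (f(x + gamma * e) - f(x - gamma * e)) / (2.0 * gamma) * e

# Toy usage: zero-order subgradient-type steps on a non-smooth convex
# objective observed only through function values.
f = lambda x: np.abs(x).sum()          # f(x) = ||x||_1
x = np.ones(10)
for k in range(1000):
    x -= 0.01 / np.sqrt(k + 1) * l2_two_point_grad(f, x)
```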
Related Items (2)
- Gradient-free methods for non-smooth convex stochastic optimization with heavy-tailed noise on convex compact
- Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs
Cites Work
- An optimal method for stochastic composite optimization
- On stochastic gradient and subgradient methods with adaptive steplength sequences
- Gradient-free proximal methods with inexact oracle for convex stochastic nonsmooth optimization problems on the simplex
- An accelerated directional derivative method for smooth stochastic convex optimization
- Stochastic online optimization. Single-point and multi-point non-linear multi-armed bandits. Convex and strongly-convex case
- Randomized Smoothing for Stochastic Optimization
- Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
- Advances and Open Problems in Federated Learning
- Finite Difference Gradient Approximation: To Randomize or Not?
- Solving variational inequalities with Stochastic Mirror-Prox algorithm
- An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback