Sign stochastic gradient descents without bounded gradient assumption for the finite sum minimization
From MaRDI portal
Publication: 6072513
DOI: 10.1016/j.neunet.2022.02.012
OpenAlex: W4213066100
Wikidata: Q114662296
Scholia: Q114662296
MaRDI QID: Q6072513
Publication date: 13 October 2023
Published in: Neural Networks
Full work available at URL: https://doi.org/10.1016/j.neunet.2022.02.012
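As context for the title, the sign stochastic gradient descent update replaces the sampled gradient with its elementwise sign: at each step a component function of the finite sum is sampled and the iterate moves by a fixed step in the direction of minus the sign of that component's gradient. The sketch below is a generic illustration of this standard update on a toy finite-sum least-squares problem, not a reproduction of the paper's algorithm or its assumptions; the function and parameter names are hypothetical.

```python
import numpy as np

def sign_sgd(grads, x0, lr=0.05, steps=500, seed=0):
    """Generic sign SGD sketch (illustrative, not the paper's method):
    sample one component function i uniformly and update
    x <- x - lr * sign(grad_i(x))."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    n = len(grads)
    for _ in range(steps):
        i = rng.integers(n)          # sample one summand uniformly
        x -= lr * np.sign(grads[i](x))  # signed, fixed-magnitude step
    return x

# Finite sum f(x) = (1/n) * sum_i (x - a_i)^2, minimized at mean(a) = 2.5.
a = np.array([1.0, 2.0, 3.0, 4.0])
grads = [lambda x, ai=ai: 2.0 * (x - ai) for ai in a]  # grad of each summand
x_star = sign_sgd(grads, x0=np.zeros(1))
```

Because the step magnitude is fixed at `lr`, the iterate does not converge exactly but oscillates in a neighborhood of the region where the sampled signs balance (here, between the data points).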
Mathematics Subject Classification:
- Statistics (62-XX)
- Learning and adaptive systems in artificial intelligence (68T05)
- Operations research, mathematical programming (90-XX)
Related Items (1)
Cites Work
- Lectures on convex optimization
- Regularization Techniques and Suboptimal Solutions to Optimization Problems in Learning from Data
- An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Neural Network Learning as an Inverse Problem
- A Stochastic Approximation Method