A modified stochastic quasi-Newton algorithm for summing functions problem in machine learning
Publication: 6138300
DOI: 10.1007/s12190-022-01800-4
zbMath: 1518.90074
MaRDI QID: Q6138300
Publication date: 5 September 2023
Published in: Journal of Applied Mathematics and Computing
Cites Work
- A Stochastic Quasi-Newton Method for Large-Scale Optimization
- Minimizing finite sums with the stochastic average gradient
- Convergence analysis of a modified BFGS method on convex minimizations
- New quasi-Newton equation and related methods for unconstrained optimization
- The superlinear convergence of a modified BFGS-type method for unconstrained optimization
- Large-Scale Machine Learning with Stochastic Gradient Descent
- Accelerated, Parallel, and Proximal Coordinate Descent
- Robust Stochastic Approximation Approach to Stochastic Programming
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate
- A Stochastic Approximation Method
- Probability
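
The publication concerns a stochastic quasi-Newton method for minimizing a finite sum f(w) = (1/n) Σᵢ fᵢ(w), the setting shared by the cited works above. As a generic illustration only, not the paper's specific modification, the following Python sketch applies BFGS-style inverse-Hessian updates built from mini-batch gradient differences; the function names, the least-squares test problem, and all parameter values are assumptions made for this example.

```python
import numpy as np

def stochastic_bfgs(grad_i, w0, n, batch_size=32, steps=200, lr=0.1, seed=0):
    """Generic stochastic quasi-Newton sketch (not the paper's algorithm):
    BFGS-style inverse-Hessian updates from mini-batch curvature pairs."""
    rng = np.random.default_rng(seed)
    w = w0.copy()
    d = w.size
    H = np.eye(d)                          # inverse-Hessian approximation
    batch = rng.choice(n, batch_size, replace=False)
    g_prev = np.mean([grad_i(w, i) for i in batch], axis=0)
    for _ in range(steps):
        w_new = w - lr * H @ g_prev        # quasi-Newton step
        batch = rng.choice(n, batch_size, replace=False)
        # evaluate both iterates on the same batch for a consistent curvature pair
        g_new = np.mean([grad_i(w_new, i) for i in batch], axis=0)
        g_old = np.mean([grad_i(w, i) for i in batch], axis=0)
        s, y = w_new - w, g_new - g_old
        sy = s @ y
        if sy > 1e-10:                     # skip update if curvature condition fails
            rho = 1.0 / sy
            V = np.eye(d) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        w, g_prev = w_new, g_new
    return w

# Usage on an assumed least-squares finite sum f(w) = (1/n) * sum_i (a_i^T w - b_i)^2
rng = np.random.default_rng(1)
A = rng.normal(size=(500, 10))
w_true = rng.normal(size=10)
b = A @ w_true + 0.01 * rng.normal(size=500)
grad_i = lambda w, i: 2.0 * (A[i] @ w - b[i]) * A[i]
w_hat = stochastic_bfgs(grad_i, np.zeros(10), n=500)
print(np.linalg.norm(w_hat - w_true))
```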