An overview of stochastic quasi-Newton methods for large-scale machine learning
Publication: 6097379
DOI: 10.1007/s40305-023-00453-9
zbMath: 1524.90220
OpenAlex: W4321850195
MaRDI QID: Q6097379
Cong-Ying Han, Tian-de Guo, Yan Liu
Publication date: 5 June 2023
Published in: Journal of the Operations Research Society of China
Full work available at URL: https://doi.org/10.1007/s40305-023-00453-9
Mathematics Subject Classification:
- Numerical mathematical programming methods (65K05)
- Large-scale problems in mathematical programming (90C06)
- Applications of mathematical programming (90C90)
- Methods of quasi-Newton type (90C53)
Cites Work
- Unnamed items (seven unresolved citation records)
- A Stochastic Quasi-Newton Method for Large-Scale Optimization
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- An optimal method for stochastic composite optimization
- Erratum to: "Minimizing finite sums with the stochastic average gradient"
- Sample size selection in optimization methods for machine learning
- On the limited memory BFGS method for large scale optimization
- Convergence of quasi-Newton matrices generated by the symmetric rank one update
- Sub-sampled Newton methods
- Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
- Limited-memory BFGS with displacement aggregation
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
- Newton-type methods for non-convex optimization under inexact Hessian information
- Stochastic quasi-Newton with line-search regularisation
- On the Global Convergence of the BFGS Method for Nonconvex Unconstrained Optimization Problems
- Global Convergence of Online Limited Memory BFGS
- On the Influence of Momentum Acceleration on Online Learning
- A Quasi-Newton Approach to Nonsmooth Convex Optimization Problems in Machine Learning
- Probabilistic Interpretation of Linear Solvers
- Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence
- On the Use of Stochastic Hessian Information in Optimization Methods for Machine Learning
- Sizing and Least-Change Secant Methods
- Fast Curvature Matrix-Vector Products for Second-Order Gradient Descent
- Updating Quasi-Newton Matrices with Limited Storage
- Algorithms for nonlinear constraints that use Lagrangian functions
- Adaptive Sampling Strategies for Stochastic Optimization
- RES: Regularized Stochastic BFGS Algorithm
- Semi-stochastic coordinate descent
- Randomized Quasi-Newton Updates Are Linearly Convergent Matrix Inversion Algorithms
- First-Order Methods in Optimization
- Stochastic L-BFGS: Improved Convergence Rates and Practical Acceleration Strategies
- Probabilistic Line Searches for Stochastic Optimization
- Block BFGS Methods
- Optimization Methods for Large-Scale Machine Learning
- A Theoretical and Experimental Study of the Symmetric Rank-One Update
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- A robust multi-batch L-BFGS method for machine learning
- A Noise-Tolerant Quasi-Newton Algorithm for Unconstrained Optimization
- Quasi-Newton methods for machine learning: forget the past, just sample
- A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization
- On Stochastic and Deterministic Quasi-Newton Methods for Nonstrongly Convex Optimization: Asymptotic Convergence and Rate Analysis
- An investigation of Newton-Sketch and subsampled Newton methods
- Stochastic proximal quasi-Newton methods for non-convex composite optimization
- Analysis of the BFGS Method with Errors
- Adaptive, Limited-Memory BFGS Algorithms for Unconstrained Optimization
- A Rapidly Convergent Descent Method for Minimization
- Quasi-Newton Methods and their Application to Function Minimisation
- A Family of Variable-Metric Methods Derived by Variational Means
- Variations on Variable-Metric Methods
- A new approach to variable metric algorithms
- Conditioning of Quasi-Newton Methods for Function Minimization
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate
- Adjustment of an Inverse Matrix Corresponding to a Change in One Element of a Given Matrix
- Methods of conjugate gradients for solving linear systems
- A Stochastic Approximation Method
- Exact and inexact subsampled Newton methods for optimization
- A distributed optimisation framework combining natural gradient with Hessian-free for discriminative sequence training
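As a pointer for readers new to the area: the methods surveyed in this publication combine stochastic gradient estimates with quasi-Newton curvature updates such as L-BFGS (cf. "Updating Quasi-Newton Matrices with Limited Storage" and "A Stochastic Quasi-Newton Method for Large-Scale Optimization" in the list above). The sketch below is a minimal, hypothetical illustration of that idea, not the algorithm of any single cited paper; the function names (`two_loop`, `stochastic_lbfgs`, `sample_batch`) and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def two_loop(grad, memory):
    """L-BFGS two-loop recursion: apply the implicit inverse Hessian to grad."""
    q = grad.copy()
    alphas = []
    for s, y in reversed(memory):            # newest curvature pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append((a, rho, s, y))
        q -= a * y
    if memory:                               # scale by s'y / y'y (initial Hessian guess)
        s, y = memory[-1]
        q *= (s @ y) / (y @ y)
    for a, rho, s, y in reversed(alphas):    # oldest pair first
        b = rho * (y @ q)
        q += (a - b) * s
    return q

def stochastic_lbfgs(w, grad, sample_batch, n_iters=200, lr=0.1, m=10):
    """Toy stochastic L-BFGS loop; grad(w, batch) is a minibatch gradient oracle."""
    memory = []                              # stores up to m curvature pairs (s, y)
    for _ in range(n_iters):
        batch = sample_batch()
        g = grad(w, batch)
        w_new = w - lr * two_loop(g, memory)
        # Reuse the same minibatch for both gradients so the pair (s, y)
        # reflects curvature rather than sampling noise; skip the update
        # unless the curvature condition y's > 0 holds.
        s, y = w_new - w, grad(w_new, batch) - g
        if y @ s > 1e-10:
            memory.append((s, y))
            if len(memory) > m:
                memory.pop(0)
        w = w_new
    return w

# Usage on a synthetic least-squares problem.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((1000, 20)), rng.standard_normal(1000)
grad = lambda w, idx: A[idx].T @ (A[idx] @ w - b[idx]) / len(idx)
w_opt = stochastic_lbfgs(np.zeros(20), grad,
                         lambda: rng.choice(1000, size=64, replace=False))
```

Reusing one minibatch for both gradients in the curvature pair (as in online L-BFGS variants) and skipping updates whose yᵀs is not positive are two standard safeguards that distinguish stochastic quasi-Newton schemes from a naive port of deterministic BFGS.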