Stochastic sub-sampled Newton method with variance reduction
DOI: 10.1142/S0219691319500413
zbMATH Open: 1433.62175
OpenAlex: W2945296329
MaRDI QID: Q5204645
FDO: Q5204645
Authors: Zhijian Luo, Yun-Tao Qian
Publication date: 5 December 2019
Published in: International Journal of Wavelets, Multiresolution and Information Processing
Full work available at URL: https://doi.org/10.1142/s0219691319500413
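The record carries no abstract, so the following Python sketch is only a rough illustration of the class of method the title names: an SVRG-style variance-reduced stochastic gradient combined with a Newton step built from a sub-sampled Hessian, applied here to ridge-regularized least squares. The objective, function names, and all parameter values are illustrative assumptions and are not taken from the paper.

```python
# Illustrative sketch (not the paper's exact algorithm): SVRG-style
# variance-reduced gradients combined with Newton steps whose Hessian is
# estimated on a random sub-sample, for ridge-regularized least squares.
import numpy as np

def subsampled_newton_svrg(A, b, lam=1e-2, outer_iters=10, inner_iters=50,
                           batch=8, hess_sample=64, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)

    def grad_batch(x, idx):
        # Mini-batch gradient of (1/2n)||Ax - b||^2 + (lam/2)||x||^2.
        Ai = A[idx]
        return Ai.T @ (Ai @ x - b[idx]) / len(idx) + lam * x

    for _ in range(outer_iters):
        x_ref = x.copy()
        full_grad = A.T @ (A @ x_ref - b) / n + lam * x_ref  # snapshot gradient
        for _ in range(inner_iters):
            # Variance-reduced stochastic gradient (SVRG correction).
            idx = rng.choice(n, size=batch, replace=False)
            g = grad_batch(x, idx) - grad_batch(x_ref, idx) + full_grad
            # Sub-sampled Hessian: (1/|S|) A_S^T A_S + lam * I on a random sample S.
            S = rng.choice(n, size=hess_sample, replace=False)
            H = A[S].T @ A[S] / hess_sample + lam * np.eye(d)
            x = x - step * np.linalg.solve(H, g)  # inexact Newton step
    return x

if __name__ == "__main__":
    # Tiny usage example on synthetic data.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((500, 20))
    x_true = rng.standard_normal(20)
    b = A @ x_true + 0.01 * rng.standard_normal(500)
    x_hat = subsampled_newton_svrg(A, b)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```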
MSC classification:
- Numerical mathematical programming methods (65K05)
- Learning and adaptive systems in artificial intelligence (68T05)
- Convex programming (90C25)
- Large-scale problems in mathematical programming (90C06)
- Image analysis in multivariate analysis (62H35)
Cites Work
- Adaptive subgradient methods for online learning and stochastic optimization
- A stochastic quasi-Newton method for large-scale optimization
- A Stochastic Approximation Method
- On the use of stochastic Hessian information in optimization methods for machine learning
- Incremental majorization-minimization optimization with application to large-scale machine learning
- Efficiency of coordinate descent methods on huge-scale optimization problems
- A proximal stochastic gradient method with progressive variance reduction
- Semi-stochastic coordinate descent
- Margin maximization in spherical separation
- Optimization Methods for Large-Scale Machine Learning
- Sketching as a tool for numerical linear algebra
- Title not available
- Riemannian metrics for neural networks I: feedforward networks
- Title not available
Cited In (2)