Stochastic sub-sampled Newton method with variance reduction
Publication:5204645
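The title points to a standard recipe: Newton-type steps whose Hessian is estimated from a random sub-sample, combined with an SVRG-style variance-reduced gradient. The sketch below illustrates that generic combination only, not necessarily the paper's exact algorithm; the objective (L2-regularized logistic regression), the batch sizes, and the unit step size are assumptions made for the example.

```python
# Minimal sketch (illustrative, not the paper's exact method): sub-sampled
# Newton steps with an SVRG-style variance-reduced gradient, applied to
# L2-regularized logistic regression. All hyperparameters are assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, X, y, lam):
    # Gradient of the average logistic loss on rows (X, y), plus the L2 term.
    return X.T @ (sigmoid(X @ w) - y) / len(y) + lam * w

def hessian(w, X, lam):
    # Hessian of the average logistic loss on rows X, plus the L2 term.
    p = sigmoid(X @ w)
    d = p * (1.0 - p)
    return (X.T * d) @ X / X.shape[0] + lam * np.eye(X.shape[1])

def svrg_subsampled_newton(X, y, lam=1e-3, eta=1.0,
                           epochs=10, inner=50, bg=32, bh=64, seed=0):
    rng = np.random.default_rng(seed)
    n, dim = X.shape
    w = np.zeros(dim)
    for _ in range(epochs):
        w_snap = w.copy()
        mu = grad(w_snap, X, y, lam)              # full gradient at snapshot
        for _ in range(inner):
            ig = rng.choice(n, bg, replace=False)  # gradient mini-batch
            ih = rng.choice(n, bh, replace=False)  # Hessian mini-batch
            # SVRG variance-reduced gradient estimate
            g = (grad(w, X[ig], y[ig], lam)
                 - grad(w_snap, X[ig], y[ig], lam) + mu)
            # Newton step against the sub-sampled Hessian
            H = hessian(w, X[ih], lam)
            w -= eta * np.linalg.solve(H, g)
    return w
```

In a scheme of this shape, solving against the sub-sampled Hessian keeps the per-iteration cost at roughly O(b_h·d² + d³), independent of n, while the variance-reduced gradient allows a constant step size throughout the inner loop.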
Cites work
- Scientific article (no title recorded); zbMATH DE number 7306852
- A Stochastic Approximation Method
- A proximal stochastic gradient method with progressive variance reduction
- A stochastic quasi-Newton method for large-scale optimization
- Adaptive subgradient methods for online learning and stochastic optimization
- Efficiency of coordinate descent methods on huge-scale optimization problems
- Incremental majorization-minimization optimization with application to large-scale machine learning
- Margin maximization in spherical separation
- On the use of stochastic Hessian information in optimization methods for machine learning
- Optimization methods for large-scale machine learning
- Riemannian metrics for neural networks. I: Feedforward networks
- Second-order stochastic optimization for machine learning in linear time
- Semi-stochastic coordinate descent
- Sketching as a tool for numerical linear algebra
Cited in (8)
- Subsampled Hessian Newton Methods for Supervised Learning
- Approximate Newton methods
- Sketched Newton-Raphson
- Nesterov's acceleration for approximate Newton
- On the use of stochastic Hessian information in optimization methods for machine learning
- A stochastic variance reduced gradient using Barzilai-Borwein techniques as second order information
- Second-order stochastic optimization for machine learning in linear time
- Utilizing second order information in minibatch stochastic variance reduced proximal iterations