Subsampled Hessian Newton Methods for Supervised Learning
DOI: 10.1162/NECO_A_00751 · zbMATH Open: 1472.68162 · DBLP: journals/neco/WangHL15 · OpenAlex: W2103346443 · Wikidata: Q40830680 · Scholia: Q40830680 · MaRDI QID: Q5380307 · FDO: Q5380307
Authors: Chien-Chih Wang, Chun-Heng Huang, Chih-Jen Lin
Publication date: 4 June 2019
Published in: Neural Computation
Full work available at URL: https://doi.org/10.1162/neco_a_00751
Recommendations
- Sub-sampled Newton methods
- Subsampled inexact Newton methods for minimizing large sums of convex functions
- Exact and inexact subsampled Newton methods for optimization
- Adaptive iterative Hessian sketch via \(A\)-optimal subsampling
- Stochastic sub-sampled Newton method with variance reduction
- Subsampled nonmonotone spectral gradient methods
- Sublinear optimization for machine learning
- Subgradient and sampling algorithms for \(\ell_1\) regression
- Hessian averaging in stochastic Newton methods achieves superlinear convergence
- Optimal subsampling for softmax regression
Classification
- Learning and adaptive systems in artificial intelligence (68T05)
- Convex programming (90C25)
- Large-scale problems in mathematical programming (90C06)
Cites Work
- LIBLINEAR: a library for large linear classification
- Title not available
- On the use of stochastic Hessian information in optimization methods for machine learning
- Trust region Newton method for logistic regression
- Title not available
- Trust Region Methods
- A modified finite Newton method for fast solution of large scale linear SVMs
- Title not available
- Performance of first-order methods for smooth convex minimization: a novel approach
- Some methods of speeding up the convergence of iteration methods
- Title not available
- A finite Newton method for classification
- Iterative scaling and coordinate descent methods for maximum entropy models
- A simple automatic derivative evaluation program
- Fast Curvature Matrix-Vector Products for Second-Order Gradient Descent
Cited In (5)
- Hessian averaging in stochastic Newton methods achieves superlinear convergence
- On the use of stochastic Hessian information in optimization methods for machine learning
- Sub-sampled Newton methods
- Distributed Newton Methods for Deep Neural Networks
- Adaptive iterative Hessian sketch via \(A\)-optimal subsampling