On the Use of Stochastic Hessian Information in Optimization Methods for Machine Learning

From MaRDI portal

Publication:3105787

DOI: 10.1137/10079923X
zbMath: 1245.65062
OpenAlex: W1991083751
MaRDI QID: Q3105787

Richard H. Byrd, Gillian M. Chin, Will Neveitt, Jorge Nocedal

Publication date: 9 January 2012

Published in: SIAM Journal on Optimization

Full work available at URL: https://doi.org/10.1137/10079923x
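
The paper concerns Newton-type optimization methods for machine learning in which curvature (Hessian) information is estimated from a small random subsample of the training data, while the gradient is computed on a larger sample. Below is a minimal Python sketch of one such subsampled Hessian Newton-CG step for L2-regularized logistic regression; the function name, parameter choices, and all implementation details are illustrative assumptions, not the authors' code.

```python
import numpy as np

def subsampled_newton_cg_step(w, X, y, lam, sample_frac=0.05, cg_iters=10):
    """One Newton-CG step in which Hessian-vector products are formed on a
    random subsample of the data. Illustrative sketch only: assumes labels
    y in {-1, +1} and an L2-regularized logistic-regression loss."""
    n, d = X.shape
    z = X @ w
    p = 1.0 / (1.0 + np.exp(-y * z))          # P(correct label) per example
    # Gradient of the average loss over the full data set.
    g = -(X.T @ (y * (1.0 - p))) / n + lam * w

    # Draw a small subsample S used only for curvature estimation.
    S = np.random.choice(n, size=max(1, int(sample_frac * n)), replace=False)
    Xs, zs = X[S], z[S]
    ps = 1.0 / (1.0 + np.exp(-zs))            # sigmoid on the subsample
    Ds = ps * (1.0 - ps)                      # diagonal Hessian weights

    def hess_vec(v):
        # Subsampled Hessian-vector product: (1/|S|) Xs^T D Xs v + lam v.
        return Xs.T @ (Ds * (Xs @ v)) / len(S) + lam * v

    # Conjugate gradient on the subsampled Newton system H p = -g.
    pdir = np.zeros(d)
    r = -g.copy()                             # residual b - H x0 with x0 = 0
    q = r.copy()                              # CG search direction
    rs = r @ r
    for _ in range(cg_iters):
        Hq = hess_vec(q)
        alpha = rs / (q @ Hq)
        pdir += alpha * q
        r -= alpha * Hq
        rs_new = r @ r
        if np.sqrt(rs_new) < 1e-8:
            break
        q = r + (rs_new / rs) * q
        rs = rs_new
    return w + pdir                           # unit step for simplicity
```

In practice such a step would be combined with a line search and a schedule for the subsample size; the fixed `sample_frac` and unit step length here are simplifications.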




Related Items (51)

On data preconditioning for regularized loss minimization
Clustering-based preconditioning for stochastic programs
Probabilistic learning inference of boundary value problem with uncertainties based on Kullback-Leibler divergence under implicit constraints
Quasi-Newton methods for machine learning: forget the past, just sample
Descent direction method with line search for unconstrained optimization in noisy environment
A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
Unnamed Item
A nonmonotone line search method for stochastic optimization problems
Predictive coarse-graining
SCORE: approximating curvature information under self-concordant regularization
Adaptive stochastic approximation algorithm
An overview of stochastic quasi-Newton methods for large-scale machine learning
Newton-MR: inexact Newton method with minimum residual sub-problem solver
Generalized linear models for massive data via doubly-sketching
Unnamed Item
Nonlinear Gradient Mappings and Stochastic Optimization: A General Framework with Applications to Heavy-Tail Noise
Hessian averaging in stochastic Newton methods achieves superlinear convergence
Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence
Discriminative Bayesian filtering lends momentum to the stochastic Newton method for minimizing log-convex functions
Stable architectures for deep neural networks
Subsampled Hessian Newton Methods for Supervised Learning
Entropy-based closure for probabilistic learning on manifolds
Parallel Optimization Techniques for Machine Learning
Convergence of Newton-MR under Inexact Hessian Information
Sub-sampled Newton methods
Spectral projected gradient method for stochastic optimization
Fast Approximation of the Gauss--Newton Hessian Matrix for the Multilayer Perceptron
Probabilistic learning on manifolds constrained by nonlinear partial differential equations for small datasets
Optimization Methods for Large-Scale Machine Learning
Distributed Newton Methods for Deep Neural Networks
Design optimization under uncertainties of a mesoscale implant in biological tissues using a probabilistic learning algorithm
Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
Robust inversion, dimensionality reduction, and randomized sampling
Sample size selection in optimization methods for machine learning
Nonlinear optimization and support vector machines
Nonlinear optimization and support vector machines
A Stochastic Quasi-Newton Method for Large-Scale Optimization
Stochastic sub-sampled Newton method with variance reduction
Compact representations of structured BFGS matrices
A robust multi-batch L-BFGS method for machine learning
Parallel Simultaneous Perturbation Optimization
An Inertial Newton Algorithm for Deep Learning
Unnamed Item
A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization
Linesearch Newton-CG methods for convex optimization with noise
Unnamed Item
Nonmonotone line search methods with variable sample size
Unnamed Item
LSOS: Line-search second-order stochastic optimization methods for nonconvex finite sums
Newton-like Method with Diagonal Correction for Distributed Optimization



This page was built for publication: On the Use of Stochastic Hessian Information in Optimization Methods for Machine Learning