RES: Regularized Stochastic BFGS Algorithm
From MaRDI portal
Publication: 4579592
Abstract: RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients both for the determination of descent directions and for the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the eigenvalues of the sample functions' Hessians are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
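The abstract's core idea — stochastic gradients used both for the descent direction and for the curvature (secant) pair, with regularization keeping the Hessian approximation's eigenvalues bounded — can be sketched as follows. This is a minimal illustration on an assumed toy least-squares objective, not the paper's exact update rules; the parameter values (`gamma`, `delta`, `eta`) and the minibatch scheme are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of a regularized stochastic BFGS iteration. The toy
# objective f(x) = (1/2n) * ||Ax - b||^2 and all hyperparameters are
# assumptions for illustration, not taken from the paper.
rng = np.random.default_rng(0)
d, n = 5, 200
A = rng.normal(size=(n, d))
x_star = rng.normal(size=d)
b = A @ x_star                       # consistent system, so the optimum is x_star

def stoch_grad(x, idx):
    """Minibatch stochastic gradient of f at x."""
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / len(idx)

x = np.zeros(d)
H = np.eye(d)                        # inverse-Hessian approximation
gamma, delta, eta = 1e-2, 1e-4, 0.05 # regularization constants and step size (assumed)

for t in range(500):
    idx = rng.choice(n, size=20, replace=False)
    g = stoch_grad(x, idx)
    # Regularized descent direction: adding gamma*I bounds the step's
    # curvature matrix away from singularity.
    x_new = x - eta * (H + gamma * np.eye(d)) @ g
    # Curvature pair computed from the SAME minibatch, with a small
    # regularization shift delta*s on the gradient variation.
    s = x_new - x
    y = stoch_grad(x_new, idx) - g - delta * s
    if s @ y > 1e-10:                # standard BFGS curvature safeguard
        rho = 1.0 / (s @ y)
        V = np.eye(d) - rho * np.outer(s, y)
        H = V @ H @ V.T + rho * np.outer(s, s)  # BFGS inverse-Hessian update
    x = x_new

print(np.linalg.norm(x - x_star))    # distance to the optimum after 500 steps
```

Reusing the same minibatch for both gradients in the pair `(s, y)` is what makes the curvature estimate meaningful despite the noise; the regularization terms are what the abstract credits for the eigenvalue bounds that guarantee convergence.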
Cited in (28)
- Scientific article; zbMATH DE number 7415103 (no title available)
- A class of parallel doubly stochastic algorithms for large-scale learning
- On the complexity of a stochastic Levenberg-Marquardt method
- LSOS: Line-search second-order stochastic optimization methods for nonconvex finite sums
- The regularized stochastic Nesterov's accelerated quasi-Newton method with applications
- Stochastic quasi-Newton with line-search regularisation
- A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization
- A stochastic trust region method for unconstrained optimization problems
- On the inversion-free Newton's method and its applications
- Optimization methods for large-scale machine learning
- IQN: an incremental quasi-Newton method with local superlinear convergence rate
- Nonasymptotic convergence of stochastic proximal point methods for constrained convex optimization
- A stochastic quasi-Newton method for large-scale optimization
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- An efficient stochastic Newton algorithm for parameter estimation in logistic regressions
- Stochastic proximal quasi-Newton methods for non-convex composite optimization
- A globally convergent incremental Newton method
- An overview of stochastic quasi-Newton methods for large-scale machine learning
- A single timescale stochastic quasi-Newton method for stochastic optimization
- Bolstering stochastic gradient descent with model building
- A stochastic semismooth Newton method for nonsmooth nonconvex optimization
- Newton-like method with diagonal correction for distributed optimization
- Parsimonious online learning with kernels via sparse projections in function space
- An efficient averaged stochastic Gauss-Newton algorithm for estimating parameters of nonlinear regressions models
- On stochastic and deterministic quasi-Newton methods for nonstrongly convex optimization: asymptotic convergence and rate analysis
- Open problem: Iterative schemes for stochastic optimization: convergence statements and limit theorems
- Second-order stochastic optimization for machine learning in linear time
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization