RES: Regularized Stochastic BFGS Algorithm
Publication: 4579592
DOI: 10.1109/TSP.2014.2357775
zbMATH Open: 1394.94405
arXiv: 1401.7625
Wikidata: Q30039871 (Scholia: Q30039871)
MaRDI QID: Q4579592
FDO: Q4579592
Authors: Aryan Mokhtari, Alejandro Ribeiro
Publication date: 22 August 2018
Published in: IEEE Transactions on Signal Processing
Abstract: RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients both for the determination of descent directions and for the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
Full work available at URL: https://arxiv.org/abs/1401.7625
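The abstract names the two ingredients of RES: stochastic gradients used both for the descent direction and for the curvature (secant) pair, plus regularization that keeps the Hessian approximation well behaved. The NumPy sketch below illustrates one such loop under stated assumptions; the function name res, the diminishing step schedule step0/(t+1), and the constants delta and gamma are illustrative choices, not the authors' reference implementation (see the arXiv preprint above for the exact update and its safeguards).

```python
import numpy as np

def res(grad_sample, w0, n_samples, n_iters=500,
        step0=0.5, delta=1e-3, gamma=1e-2, seed=0):
    """Sketch of a regularized stochastic BFGS (RES-style) loop.

    grad_sample(w, i): stochastic gradient of the i-th sample function.
    delta, gamma: regularization constants (illustrative values only).
    """
    rng = np.random.default_rng(seed)
    d = w0.size
    w = w0.astype(float).copy()
    B = np.eye(d)                      # curvature (Hessian) approximation
    for t in range(n_iters):
        i = int(rng.integers(n_samples))
        g = grad_sample(w, i)          # stochastic gradient at w
        # Regularized descent direction: B^{-1} g plus an identity bias,
        # which keeps the step useful even when B is poorly conditioned.
        w_new = w - (step0 / (t + 1)) * (np.linalg.solve(B, g) + gamma * g)
        # Curvature pair evaluated on the SAME sample i, so the
        # stochastic secant condition is self-consistent.
        v = w_new - w
        r = grad_sample(w_new, i) - g - delta * v   # regularized variation
        if v @ r > 1e-10:              # skip updates that break positivity
            Bv = B @ v
            B = (B + np.outer(r, r) / (v @ r)
                   - np.outer(Bv, Bv) / (v @ Bv)
                   + delta * np.eye(d))
        w = w_new
    return w
```

The two regularization terms mirror what the abstract emphasizes: the gamma-scaled identity added to the descent direction keeps every step a descent step even when the curvature estimate is poor, and the delta terms in the secant pair and matrix update are intended to keep the eigenvalues of B bounded, in the spirit of the eigenvalue-bound condition under which the paper's convergence result holds.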
Cited In (28)
- A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization
- A class of parallel doubly stochastic algorithms for large-scale learning
- On the complexity of a stochastic Levenberg-Marquardt method
- Newton-like method with diagonal correction for distributed optimization
- Second-order stochastic optimization for machine learning in linear time
- The regularized stochastic Nesterov's accelerated quasi-Newton method with applications
- Stochastic quasi-Newton with line-search regularisation
- Optimization methods for large-scale machine learning
- Nonasymptotic convergence of stochastic proximal point methods for constrained convex optimization
- A globally convergent incremental Newton method
- Open problem: Iterative schemes for stochastic optimization: convergence statements and limit theorems
- LSOS: Line-search second-order stochastic optimization methods for nonconvex finite sums
- IQN: an incremental quasi-Newton method with local superlinear convergence rate
- On the inversion-free Newton's method and its applications
- An efficient stochastic Newton algorithm for parameter estimation in logistic regressions
- Stochastic proximal quasi-Newton methods for non-convex composite optimization
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- On stochastic and deterministic quasi-Newton methods for nonstrongly convex optimization: asymptotic convergence and rate analysis
- Title not available
- A stochastic trust region method for unconstrained optimization problems
- A single timescale stochastic quasi-Newton method for stochastic optimization
- An efficient averaged stochastic Gauss-Newton algorithm for estimating parameters of nonlinear regressions models
- A stochastic semismooth Newton method for nonsmooth nonconvex optimization
- An overview of stochastic quasi-Newton methods for large-scale machine learning
- Bolstering stochastic gradient descent with model building
- Parsimonious online learning with kernels via sparse projections in function space
- A stochastic quasi-Newton method for large-scale optimization
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization