Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
Publication: 5737735
DOI: 10.1137/15M1053141 · zbMath: 1365.90182 · arXiv: 1607.01231 · OpenAlex: W2964303576 · MaRDI QID: Q5737735
Xiao Wang, Wei Liu, Donald Goldfarb, Shi-Qian Ma
Publication date: 30 May 2017
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1607.01231
Keywords: stochastic approximation; variance reduction; quasi-Newton method; nonconvex stochastic optimization; damped limited-memory BFGS method
MSC classifications: Abstract computational complexity for mathematical programming problems (90C60); Nonlinear programming (90C30); Stochastic programming (90C15); Stochastic approximation (62L20)
Related Items
- A Trust-region Method for Nonsmooth Nonconvex Optimization
- An adaptive Hessian approximated stochastic gradient MCMC method
- A fully stochastic second-order trust region method
- Limited-memory BFGS with displacement aggregation
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
- QNG: A Quasi-Natural Gradient Method for Large-Scale Statistical Learning
- Sketch-based empirical natural gradient methods for deep learning
- A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization
- Stochastic Trust-Region Methods with Trust-Region Radius Depending on Probabilistic Models
- slimTrain: A Stochastic Approximation Method for Training Separable Deep Neural Networks
- An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians
- A framework for parallel second order incremental optimization algorithms for solving partially separable problems
- Adaptive stochastic approximation algorithm
- A Riemannian subspace BFGS trust region method
- An overview of stochastic quasi-Newton methods for large-scale machine learning
- On Stochastic and Deterministic Quasi-Newton Methods for Nonstrongly Convex Optimization: Asymptotic Convergence and Rate Analysis
- A modified stochastic quasi-Newton algorithm for summing functions problem in machine learning
- Proximal quasi-Newton method for composite optimization over the Stiefel manifold
- A proximal trust-region method for nonsmooth optimization with inexact function and gradient evaluations
- Differentially private inference via noisy optimization
- A single timescale stochastic quasi-Newton method for stochastic optimization
- Epi-Regularization of Risk Measures
- Secant penalized BFGS: a noise robust quasi-Newton method via penalizing the secant condition
- Stochastic variance reduced gradient methods using a trust-region-like scheme
- IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate
- Dynamic stochastic approximation for multi-stage stochastic optimization
- Stochastic proximal quasi-Newton methods for non-convex composite optimization
- Combining stochastic adaptive cubic regularization with negative curvature for nonconvex optimization
- A robust multi-batch L-BFGS method for machine learning
- A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
- On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization
- Ghost Penalties in Nonconvex Constrained Optimization: Diminishing Stepsizes and Iteration Complexity
- Structure-preserving deep learning
- LSOS: Line-search second-order stochastic optimization methods for nonconvex finite sums
Cites Work
- A Stochastic Quasi-Newton Method for Large-Scale Optimization
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- An optimal method for stochastic composite optimization
- Validation analysis of mirror descent stochastic approximation method
- On the limited memory BFGS method for large scale optimization
- Learning by mirror averaging
- Introductory lectures on convex optimization. A basic course.
- Recursive aggregation of estimators by the mirror descent algorithm with averaging
- Convergence theory for nonconvex stochastic programming with an application to mixed logit
- Global Convergence of Online Limited Memory BFGS
- Adaptivity of averaged stochastic gradient descent to local strong convexity for logistic regression
- Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization
- Penalty methods with stochastic approximation for stochastic nonlinear programming
- On the Use of Stochastic Hessian Information in Optimization Methods for Machine Learning
- Robust Stochastic Approximation Approach to Stochastic Programming
- Stochastic quasigradient methods and their application to system optimization
- A method of aggregate stochastic subgradients with on-line stepsize rules for convex stochastic programming problems
- Acceleration of Stochastic Approximation by Averaging
- RES: Regularized Stochastic BFGS Algorithm
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Understanding Machine Learning
- A Family of Variable-Metric Methods Derived by Variational Means
- A new approach to variable metric algorithms
- The Convergence of a Class of Double-rank Minimization Algorithms 1. General Considerations
- Conditioning of Quasi-Newton Methods for Function Minimization
- Asymptotic Distribution of Stochastic Approximation Procedures
- A Stochastic Approximation Method
- On a Stochastic Approximation Method
- Probability
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization