Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization

From MaRDI portal
Publication: 5737735

DOI: 10.1137/15M1053141
zbMath: 1365.90182
arXiv: 1607.01231
OpenAlex: W2964303576
MaRDI QID: Q5737735

Xiao Wang, Wei Liu, Donald Goldfarb, Shi-Qian Ma

Publication date: 30 May 2017

Published in: SIAM Journal on Optimization

Full work available at URL: https://arxiv.org/abs/1607.01231



Related Items

A Trust-region Method for Nonsmooth Nonconvex Optimization
An adaptive Hessian approximated stochastic gradient MCMC method
A fully stochastic second-order trust region method
Limited-memory BFGS with displacement aggregation
A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
QNG: A Quasi-Natural Gradient Method for Large-Scale Statistical Learning
Sketch-based empirical natural gradient methods for deep learning
A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization
Stochastic Trust-Region Methods with Trust-Region Radius Depending on Probabilistic Models
slimTrain---A Stochastic Approximation Method for Training Separable Deep Neural Networks
An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians
A framework for parallel second order incremental optimization algorithms for solving partially separable problems
Adaptive stochastic approximation algorithm
A Riemannian subspace BFGS trust region method
An overview of stochastic quasi-Newton methods for large-scale machine learning
On Stochastic and Deterministic Quasi-Newton Methods for Nonstrongly Convex Optimization: Asymptotic Convergence and Rate Analysis
A modified stochastic quasi-Newton algorithm for summing functions problem in machine learning
Proximal quasi-Newton method for composite optimization over the Stiefel manifold
A proximal trust-region method for nonsmooth optimization with inexact function and gradient evaluations
Differentially private inference via noisy optimization
A single timescale stochastic quasi-Newton method for stochastic optimization
Epi-Regularization of Risk Measures
Secant penalized BFGS: a noise robust quasi-Newton method via penalizing the secant condition
Stochastic variance reduced gradient methods using a trust-region-like scheme
IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate
Dynamic stochastic approximation for multi-stage stochastic optimization
Stochastic proximal quasi-Newton methods for non-convex composite optimization
Combining stochastic adaptive cubic regularization with negative curvature for nonconvex optimization
A robust multi-batch L-BFGS method for machine learning
A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization
Ghost Penalties in Nonconvex Constrained Optimization: Diminishing Stepsizes and Iteration Complexity
Structure-preserving deep learning
LSOS: Line-search second-order stochastic optimization methods for nonconvex finite sums


Uses Software


Cites Work