A Stochastic Quasi-Newton Method for Large-Scale Optimization
DOI: 10.1137/140954362
zbMath: 1382.65166
arXiv: 1401.7020
OpenAlex: W2963941964
MaRDI QID: Q121136
R. H. Byrd, S. L. Hansen, J. Nocedal, Y. Singer
Publication date: 27 January 2014
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1401.7020
Mathematics Subject Classification: Numerical mathematical programming methods (65K05) ⋮ Large-scale problems in mathematical programming (90C06) ⋮ Nonlinear programming (90C30) ⋮ Methods of quasi-Newton type (90C53) ⋮ Methods of successive quadratic programming type (90C55)
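The method of the paper pairs stochastic gradient steps with a limited-memory BFGS model whose curvature pairs are built from subsampled Hessian-vector products evaluated at averaged iterates, rather than from differences of noisy gradients. Below is a minimal Python sketch of that loop under a least-squares objective; the function names (sqn, two_loop), the parameter values (L, m, batch sizes), and the 1/sqrt(k) step-size decay are illustrative assumptions for this sketch, not the paper's tuned settings.

```python
import numpy as np

def two_loop(g, pairs):
    """L-BFGS two-loop recursion: apply the inverse-Hessian approximation
    built from stored (s, y) curvature pairs to the gradient g."""
    q = g.copy()
    alphas = []
    for s, y in reversed(pairs):          # newest pair first
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    s, y = pairs[-1]
    q *= (s @ y) / (y @ y)                # initial scaling H0 = (s'y / y'y) I
    for (s, y), a in zip(pairs, reversed(alphas)):   # oldest pair first
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return q

def sqn(A, b, n_iter=2000, L=10, m=5, batch=32, h_batch=256, alpha=0.05):
    """Sketch of a stochastic quasi-Newton loop on the toy problem
    f(x) = (1/2n)||Ax - b||^2.  Every L steps, a curvature pair is formed
    from a subsampled Hessian-vector product at averaged iterates."""
    n, d = A.shape
    x = np.zeros(d)
    pairs = []                            # limited-memory (s, y) pairs
    x_sum = np.zeros(d)                   # running sum for iterate averaging
    x_bar_prev = None
    rng = np.random.default_rng(0)
    for k in range(1, n_iter + 1):
        idx = rng.choice(n, batch, replace=False)
        g = A[idx].T @ (A[idx] @ x - b[idx]) / batch   # stochastic gradient
        step = alpha / np.sqrt(k)                      # illustrative decay
        if pairs:
            x -= step * two_loop(g, pairs)             # quasi-Newton step
        else:
            x -= step * g                              # plain SGD at first
        x_sum += x
        if k % L == 0:                    # curvature update every L steps
            x_bar = x_sum / L
            x_sum = np.zeros(d)
            if x_bar_prev is not None:
                s = x_bar - x_bar_prev
                # subsampled Hessian-vector product: y = (A_S' A_S / |S|) s
                hidx = rng.choice(n, h_batch, replace=False)
                y = A[hidx].T @ (A[hidx] @ s) / h_batch
                if s @ y > 1e-10:         # keep only safe positive curvature
                    pairs.append((s, y))
                    if len(pairs) > m:
                        pairs.pop(0)
            x_bar_prev = x_bar
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((5000, 50))
    x_true = rng.standard_normal(50)
    b = A @ x_true + 0.01 * rng.standard_normal(5000)
    x_hat = sqn(A, b)
    print("relative error:",
          np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Decoupling curvature estimation from the gradient step is the point of the design: the Hessian subsample can be larger but is touched only once every L iterations, so its cost is amortized against L cheap minibatch gradient steps.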
Related Items
A decoupling approach for time-dependent robust optimization with application to power semiconductor devices ⋮ Managing randomization in the multi-block alternating direction method of multipliers for quadratic optimization ⋮ Bayesian sparse learning with preconditioned stochastic gradient MCMC and its applications ⋮ An adaptive Hessian approximated stochastic gradient MCMC method ⋮ A fully stochastic second-order trust region method ⋮ Quasi-Newton methods for machine learning: forget the past, just sample ⋮ Limited-memory BFGS with displacement aggregation ⋮ A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization ⋮ A Stochastic Second-Order Generalized Estimating Equations Approach for Estimating Association Parameters ⋮ QNG: A Quasi-Natural Gradient Method for Large-Scale Statistical Learning ⋮ A new robust class of skew elliptical distributions ⋮ Sketch-based empirical natural gradient methods for deep learning ⋮ A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization ⋮ A New Likelihood Ratio Method for Training Artificial Neural Networks ⋮ Two-dimensional distribution of streamwise velocity in open channel flow using maximum entropy principle: incorporation of additional constraints based on conservation laws ⋮ slimTrain---A Stochastic Approximation Method for Training Separable Deep Neural Networks ⋮ Towards explicit superlinear convergence rate for SR1 ⋮ A framework for parallel second order incremental optimization algorithms for solving partially separable problems ⋮ Efficient learning rate adaptation based on hierarchical optimization approach ⋮ Adaptive stochastic approximation algorithm ⋮ A stochastic variance reduced gradient using Barzilai-Borwein techniques as second order information ⋮ An overview of stochastic quasi-Newton methods for large-scale machine learning ⋮ Two-stage 2D-to-3D reconstruction of realistic microstructures: implementation and numerical validation by effective properties ⋮ Inexact restoration with subsampled trust-region methods for finite-sum minimization ⋮ On Stochastic and Deterministic Quasi-Newton Methods for Nonstrongly Convex Optimization: Asymptotic Convergence and Rate Analysis ⋮ Adaptive step size rules for stochastic optimization in large-scale learning ⋮ A modified stochastic quasi-Newton algorithm for summing functions problem in machine learning ⋮ On the complexity of a stochastic Levenberg-Marquardt method ⋮ The regularized stochastic Nesterov's accelerated quasi-Newton method with applications ⋮ On pseudoinverse-free block maximum residual nonlinear Kaczmarz method for solving large-scale nonlinear system of equations ⋮ Trust-region algorithms for training responses: machine learning methods using indefinite Hessian approximations ⋮ Riemannian Natural Gradient Methods ⋮ A single timescale stochastic quasi-Newton method for stochastic optimization ⋮ Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence ⋮ Secant penalized BFGS: a noise robust quasi-Newton method via penalizing the secant condition ⋮ On the asymptotic rate of convergence of stochastic Newton algorithms and their weighted averaged versions ⋮ Use of projective coordinate descent in the Fekete problem ⋮ Convergence of Inexact Forward--Backward Algorithms Using the Forward--Backward Envelope ⋮ Spectral projected gradient method for stochastic optimization ⋮ Block BFGS Methods ⋮ Quasi-Newton methods: superlinear convergence without line searches for self-concordant functions ⋮ Quasi-Newton smoothed functional algorithms for unconstrained and constrained simulation optimization ⋮ Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization ⋮ Global convergence of a modified two-parameter scaled BFGS method with Yuan-Wei-Lu line search for unconstrained optimization ⋮ A Fast Algorithm for Maximum Likelihood Estimation of Mixture Proportions Using Sequential Quadratic Programming ⋮ IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate ⋮ An information based approach to stochastic control problems ⋮ A variation of Broyden class methods using Householder adaptive transforms ⋮ Efficient computation of derivatives for solving optimization problems in R and Python using SWIG-generated interfaces to ADOL-C ⋮ Stochastic proximal quasi-Newton methods for non-convex composite optimization ⋮ stochQN ⋮ Stochastic sub-sampled Newton method with variance reduction ⋮ Evolutionary prediction of nonstationary event popularity dynamics of Weibo social network using time-series characteristics ⋮ Analysis of the BFGS Method with Errors ⋮ An Efficient Stochastic Newton Algorithm for Parameter Estimation in Logistic Regressions ⋮ Combining stochastic adaptive cubic regularization with negative curvature for nonconvex optimization ⋮ A robust multi-batch L-BFGS method for machine learning ⋮ Sampled Tikhonov regularization for large linear inverse problems ⋮ Going Off the Grid: Iterative Model Selection for Biclustered Matrix Completion ⋮ An Inexact Variable Metric Proximal Point Algorithm for Generic Quasi-Newton Acceleration ⋮ An Inertial Newton Algorithm for Deep Learning ⋮ A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization ⋮ On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization ⋮ Generalized self-concordant functions: a recipe for Newton-type methods ⋮ Sampled limited memory methods for massive linear inverse problems ⋮ Kalman-Based Stochastic Gradient Method with Stop Condition and Insensitivity to Conditioning ⋮ SABRINA: a stochastic subspace majorization-minimization algorithm ⋮ A subsampling approach for Bayesian model selection ⋮ A globally convergent incremental Newton method ⋮ A Noise-Tolerant Quasi-Newton Algorithm for Unconstrained Optimization ⋮ A hybrid stochastic optimization framework for composite nonconvex optimization ⋮ LSOS: Line-search second-order stochastic optimization methods for nonconvex finite sums ⋮ Newton-like Method with Diagonal Correction for Distributed Optimization
Cites Work
- On stochastic gradient and subgradient methods with adaptive steplength sequences
- Sample size selection in optimization methods for machine learning
- A coordinate gradient descent method for nonsmooth separable minimization
- A stochastic approximation algorithm with step-size adaptation
- Stochastic simulation: Algorithms and analysis
- Global Convergence of Online Limited Memory BFGS
- On the Use of Stochastic Hessian Information in Optimization Methods for Machine Learning
- Robust Stochastic Approximation Approach to Stochastic Programming
- Numerical Optimization
- RES: Regularized Stochastic BFGS Algorithm
- A Stochastic Approximation Method