Global Convergence of Online Limited Memory BFGS
From MaRDI portal
Publication: 2788401
zbMath: 1351.90124 · arXiv: 1409.2045 · MaRDI QID: Q2788401
Alejandro Ribeiro, Aryan Mokhtari
Publication date: 19 February 2016
Full work available at URL: https://arxiv.org/abs/1409.2045
Numerical optimization and variational techniques (65K10) · Learning and adaptive systems in artificial intelligence (68T05) · Stochastic programming (90C15)
Related Items (26)
An adaptive Hessian approximated stochastic gradient MCMC method
Quasi-Newton methods for machine learning: forget the past, just sample
Limited-memory BFGS with displacement aggregation
A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
QNG: A Quasi-Natural Gradient Method for Large-Scale Statistical Learning
A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization
Unnamed Item
Unnamed Item
An overview of stochastic quasi-Newton methods for large-scale machine learning
On Stochastic and Deterministic Quasi-Newton Methods for Nonstrongly Convex Optimization: Asymptotic Convergence and Rate Analysis
A single timescale stochastic quasi-Newton method for stochastic optimization
Secant penalized BFGS: a noise robust quasi-Newton method via penalizing the secant condition
Spectral projected gradient method for stochastic optimization
Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate
A Stochastic Quasi-Newton Method for Large-Scale Optimization
Redundancy Techniques for Straggler Mitigation in Distributed Optimization and Learning
A robust multi-batch L-BFGS method for machine learning
Sampled Tikhonov regularization for large linear inverse problems
An Inexact Variable Metric Proximal Point Algorithm for Generic Quasi-Newton Acceleration
Solving generalized inverse eigenvalue problems via L-BFGS-B method
Sampled limited memory methods for massive linear inverse problems
Unnamed Item
Unnamed Item
LSOS: Line-search second-order stochastic optimization methods for nonconvex finite sums
Newton-like Method with Diagonal Correction for Distributed Optimization