Stochastic proximal quasi-Newton methods for non-convex composite optimization
DOI: 10.1080/10556788.2018.1471141
OpenAlex: W2804009132
Wikidata: Q129821591 (Scholia: Q129821591)
MaRDI QID: Q5198046
Xiao Wang, Ya-Xiang Yuan, Xiao-yu Wang
Publication date: 2 October 2019
Published in: Optimization Methods and Software
Full work available at URL: https://doi.org/10.1080/10556788.2018.1471141
Keywords: complexity bound; stochastic gradient; symmetric rank one method; non-convex composite optimization; Polyak-Łojasiewicz (PL) inequality; rank one proximity operator; stochastic variance reduction gradient
MSC classification: Numerical optimization and variational techniques (65K10); Applications of operator theory in optimization, convex analysis, mathematical programming, economics (47N10)
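The keywords above name the paper's main algorithmic ingredients: a proximal step for the nonsmooth term, a stochastic variance-reduced (SVRG-style) gradient for the smooth finite-sum term, and a symmetric rank-one (SR1) quasi-Newton metric. The sketch below is a minimal illustration of how one such iteration could fit together; it is not the authors' exact algorithm. It assumes an \(\ell_1\) regularizer and replaces the SR1/rank-one proximity machinery with a diagonal metric so that the scaled proximal map has a closed form.

```python
import numpy as np

def prox_l1_diag(z, D, lam, step):
    """Proximal map of lam*||x||_1 in a diagonal metric D > 0:
    argmin_x lam*||x||_1 + (1/(2*step)) * (x - z)^T diag(D) (x - z),
    solved coordinate-wise by soft-thresholding."""
    thresh = step * lam / D
    return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)

def svrg_gradient(grad_i, x, x_ref, g_ref, i):
    """SVRG-style variance-reduced stochastic gradient:
    grad f_i(x) - grad f_i(x_ref) + full gradient at the reference point x_ref."""
    return grad_i(x, i) - grad_i(x_ref, i) + g_ref

def stochastic_prox_qn_step(x, grad_i, x_ref, g_ref, i, D, lam, step):
    """One illustrative stochastic proximal quasi-Newton step (hypothetical
    simplification: diagonal metric D standing in for an SR1 approximation):
    a metric-scaled gradient move followed by the scaled prox of lam*||.||_1."""
    v = svrg_gradient(grad_i, x, x_ref, g_ref, i)
    z = x - step * v / D          # quasi-Newton-scaled gradient step
    return prox_l1_diag(z, D, lam, step)
```

In the paper's setting the metric comes from limited-memory SR1 updates and the scaled proximal subproblem is handled via a rank-one proximity operator; the diagonal choice here is only to keep the example self-contained and exactly solvable.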
Cites Work
- A Stochastic Quasi-Newton Method for Large-Scale Optimization
- An inexact successive quadratic approximation method for L-1 regularized optimization
- A family of second-order methods for convex \(\ell_1\)-regularized optimization
- Minimizing finite sums with the stochastic average gradient
- A minimization method for the sum of a convex function and a continuously differentiable function
- Convergence of quasi-Newton matrices generated by the symmetric rank one update
- Updating the self-scaling symmetric rank one algorithm with limited memory for large-scale unconstrained optimization
- Proximal quasi-Newton methods for regularized convex optimization with linear and accelerated sublinear convergence rates
- Pathwise coordinate optimization
- Proximal Newton-Type Methods for Minimizing Composite Functions
- Algorithms for nonlinear constraints that use Lagrangian functions
- RES: Regularized Stochastic BFGS Algorithm
- A Theoretical and Experimental Study of the Symmetric Rank-One Update
- Analysis of a Symmetric Rank-One Trust Region Method
- A new approach to symmetric rank-one updating
- An investigation of Newton-Sketch and subsampled Newton methods
- Tackling Box-Constrained Optimization via a New Projected Quasi-Newton Approach
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- IMRO: A Proximal Quasi-Newton Method for Solving \(\ell_1\)-Regularized Least Squares Problems
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate
- A Stochastic Approximation Method
- A modified rank one update which converges \(Q\)-superlinearly
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization