A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
DOI: 10.1007/s10107-021-01629-y · zbMATH Open: 1494.90061 · arXiv: 1910.09373 · OpenAlex: W3138544340 · MaRDI QID: Q2149551 · FDO: Q2149551
Authors: Minghan Yang, Andre Milzarek, Zaiwen Wen, Tong Zhang
Publication date: 29 June 2022
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1910.09373
Recommendations
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- A stochastic semismooth Newton method for nonsmooth nonconvex optimization
- Stochastic proximal quasi-Newton methods for non-convex composite optimization
- On stochastic and deterministic quasi-Newton methods for nonstrongly convex optimization: asymptotic convergence and rate analysis
- A stochastic quasi-Newton method for large-scale optimization
- A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization
- A quasi-Newton algorithm for nonconvex, nonsmooth optimization with global convergence guarantees
- The regularized stochastic Nesterov's accelerated quasi-Newton method with applications
- Nonsmooth optimization via quasi-Newton methods
- Stochastic subgradient method for quasi-convex optimization problems
Keywords: global convergence; stochastic approximation; nonsmooth stochastic optimization; stochastic higher order method; stochastic quasi-Newton scheme
MSC: Large-scale problems in mathematical programming (90C06); Nonconvex programming, global optimization (90C26); Stochastic programming (90C15); Methods of quasi-Newton type (90C53)
Cites Work
- LIBLINEAR: a library for large linear classification
- A fast algorithm for sparse reconstruction based on shrinkage, subspace optimization, and continuation
- The elements of statistical learning. Data mining, inference, and prediction
- An SR1/BFGS SQP algorithm for nonconvex nonlinear programs with block-diagonal Hessian matrix
- Adaptive subgradient methods for online learning and stochastic optimization
- Pattern recognition and machine learning.
- Convex analysis and monotone operator theory in Hilbert spaces
- On the limited memory BFGS method for large scale optimization
- A stochastic quasi-Newton method for large-scale optimization
- Gradient methods for minimizing composite functions
- Updating Quasi-Newton Matrices with Limited Storage
- A Stochastic Approximation Method
- Exact matrix completion via convex optimization
- Global convergence of online limited memory BFGS
- On the use of stochastic Hessian information in optimization methods for machine learning
- RES: Regularized Stochastic BFGS Algorithm
- Understanding machine learning. From theory to algorithms
- An improved GLMNET for L1-regularized logistic regression
- Proximal splitting methods in signal processing
- Signal Recovery by Proximal Forward-Backward Splitting
- A nonsmooth version of Newton's method
- Optimization with sparsity-inducing penalties
- Complexity of Variants of Tseng's Modified F-B Splitting and Korpelevich's Methods for Hemivariational Inequalities with Applications to Saddle-point and Convex Optimization Problems
- Trust Region Methods
- Convergence Analysis of Some Algorithms for Solving Nonsmooth Equations
- Proximité et dualité dans un espace hilbertien
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Online learning for matrix factorization and sparse coding
- The subgradient extragradient method for solving variational inequalities in Hilbert space
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
- An extension of Luque's growth condition
- A proximal stochastic gradient method with progressive variance reduction
- Stochastic dual coordinate ascent methods for regularized loss minimization
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
- A fast hybrid algorithm for large-scale \(l_{1}\)-regularized logistic regression
- Newton and Quasi-Newton Methods for a Class of Nonsmooth Equations and Related Problems
- Nonsmooth Equations: Motivation and Algorithms
- Proximal Newton-type methods for minimizing composite functions
- Stochastic nested variance reduction for nonconvex optimization
- Minimizing finite sums with the stochastic average gradient
- Deep learning: methods and applications
- Error bounds, quadratic growth, and linear convergence of proximal methods
- An extragradient-based alternating direction method for convex minimization
- On superlinear convergence of quasi-Newton methods for nonsmooth equations
- A parameterized Newton method and a quasi-Newton method for nonsmooth equations
- A globally and superlinearly convergent quasi-Newton method for general box constrained variational inequalities without smoothing approximation
- Probability
- Optimization methods for large-scale machine learning
- Extragradient method in optimization: convergence and complexity
- Forward-backward quasi-Newton methods for nonsmooth optimization problems
- A regularized semi-smooth Newton method with projection steps for composite convex programs
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- Stochastic proximal quasi-Newton methods for non-convex composite optimization
- Extragradient Method with Variance Reduction for Stochastic Variational Inequalities
- Forward-backward envelope for the sum of two nonconvex functions: further properties and nonmonotone linesearch algorithms
- Sub-sampled Newton methods
- Quadratic optimization with orthogonality constraint: explicit Łojasiewicz exponent and linear convergence of retraction-based line-search and stochastic variance-reduced gradient methods
- Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence
- Second-order stochastic optimization for machine learning in linear time
- Katyusha: the first direct acceleration of stochastic gradient methods
- An investigation of Newton-sketch and subsampled Newton methods
- Exact and inexact subsampled Newton methods for optimization
- QUIC: quadratic approximation for sparse inverse covariance estimation
- A stochastic semismooth Newton method for nonsmooth nonconvex optimization
- Newton-type methods for non-convex optimization under inexact Hessian information
- Stochastic model-based minimization of weakly convex functions
- Block stochastic gradient iteration for convex and nonconvex optimization
- IQN: an incremental quasi-Newton method with local superlinear convergence rate
- Finite-sum smooth optimization with SARAH
- Stochastic L-BFGS: Improved Convergence Rates and Practical Acceleration Strategies
- Utilizing second order information in minibatch stochastic variance reduced proximal iterations
Cited In (17)
- A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization
- Stochastic Gauss-Newton algorithm with STORM estimators for nonconvex composite optimization
- The regularized stochastic Nesterov's accelerated quasi-Newton method with applications
- An efficient ADAM-type algorithm with finite elements discretization technique for random elliptic optimal control problems
- A dual-based stochastic inexact algorithm for a class of stochastic nonsmooth convex composite problems
- A quasi-Newton approach to nonsmooth convex optimization problems in machine learning
- A non-monotone trust-region method with noisy oracles and additional sampling
- Riemannian Natural Gradient Methods
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- Sketch-based empirical natural gradient methods for deep learning
- A single timescale stochastic quasi-Newton method for stochastic optimization
- A stochastic semismooth Newton method for nonsmooth nonconvex optimization
- An overview of stochastic quasi-Newton methods for large-scale machine learning
- A semismooth Newton stochastic proximal point algorithm with variance reduction
- A proximal stochastic quasi-Newton algorithm with dynamical sampling and stochastic line search
- On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization
- A nonrandom variational approach to stochastic linear quadratic Gaussian optimization involving fractional noises (FLQG)