A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
MaRDI portal publication Q2149551
Abstract: In this paper, a novel stochastic extra-step quasi-Newton method is developed to solve a class of nonsmooth nonconvex composite optimization problems. We assume that the gradient of the smooth part of the objective function can only be approximated by stochastic oracles. The proposed method combines general stochastic higher-order steps, derived from an underlying proximal-type fixed-point equation, with additional stochastic proximal gradient steps to guarantee convergence. Based on suitable bounds on the step sizes, we establish global convergence to stationary points in expectation, and an extension of the approach using variance reduction techniques is discussed. Motivated by large-scale and big data applications, we investigate a stochastic coordinate-type quasi-Newton scheme that allows us to generate cheap and tractable stochastic higher-order directions. Finally, the proposed algorithm is tested on large-scale logistic regression and deep learning problems, and it is shown to compare favorably with other state-of-the-art methods.
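For orientation, the sketch below illustrates one standard way such an extra-step scheme can be written. The abstract gives no formulas, so the notation (smooth part f, nonsmooth part \varphi, proximal parameter \lambda, quasi-Newton matrix W_k, step size \alpha_k) is an illustrative assumption, not the paper's own presentation.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Illustrative sketch only; notation assumed, not taken from the paper.
% Stationary points of the composite objective f + \varphi can be characterized
% as zeros of the proximal (natural residual) fixed-point map
\[
  F_\lambda(x) \;=\; x - \operatorname{prox}_{\lambda\varphi}\bigl(x - \lambda \nabla f(x)\bigr).
\]
% An extra-step iteration of the kind described in the abstract first takes a
% quasi-Newton step on a stochastic approximation of F_\lambda and then a
% stochastic proximal gradient step that safeguards global convergence:
\[
  z^k = x^k - W_k\, \widehat{F}_\lambda(x^k),
  \qquad
  x^{k+1} = \operatorname{prox}_{\alpha_k \varphi}\bigl(z^k - \alpha_k\, \widehat{\nabla} f(z^k)\bigr),
\]
% where \widehat{F}_\lambda and \widehat{\nabla} f denote stochastic oracle
% estimates and \alpha_k is a step size satisfying suitable bounds.
\end{document}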
Recommendations
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- A stochastic semismooth Newton method for nonsmooth nonconvex optimization
- Stochastic proximal quasi-Newton methods for non-convex composite optimization
- On stochastic and deterministic quasi-Newton methods for nonstrongly convex optimization: asymptotic convergence and rate analysis
- A stochastic quasi-Newton method for large-scale optimization
- A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization
- A quasi-Newton algorithm for nonconvex, nonsmooth optimization with global convergence guarantees
- The regularized stochastic Nesterov's accelerated quasi-Newton method with applications
- Nonsmooth optimization via quasi-Newton methods
- Stochastic subgradient method for quasi-convex optimization problems
Cites work
- scientific article; zbMATH DE number 7499215 (no title available)
- scientific article; zbMATH DE number 3534286 (no title available)
- scientific article; zbMATH DE number 3449561 (no title available)
- scientific article; zbMATH DE number 823069 (no title available)
- scientific article; zbMATH DE number 7255141 (no title available)
- scientific article; zbMATH DE number 6253925 (no title available)
- A Stochastic Approximation Method
- A fast algorithm for sparse reconstruction based on shrinkage, subspace optimization, and continuation
- A fast hybrid algorithm for large-scale \(l_{1}\)-regularized logistic regression
- A globally and superlinearly convergent quasi-Newton method for general box constrained variational inequalities without smoothing approximation
- A nonsmooth version of Newton's method
- A parameterized Newton method and a quasi-Newton method for nonsmooth equations
- A proximal stochastic gradient method with progressive variance reduction
- A regularized semi-smooth Newton method with projection steps for composite convex programs
- A stochastic quasi-Newton method for large-scale optimization
- A stochastic semismooth Newton method for nonsmooth nonconvex optimization
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
- Adaptive subgradient methods for online learning and stochastic optimization
- An SR1/BFGS SQP algorithm for nonconvex nonlinear programs with block-diagonal Hessian matrix
- An extension of Luque's growth condition
- An extragradient-based alternating direction method for convex minimization
- An improved GLMNET for L1-regularized logistic regression
- An investigation of Newton-sketch and subsampled Newton methods
- Block stochastic gradient iteration for convex and nonconvex optimization
- Complexity of Variants of Tseng's Modified F-B Splitting and Korpelevich's Methods for Hemivariational Inequalities with Applications to Saddle-point and Convex Optimization Problems
- Convergence Analysis of Some Algorithms for Solving Nonsmooth Equations
- Convex analysis and monotone operator theory in Hilbert spaces
- Deep learning: methods and applications
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Error bounds, quadratic growth, and linear convergence of proximal methods
- Exact and inexact subsampled Newton methods for optimization
- Exact matrix completion via convex optimization
- Extragradient Method with Variance Reduction for Stochastic Variational Inequalities
- Extragradient method in optimization: convergence and complexity
- Finite-sum smooth optimization with SARAH
- Forward-backward envelope for the sum of two nonconvex functions: further properties and nonmonotone linesearch algorithms
- Forward-backward quasi-Newton methods for nonsmooth optimization problems
- Global convergence of online limited memory BFGS
- Gradient methods for minimizing composite functions
- IQN: an incremental quasi-Newton method with local superlinear convergence rate
- Katyusha: the first direct acceleration of stochastic gradient methods
- LIBLINEAR: a library for large linear classification
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
- Minimizing finite sums with the stochastic average gradient
- Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence
- Newton and Quasi-Newton Methods for a Class of Nonsmooth Equations and Related Problems
- Newton-type methods for non-convex optimization under inexact Hessian information
- Nonsmooth Equations: Motivation and Algorithms
- On superlinear convergence of quasi-Newton methods for nonsmooth equations
- On the limited memory BFGS method for large scale optimization
- On the use of stochastic Hessian information in optimization methods for machine learning
- Online learning for matrix factorization and sparse coding
- Optimization methods for large-scale machine learning
- Optimization with sparsity-inducing penalties
- Pattern recognition and machine learning.
- Probability
- Proximal Newton-type methods for minimizing composite functions
- Proximal splitting methods in signal processing
- Proximité et dualité dans un espace hilbertien
- QUIC: quadratic approximation for sparse inverse covariance estimation
- Quadratic optimization with orthogonality constraint: explicit Łojasiewicz exponent and linear convergence of retraction-based line-search and stochastic variance-reduced gradient methods
- RES: Regularized Stochastic BFGS Algorithm
- Second-order stochastic optimization for machine learning in linear time
- Signal Recovery by Proximal Forward-Backward Splitting
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Stochastic L-BFGS: Improved Convergence Rates and Practical Acceleration Strategies
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- Stochastic dual coordinate ascent methods for regularized loss minimization
- Stochastic model-based minimization of weakly convex functions
- Stochastic nested variance reduction for nonconvex optimization
- Stochastic proximal quasi-Newton methods for non-convex composite optimization
- Sub-sampled Newton methods
- The elements of statistical learning. Data mining, inference, and prediction
- The subgradient extragradient method for solving variational inequalities in Hilbert space
- Trust Region Methods
- Understanding machine learning. From theory to algorithms
- Updating Quasi-Newton Matrices with Limited Storage
- Utilizing second order information in minibatch stochastic variance reduced proximal iterations
Cited in (17 documents)
- A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization
- Stochastic Gauss-Newton algorithm with STORM estimators for nonconvex composite optimization
- The regularized stochastic Nesterov's accelerated quasi-Newton method with applications
- A quasi-Newton approach to nonsmooth convex optimization problems in machine learning
- An efficient ADAM-type algorithm with finite elements discretization technique for random elliptic optimal control problems
- A dual-based stochastic inexact algorithm for a class of stochastic nonsmooth convex composite problems
- A non-monotone trust-region method with noisy oracles and additional sampling
- Riemannian Natural Gradient Methods
- Sketch-based empirical natural gradient methods for deep learning
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- A single timescale stochastic quasi-Newton method for stochastic optimization
- A stochastic semismooth Newton method for nonsmooth nonconvex optimization
- An overview of stochastic quasi-Newton methods for large-scale machine learning
- A semismooth Newton stochastic proximal point algorithm with variance reduction
- A proximal stochastic quasi-Newton algorithm with dynamical sampling and stochastic line search
- On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization
- A nonrandom variational approach to stochastic linear quadratic Gaussian optimization involving fractional noises (FLQG)