On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization
From MaRDI portal
Publication:2082285
DOI: 10.1007/s11425-020-1865-1
zbMath: 1497.65098
OpenAlex: W4220920445
MaRDI QID: Q2082285
Michael Ulbrich, Andre Milzarek, Xiantao Xiao, ZaiWen Wen
Publication date: 4 October 2022
Published in: Science China. Mathematics
Full work available at URL: https://doi.org/10.1007/s11425-020-1865-1
Keywords: stochastic approximation; local convergence; semismooth Newton method; nonsmooth stochastic optimization; stochastic second-order information
MSC classifications: Numerical mathematical programming methods (65K05); Large-scale problems in mathematical programming (90C06); Nonconvex programming, global optimization (90C26); Stochastic programming (90C15)
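To illustrate the keyword "semismooth Newton method" in the nonsmooth-optimization setting this record covers, the sketch below applies a deterministic semismooth Newton step to the natural (proximal) residual of an \(\ell_1\)-regularized least-squares problem. This is a minimal, hedged illustration of the general technique only, not the stochastic algorithm analyzed in the paper; the problem data `A`, `b`, and `lam` are invented for the example.

```python
import numpy as np

def prox_l1(z, lam):
    """Proximal operator of lam*||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def semismooth_newton(A, b, lam, x0, tol=1e-10, max_iter=50):
    """Semismooth Newton on the residual F(x) = x - prox_l1(x - grad f(x), lam)
    for min_x 0.5*||A x - b||^2 + lam*||x||_1 (illustrative sketch)."""
    n = x0.size
    H = A.T @ A                          # Hessian of the smooth part f
    x = x0.copy()
    for _ in range(max_iter):
        g = A.T @ (A @ x - b)            # gradient of f at x
        z = x - g
        F = x - prox_l1(z, lam)          # F(x) = 0 iff x is stationary
        if np.linalg.norm(F) < tol:
            break
        # One element of the generalized (Clarke) Jacobian of F:
        # J = I - D (I - H), with D = diag(1{|z_i| > lam}),
        # using that prox_l1 is piecewise affine with slope 0 or 1.
        d = (np.abs(z) > lam).astype(float)
        J = np.eye(n) - d[:, None] * (np.eye(n) - H)
        x = x + np.linalg.solve(J, -F)   # semismooth Newton step
    return x
```

For well-conditioned data the iteration typically identifies the active set and then converges in very few steps, which is the local fast-convergence behavior that semismooth Newton theory (and its stochastic extension in this paper) formalizes.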
Related Items
Uses Software
Cites Work
- A Stochastic Quasi-Newton Method for Large-Scale Optimization
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- User-friendly tail bounds for sums of random matrices
- Sample size selection in optimization methods for machine learning
- Generalized Hessian matrix and second-order optimality conditions for problems with \(C^{1,1}\) data
- A regularized semi-smooth Newton method with projection steps for composite convex programs
- Sub-sampled Newton methods
- Newton-type methods for non-convex optimization under inexact Hessian information
- A nonsmooth version of Newton's method
- Semismoothness of solutions to generalized equations and the Moreau-Yosida regularization
- Stochastic simulation: Algorithms and analysis
- Feature Article: Optimization for simulation: Theory vs. Practice
- Optimization with Sparsity-Inducing Penalties
- Proximal Newton-Type Methods for Minimizing Composite Functions
- Block Stochastic Gradient Iteration for Convex and Nonconvex Optimization
- Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization
- Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence
- On the Use of Stochastic Hessian Information in Optimization Methods for Machine Learning
- Nonsmooth Equations: Motivation and Algorithms
- Deep Learning: Methods and Applications
- Computing a Trust Region Step
- Probability with Martingales
- Optimization Methods for Large-Scale Machine Learning
- Parallel Stochastic Newton Method
- Convergence Analysis of Some Algorithms for Solving Nonsmooth Equations
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- A Semismooth Newton Method with Multidimensional Filter Globalization for $l_1$-Optimization
- A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Signal Recovery by Proximal Forward-Backward Splitting
- Understanding Machine Learning
- Proximité et dualité dans un espace hilbertien
- Extragradient Method with Variance Reduction for Stochastic Variational Inequalities
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- A Stochastic Approximation Method
- Exact and inexact subsampled Newton methods for optimization
- A basic course in probability theory
- Convex analysis and monotone operator theory in Hilbert spaces
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization