Inexact proximal stochastic second-order methods for nonconvex composite optimization
DOI: 10.1080/10556788.2020.1713128 · zbMath: 1454.90066 · OpenAlex: W3000160952 · Wikidata: Q126343631 · Scholia: Q126343631 · MaRDI QID: Q5135256
Publication date: 19 November 2020
Published in: Optimization Methods and Software
Full work available at URL: https://doi.org/10.1080/10556788.2020.1713128
Keywords: complexity; variance reduction; nonconvex; second-order approximation; stochastic gradient; inexact subproblem solution; (weakly) smooth function; proximal Polyak-Łojasiewicz (PL) inequality
MSC classification: Numerical mathematical programming methods (65K05); Nonconvex programming, global optimization (90C26)
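As an illustrative sketch only (not the paper's algorithm), the keywords above combine proximal steps with stochastic gradients; a minimal proximal stochastic gradient step for an \(\ell_1\)-regularized composite objective, with a hypothetical mini-batch gradient oracle, might look like:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_sgd_step(x, grad_batch, step, lam):
    # One proximal stochastic gradient step:
    #   x+ = prox_{step * lam * ||.||_1}(x - step * grad_batch(x))
    return soft_threshold(x - step * grad_batch(x), step * lam)

# Toy least-squares example (all names and sizes here are illustrative):
# minimize 0.5 * ||A x - b||^2 / n + lam * ||x||_1 via mini-batch gradients.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 10))
x_true = (np.arange(10) < 3).astype(float)   # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(100)

def grad_batch(x, batch=32):
    # Stochastic gradient of the smooth part on a random mini-batch.
    idx = rng.choice(100, size=batch, replace=False)
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / batch

x = np.zeros(10)
for _ in range(500):
    x = prox_sgd_step(x, grad_batch, step=0.05, lam=0.1)
```

Second-order variants replace the scaled-identity model implicit in this step with a (quasi-Newton) quadratic model of the smooth part, and variance reduction replaces `grad_batch` with a corrected estimator; both are beyond this sketch.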
Cites Work
- An inexact successive quadratic approximation method for L-1 regularized optimization
- A family of second-order methods for convex \(\ell _1\)-regularized optimization
- New results on subgradient methods for strongly convex optimization problems with a unified analysis
- Inexact proximal stochastic gradient method for convex composite optimization
- Conditional gradient type methods for composite nonlinear and stochastic optimization
- Proximal quasi-Newton methods for regularized convex optimization with linear and accelerated sublinear convergence rates
- Complexity bounds for primal-dual methods minimizing the model of objective function
- Accelerated first-order methods for large-scale convex optimization: nearly optimal complexity under strong convexity
- Generalized uniformly optimal methods for nonlinear programming
- Pathwise coordinate optimization
- Bundle-level type methods uniformly optimal for smooth and nonsmooth convex optimization
- Proximal Newton-Type Methods for Minimizing Composite Functions
- Optimal methods of smooth convex minimization
- Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
- Stochastic proximal quasi-Newton methods for non-convex composite optimization
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- IMRO: A Proximal Quasi-Newton Method for Solving $\ell_1$-Regularized Least Squares Problems
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization