Accelerated gradient methods for nonconvex nonlinear and stochastic programming
From MaRDI portal
Abstract: In this paper, we generalize the well-known Nesterov's accelerated gradient (AG) method, originally designed for convex smooth optimization, to solve nonconvex and possibly stochastic optimization problems. We demonstrate that, with a properly specified stepsize policy, the AG method exhibits the best known rate of convergence for solving general nonconvex smooth optimization problems using only first-order information, matching that of the gradient descent method. We then consider an important class of composite optimization problems and show that the AG method can solve them uniformly, i.e., using the same aggressive stepsize policy as in the convex case, even if the problem turns out to be nonconvex. We demonstrate that the AG method exhibits an optimal rate of convergence if the composite problem is convex, and improves the best known rate of convergence if the problem is nonconvex. Based on the AG method, we also present new nonconvex stochastic approximation methods and show that they improve several existing rates of convergence for nonconvex stochastic optimization. To the best of our knowledge, this is the first time in the literature that convergence of the AG method has been established for solving nonconvex nonlinear programming.
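The abstract refers to the classical accelerated gradient iteration applied to a smooth, possibly nonconvex objective. The paper's own stepsize policies are not reproduced here; the following is a minimal sketch of the standard Nesterov AG scheme with the common momentum weight (k-1)/(k+2) and a constant stepsize 1/L, where L bounds the Lipschitz constant of the gradient (the function `accelerated_gradient` and the example objective are illustrative, not from the paper):

```python
def accelerated_gradient(grad, x0, L, iters=5000):
    """Nesterov-style accelerated gradient sketch with constant stepsize 1/L."""
    x_prev = x = x0
    for k in range(1, iters + 1):
        # Extrapolation (momentum) step with the classical weight (k-1)/(k+2).
        y = x + (k - 1) / (k + 2) * (x - x_prev)
        # Gradient step taken from the extrapolated point.
        x_prev, x = x, y - grad(y) / L
    return x

# Example on a smooth *nonconvex* objective f(x) = x^4 - 3x^2.
# For nonconvex f, the guarantee discussed in the paper is convergence to a
# stationary point (small gradient norm), not to a global minimum.
grad = lambda x: 4 * x**3 - 6 * x
x_star = accelerated_gradient(grad, x0=0.9, L=30.0)
```

Started at x0 = 0.9, the iterates settle near the stationary point sqrt(1.5) of this objective, illustrating the stationarity (rather than global-optimality) guarantee of first-order methods in the nonconvex setting.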
Recommendations
- A note on the accelerated proximal gradient method for nonconvex optimization
- Accelerated methods for nonconvex optimization
- Gradient methods for minimizing composite functions
- Generalized uniformly optimal methods for nonlinear programming
- Generalized Nesterov's accelerated proximal gradient algorithms with convergence rate of order \(o(1/k^2)\)
Cites work
- scientific article; zbMATH DE number 3790208
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A Stochastic Approximation Method
- A proximal method for composite minimization
- Acceleration of Stochastic Approximation by Averaging
- An optimal method for stochastic composite optimization
- Complexity of unconstrained \(L_2 - L_p\) minimization
- Gradient methods for minimizing composite functions
- Introduction to Stochastic Search and Optimization
- Introductory lectures on convex optimization. A basic course.
- Iteration-complexity of first-order penalty methods for convex programming
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
- On the complexity of steepest descent, Newton's and regularized Newton's methods for nonconvex unconstrained optimization problems
- Online learning for matrix factorization and sparse coding
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization. II: Shrinking procedures and optimal algorithms
- Optimization for simulation: theory vs. practice
- Recursive Trust-Region Methods for Multiscale Nonlinear Optimization
- Robust Stochastic Approximation Approach to Stochastic Programming
- Smooth minimization of non-smooth functions
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Stochastic block mirror descent methods for nonsmooth and stochastic optimization
- Stochastic simulation: Algorithms and analysis
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
Cited in
- Asynchronous variance-reduced block schemes for composite non-convex stochastic optimization: block-specific steplengths and adapted batch-sizes
- Inertial projected gradient method for large-scale topology optimization
- Accelerated Stochastic Algorithms for Nonconvex Finite-Sum and Multiblock Optimization
- An inexact Riemannian proximal gradient method
- A Diffusion Approximation Theory of Momentum Stochastic Gradient Descent in Nonconvex Optimization
- Momentum-based accelerated mirror descent stochastic approximation for robust topology optimization under stochastic loads
- Complexity of a quadratic penalty accelerated inexact proximal point method for solving linearly constrained nonconvex composite programs
- A nonmonotone approximate sequence algorithm for unconstrained nonlinear optimization
- A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization
- An adaptive Polyak heavy-ball method
- Stochastic Gauss-Newton algorithm with STORM estimators for nonconvex composite optimization
- Algorithms for stochastic optimization with function or expectation constraints
- Riemannian proximal gradient methods
- Extrapolated smoothing descent algorithm for constrained nonconvex and nonsmooth composite problems
- An Accelerated Inexact Proximal Point Method for Solving Nonconvex-Concave Min-Max Problems
- Weakly-convex-concave min-max optimization: provable algorithms and applications in machine learning
- On stochastic accelerated gradient with convergence rate of regression learning
- Mean curvature flow for generating discrete surfaces with piecewise constant mean curvatures
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- On stochastic accelerated gradient with convergence rate
- Stochastic multilevel composition optimization algorithms with level-independent convergence rates
- Complexity analysis based on tuning the viscosity parameter of the Su-Boyd-Candès inertial gradient dynamics
- An inertial projection and contraction algorithm for pseudomonotone variational inequalities without Lipschitz continuity
- Stochastic nested variance reduction for nonconvex optimization
- Complexity of proximal augmented Lagrangian for nonconvex optimization with nonlinear equality constraints
- Smoothed Variable Sample-Size Accelerated Proximal Methods for Nonsmooth Stochastic Convex Programs
- Variational representations and neural network estimation of Rényi divergences
- Global convergence of stochastic gradient Hamiltonian Monte Carlo for nonconvex stochastic optimization: nonasymptotic performance bounds and momentum-based acceleration
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
- scientific article; zbMATH DE number 7306906
- scientific article; zbMATH DE number 7656028
- A penalty method for rank minimization problems in symmetric matrices
- Finding the nearest positive-real system
- Robust and sparse regression in generalized linear model by stochastic optimization
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
- On the information-adaptive variants of the ADMM: an iteration complexity perspective
- A non-convex piecewise quadratic approximation of \(\ell_0\) regularization: theory and accelerated algorithm
- Stopping rules for gradient methods for non-convex problems with additive noise in gradient
- Optimization-based calibration of simulation input models
- Portfolio selection with the effect of systematic risk diversification: formulation and accelerated gradient algorithm
- A Systematic Approach to Lyapunov Analyses of Continuous-Time Models in Convex Optimization
- On the Generalized Langevin Equation for Simulated Annealing
- Nonlinear conjugate gradient for smooth convex functions
- An overview on recent machine learning techniques for port Hamiltonian systems
- Accelerated methods with fastly vanishing subgradients for structured non-smooth minimization
- Open problem: Iterative schemes for stochastic optimization: convergence statements and limit theorems
- A fast iterative PDE-based algorithm for feedback controls of nonsmooth mean-field control problems
- Another look at the fast iterative shrinkage/thresholding algorithm (FISTA)
- Generalizing the optimized gradient method for smooth convex minimization
- Average curvature FISTA for nonconvex smooth composite optimization problems
- Accelerated methods for nonconvex optimization
- Primal-dual optimization algorithms over Riemannian manifolds: an iteration complexity analysis
- A note on the accelerated proximal gradient method for nonconvex optimization
- Dynamic stochastic approximation for multi-stage stochastic optimization
- Inertial proximal gradient methods with Bregman regularization for a class of nonconvex optimization problems
- Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
- A proximal augmented Lagrangian method for linearly constrained nonconvex composite optimization problems
- Approximating the nearest stable discrete-time system
- Asynchronous schemes for stochastic and misspecified potential games and nonconvex optimization
- An optimal randomized incremental gradient method
- Accelerated Optimization in the PDE Framework: Formulations for the Manifold of Diffeomorphisms
- Block stochastic gradient iteration for convex and nonconvex optimization
- Adaptive FISTA for Nonconvex Optimization
- A smoothing SQP framework for a class of composite \(L_q\) minimization over polyhedron
- Accelerated Optimization in the PDE Framework Formulations for the Active Contour Case
- Accelerated first-order primal-dual proximal methods for linearly constrained composite convex programming
- scientific article; zbMATH DE number 6719722
- Complexity-optimal and parameter-free first-order methods for finding stationary points of composite optimization problems
- On stationary-point hitting time and ergodicity of stochastic gradient Langevin dynamics
- An accelerated directional derivative method for smooth stochastic convex optimization
- Robust accelerated gradient methods for smooth strongly convex functions
- Variable smoothing for weakly convex composite functions
- Primal-dual accelerated gradient methods with small-dimensional relaxation oracle
- Accelerated stochastic variance reduction for a class of convex optimization problems
- Provable accelerated gradient method for nonconvex low rank optimization
- Stochastic conditional gradient++: (Non)convex minimization and continuous submodular maximization
- Accelerated inexact composite gradient methods for nonconvex spectral optimization problems
- Accelerated gradient methods combining Tikhonov regularization with geometric damping driven by the Hessian
- Iteratively reweighted \(\ell _1\) algorithms with extrapolation
- scientific article; zbMATH DE number 7370615
- Nonlinear Gradient Mappings and Stochastic Optimization: A General Framework with Applications to Heavy-Tail Noise
- Proximal gradient method with extrapolation and line search for a class of non-convex and non-smooth problems
- Accelerating variance-reduced stochastic gradient methods
- Efficiency of minimizing compositions of convex functions and smooth maps
- Stochastic linearized generalized alternating direction method of multipliers: expected convergence rates and large deviation properties
- Efficient learning with a family of nonconvex regularizers by redistributing nonconvexity
- Random gradient extrapolation for distributed and stochastic optimization
- An average curvature accelerated composite gradient method for nonconvex smooth composite optimization problems
- A FISTA-type accelerated gradient algorithm for solving smooth nonconvex composite optimization problems
- Two stochastic optimization algorithms for convex optimization with fixed point constraints
- Distributed stochastic inertial-accelerated methods with delayed derivatives for nonconvex problems
- SISAL revisited
- A Single Timescale Stochastic Approximation Method for Nested Stochastic Optimization
- On the convergence of mirror descent beyond stochastic convex programming
- Accelerated randomized mirror descent algorithms for composite non-strongly convex optimization
- Convergence of the Momentum Method for Semialgebraic Functions with Locally Lipschitz Gradients
- Factor-\(\sqrt{2}\) acceleration of accelerated gradient methods
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- scientific article; zbMATH DE number 7306890
This page was built for publication: Accelerated gradient methods for nonconvex nonlinear and stochastic programming (MaRDI item Q263185)