Accelerated gradient methods for nonconvex nonlinear and stochastic programming
DOI: 10.1007/s10107-015-0871-8 · zbMATH Open: 1335.62121 · arXiv: 1310.3787 · OpenAlex: W1987083649 · MaRDI QID: Q263185
Publication date: 4 April 2016
Published in: Mathematical Programming. Series A. Series B
Abstract: In this paper, we generalize Nesterov's well-known accelerated gradient (AG) method, originally designed for convex smooth optimization, to solve nonconvex and possibly stochastic optimization problems. We demonstrate that, by properly specifying the stepsize policy, the AG method exhibits the best known rate of convergence for solving general nonconvex smooth optimization problems using only first-order information, similarly to the gradient descent method. We then consider an important class of composite optimization problems and show that the AG method can solve them uniformly, i.e., by using the same aggressive stepsize policy as in the convex case, even if the problem turns out to be nonconvex. We demonstrate that the AG method exhibits an optimal rate of convergence if the composite problem is convex, and improves the best known rate of convergence if the problem is nonconvex. Based on the AG method, we also present new nonconvex stochastic approximation methods and show that they improve several existing rates of convergence for nonconvex stochastic optimization. To the best of our knowledge, this is the first time in the literature that the convergence of the AG method has been established for solving nonconvex nonlinear programming problems.
Full work available at URL: https://arxiv.org/abs/1310.3787
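For context, the benchmark for first-order methods on nonconvex smooth problems is to drive the smallest squared gradient norm observed over N iterations to zero at a rate of O(1/N), while the optimal rate for convex smooth problems is O(1/N²) in the function-value gap. The sketch below is a minimal illustration of a Nesterov-type accelerated gradient loop that maintains three coupled sequences (a running iterate, an extrapolated "middle" point at which the gradient is evaluated, and an aggregated iterate) for a smooth, possibly nonconvex objective with an L-Lipschitz gradient. The function name `ag_nonconvex` and the concrete parameter choices (alpha_k = 2/(k+1), beta_k = 1/(2L), lambda_k = beta_k) are illustrative assumptions, not the exact stepsize policy analyzed in the paper.

```python
import numpy as np

def ag_nonconvex(grad, x0, L, num_iters=1000):
    """Minimal sketch of a three-sequence accelerated gradient (AG) loop
    for a smooth, possibly nonconvex objective with L-Lipschitz gradient.

    The parameter choices below (alpha_k = 2/(k+1), beta_k = 1/(2L),
    lambda_k = beta_k) are illustrative assumptions; they are not the
    exact stepsize policy analyzed in the paper.
    """
    x = x_ag = np.asarray(x0, dtype=float)       # running and aggregated iterates
    for k in range(1, num_iters + 1):
        alpha = 2.0 / (k + 1)                    # extrapolation weight
        beta = 1.0 / (2.0 * L)                   # "short" step for the aggregated iterate
        lam = beta                               # conservative "long" step for the running iterate
        x_md = (1.0 - alpha) * x_ag + alpha * x  # middle (extrapolated) point
        g = grad(x_md)                           # the only gradient evaluation per iteration
        x = x - lam * g                          # update the running sequence
        x_ag = x_md - beta * g                   # update the aggregated sequence
    return x_ag

# Usage sketch on a simple nonconvex objective f(t) = (t^2 - 1)^2,
# whose gradient is 4 t (t^2 - 1); L = 20 is a rough local Lipschitz bound.
quartic_grad = lambda t: 4.0 * t * (t ** 2 - 1.0)
print(ag_nonconvex(quartic_grad, x0=[0.5], L=20.0, num_iters=2000))
```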
Recommendations
- A note on the accelerated proximal gradient method for nonconvex optimization
- Accelerated methods for nonconvex optimization
- Gradient methods for minimizing composite functions
- Generalized uniformly optimal methods for nonlinear programming
- Generalized Nesterov's accelerated proximal gradient algorithms with convergence rate of order \(o(1/k^2)\)
MSC classification: Convex programming (90C25) · Stochastic approximation (62L20) · Analysis of algorithms and problem complexity (68Q25) · Stochastic programming (90C15)
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Smooth minimization of non-smooth functions
- Introductory lectures on convex optimization. A basic course.
- Gradient methods for minimizing composite functions
- Acceleration of Stochastic Approximation by Averaging
- A Stochastic Approximation Method
- Stochastic simulation: Algorithms and analysis
- Robust Stochastic Approximation Approach to Stochastic Programming
- Introduction to Stochastic Search and Optimization
- Iteration-complexity of first-order penalty methods for convex programming
- Complexity of unconstrained \(L_2 - L_p\) minimization
- Optimization for simulation: theory vs. practice
- Online learning for matrix factorization and sparse coding
- Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization
- A proximal method for composite minimization
- On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems
- Recursive Trust-Region Methods for Multiscale Nonlinear Optimization
- An optimal method for stochastic composite optimization
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
Cited In (only the first 100 citing works are shown)
- A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization
- An Accelerated Inexact Proximal Point Method for Solving Nonconvex-Concave Min-Max Problems
- Algorithms for stochastic optimization with function or expectation constraints
- Riemannian proximal gradient methods
- Extrapolated smoothing descent algorithm for constrained nonconvex and nonsmooth composite problems
- Mean curvature flow for generating discrete surfaces with piecewise constant mean curvatures
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- Smoothed Variable Sample-Size Accelerated Proximal Methods for Nonsmooth Stochastic Convex Programs
- Complexity of proximal augmented Lagrangian for nonconvex optimization with nonlinear equality constraints
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
- Robust and sparse regression in generalized linear model by stochastic optimization
- A penalty method for rank minimization problems in symmetric matrices
- Proximal Distance Algorithms: Theory and Examples
- On the information-adaptive variants of the ADMM: an iteration complexity perspective
- A Systematic Approach to Lyapunov Analyses of Continuous-Time Models in Convex Optimization
- A Projected Gradient and Constraint Linearization Method for Nonlinear Model Predictive Control
- On the Generalized Langevin Equation for Simulated Annealing
- On the Convergence of Mirror Descent beyond Stochastic Convex Programming
- A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
- Primal-dual optimization algorithms over Riemannian manifolds: an iteration complexity analysis
- Dynamic stochastic approximation for multi-stage stochastic optimization
- Inertial proximal gradient methods with Bregman regularization for a class of nonconvex optimization problems
- Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
- Approximating the nearest stable discrete-time system
- An optimal randomized incremental gradient method
- A smoothing SQP framework for a class of composite \(L_q\) minimization over polyhedron
- An accelerated directional derivative method for smooth stochastic convex optimization
- Accelerated stochastic variance reduction for a class of convex optimization problems
- Variable smoothing for weakly convex composite functions
- Provable accelerated gradient method for nonconvex low rank optimization
- Iteratively reweighted \(\ell _1\) algorithms with extrapolation
- Accelerating variance-reduced stochastic gradient methods
- Efficiency of minimizing compositions of convex functions and smooth maps
- Generalized Momentum-Based Methods: A Hamiltonian Perspective
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- Stochastic first-order methods for convex and nonconvex functional constrained optimization
- A globally convergent algorithm for nonconvex optimization based on block coordinate update
- Penalty methods with stochastic approximation for stochastic nonlinear programming
- Probability maximization via Minkowski functionals: convex representations and tractable resolution
- Optimizing the efficiency of first-order methods for decreasing the gradient of smooth convex functions
- Block Stochastic Gradient Iteration for Convex and Nonconvex Optimization
- Nonconvex optimization with inertial proximal stochastic variance reduction gradient
- Stochastic optimization using a trust-region method and random models
- Complexity of a Quadratic Penalty Accelerated Inexact Proximal Point Method for Solving Linearly Constrained Nonconvex Composite Programs
- A hybrid stochastic optimization framework for composite nonconvex optimization
- On the global convergence rate of the gradient descent method for functions with Hölder continuous gradients
- Stochastic Multilevel Composition Optimization Algorithms with Level-Independent Convergence Rates
- On computing the distance to stability for matrices using linear dissipative Hamiltonian systems
- Accelerated schemes for a class of variational inequalities
- Finding the Nearest Positive-Real System
- Acceleration method for convex optimization over the fixed point set of a nonexpansive mapping
- Complexity of an inexact proximal-point penalty method for constrained smooth non-convex optimization
- Generalized uniformly optimal methods for nonlinear programming
- Stochastic perturbation of reduced gradient \& GRG methods for nonconvex programming problems
- Generalizing the Optimized Gradient Method for Smooth Convex Minimization
- Stochastic heavy ball
- Another Look at the Fast Iterative Shrinkage/Thresholding Algorithm (FISTA)
- Accelerated Methods for NonConvex Optimization
- Recent Theoretical Advances in Non-Convex Optimization
- Optimization-Based Calibration of Simulation Input Models
- On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization
- Asynchronous Schemes for Stochastic and Misspecified Potential Games and Nonconvex Optimization
- Accelerated First-Order Primal-Dual Proximal Methods for Linearly Constrained Composite Convex Programming
- Convex Synthesis of Accelerated Gradient Algorithms
- Asynchronous variance-reduced block schemes for composite non-convex stochastic optimization: block-specific steplengths and adapted batch-sizes
- Inertial projected gradient method for large-scale topology optimization
- Primal–dual accelerated gradient methods with small-dimensional relaxation oracle
- Accelerated Stochastic Algorithms for Nonconvex Finite-Sum and Multiblock Optimization
- Conditional gradient type methods for composite nonlinear and stochastic optimization
- Momentum-based accelerated mirror descent stochastic approximation for robust topology optimization under stochastic loads
- An inexact Riemannian proximal gradient method
- A Diffusion Approximation Theory of Momentum Stochastic Gradient Descent in Nonconvex Optimization
- Random Gradient Extrapolation for Distributed and Stochastic Optimization
- A nonmonotone approximate sequence algorithm for unconstrained nonlinear optimization
- Stochastic Conditional Gradient++: (Non)Convex Minimization and Continuous Submodular Maximization
- An adaptive Polyak heavy-ball method
- Stochastic Gauss-Newton algorithm with STORM estimators for nonconvex composite optimization
- Complexity analysis based on tuning the viscosity parameter of the Su-Boyd-Candès inertial gradient dynamics
- An inertial projection and contraction algorithm for pseudomonotone variational inequalities without Lipschitz continuity
- On stochastic accelerated gradient with convergence rate
- An Average Curvature Accelerated Composite Gradient Method for Nonconvex Smooth Composite Optimization Problems
- A non-convex piecewise quadratic approximation of \(\ell_0\) regularization: theory and accelerated algorithm
- Stopping rules for gradient methods for non-convex problems with additive noise in gradient
- Hessian Barrier Algorithms for Linearly Constrained Optimization Problems
- Portfolio selection with the effect of systematic risk diversification: formulation and accelerated gradient algorithm
- Distributed Stochastic Inertial-Accelerated Methods with Delayed Derivatives for Nonconvex Problems
- SISAL Revisited
- A fast iterative PDE-based algorithm for feedback controls of nonsmooth mean-field control problems
- An overview on recent machine learning techniques for port Hamiltonian systems
- Accelerated methods with fastly vanishing subgradients for structured non-smooth minimization
- Average curvature FISTA for nonconvex smooth composite optimization problems