Accelerated gradient methods for nonconvex nonlinear and stochastic programming
DOI: 10.1007/s10107-015-0871-8 · zbMATH Open: 1335.62121 · arXiv: 1310.3787 · OpenAlex: W1987083649 · MaRDI QID: Q263185
Authors: Saeed Ghadimi, Guanghui Lan
Publication date: 4 April 2016
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1310.3787
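The record above concerns the Ghadimi–Lan accelerated gradient (AG) method for nonconvex smooth and stochastic problems. As a rough illustration only — not the paper's exact scheme — the following sketch shows the characteristic three-sequence Nesterov-style iteration (a "middle" point blending two iterate sequences, with a gradient step from that point); the step sizes `lam` and `beta` and the choice `alpha_k = 2/(k+1)` are simplifying assumptions, not the paper's stepsize rules.

```python
import numpy as np

def accelerated_gradient(grad, x0, n_iters=200, lam=0.1, beta=0.1):
    """Illustrative Nesterov-style accelerated gradient sketch.

    grad: callable returning the gradient of the objective at a point.
    The constant step sizes used here are assumptions chosen for
    stability on a toy problem, not the stepsize policy of the paper.
    """
    x = x0.copy()       # "standard" iterate sequence
    x_ag = x0.copy()    # "aggregated" (accelerated) iterate sequence
    for k in range(1, n_iters + 1):
        a = 2.0 / (k + 1)                  # common weighting alpha_k = 2/(k+1)
        x_md = (1.0 - a) * x_ag + a * x    # middle point between the sequences
        g = grad(x_md)                     # (sub)gradient at the middle point
        x = x - lam * g                    # update the standard sequence
        x_ag = x_md - beta * g             # gradient step defines the AG sequence
    return x_ag

# usage on a simple convex quadratic f(x) = 0.5 * ||x||^2, so grad(x) = x
sol = accelerated_gradient(lambda x: x, np.array([5.0, -3.0]))
```

On this toy quadratic the returned point is driven close to the minimizer at the origin; for the nonconvex guarantees (convergence to stationary points, stochastic gradients, mini-batching) the paper's own stepsize conditions are required.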
Recommendations
- A note on the accelerated proximal gradient method for nonconvex optimization
- Accelerated methods for nonconvex optimization
- Gradient methods for minimizing composite functions
- Generalized uniformly optimal methods for nonlinear programming
- Generalized Nesterov's accelerated proximal gradient algorithms with convergence rate of order \(o(1/k^2)\)
Classification: Convex programming (90C25) · Stochastic approximation (62L20) · Analysis of algorithms and problem complexity (68Q25) · Stochastic programming (90C15)
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Smooth minimization of non-smooth functions
- Introductory lectures on convex optimization. A basic course.
- Gradient methods for minimizing composite functions
- Acceleration of Stochastic Approximation by Averaging
- A Stochastic Approximation Method
- Stochastic simulation: Algorithms and analysis
- Robust Stochastic Approximation Approach to Stochastic Programming
- Title not available
- Introduction to Stochastic Search and Optimization
- Iteration-complexity of first-order penalty methods for convex programming
- Complexity of unconstrained \(L_2 - L_p\) minimization
- Optimization for simulation: theory vs. practice
- Online learning for matrix factorization and sparse coding
- Stochastic block mirror descent methods for nonsmooth and stochastic optimization
- A proximal method for composite minimization
- On the complexity of steepest descent, Newton's and regularized Newton's methods for nonconvex unconstrained optimization problems
- Recursive Trust-Region Methods for Multiscale Nonlinear Optimization
- An optimal method for stochastic composite optimization
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization. II: Shrinking procedures and optimal algorithms
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
Cited In (only showing first 100 items)
- An inexact Riemannian proximal gradient method
- A nonmonotone approximate sequence algorithm for unconstrained nonlinear optimization
- A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization
- An adaptive Polyak heavy-ball method
- Stochastic Gauss-Newton algorithm with STORM estimators for nonconvex composite optimization
- An Accelerated Inexact Proximal Point Method for Solving Nonconvex-Concave Min-Max Problems
- On stochastic accelerated gradient with convergence rate of regression learning
- On stochastic accelerated gradient with convergence rate
- Title not available
- A non-convex piecewise quadratic approximation of \(\ell_0\) regularization: theory and accelerated algorithm
- Stopping rules for gradient methods for non-convex problems with additive noise in gradient
- An overview on recent machine learning techniques for port Hamiltonian systems
- Accelerated methods with fastly vanishing subgradients for structured non-smooth minimization
- Accelerated Optimization in the PDE Framework: Formulations for the Manifold of Diffeomorphisms
- Adaptive FISTA for Nonconvex Optimization
- Accelerated Optimization in the PDE Framework Formulations for the Active Contour Case
- Stochastic conditional gradient++: (Non)convex minimization and continuous submodular maximization
- Nonlinear Gradient Mappings and Stochastic Optimization: A General Framework with Applications to Heavy-Tail Noise
- Accelerated inexact composite gradient methods for nonconvex spectral optimization problems
- Accelerating variance-reduced stochastic gradient methods
- Random gradient extrapolation for distributed and stochastic optimization
- Efficient learning with a family of nonconvex regularizers by redistributing nonconvexity
- An average curvature accelerated composite gradient method for nonconvex smooth composite optimization problems
- Two stochastic optimization algorithms for convex optimization with fixed point constraints
- A Single Timescale Stochastic Approximation Method for Nested Stochastic Optimization
- A FISTA-type accelerated gradient algorithm for solving smooth nonconvex composite optimization problems
- Accelerated randomized mirror descent algorithms for composite non-strongly convex optimization
- Complexity analysis of a stochastic variant of generalized alternating direction method of multipliers
- Hessian barrier algorithms for linearly constrained optimization problems
- Misspecified nonconvex statistical optimization for sparse phase retrieval
- An overview of stochastic quasi-Newton methods for large-scale machine learning
- Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria
- An efficient adaptive accelerated inexact proximal point method for solving linearly constrained nonconvex composite problems
- Momentum-based variance-reduced proximal stochastic gradient method for composite nonconvex stochastic optimization
- Behavior of accelerated gradient methods near critical points of nonconvex functions
- Understanding the acceleration phenomenon via high-resolution differential equations
- Nonlinear acceleration of momentum and primal-dual algorithms
- Analysis of generalized Bregman surrogate algorithms for nonsmooth nonconvex statistical learning
- A unified adaptive tensor approximation scheme to accelerate composite convex optimization
- Distributed variable sample-size gradient-response and best-response schemes for stochastic Nash equilibrium problems
- Scheduled restart momentum for accelerated stochastic gradient descent
- Accelerated primal-dual gradient descent with linesearch for convex, nonconvex, and nonsmooth optimization problems
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
- A multi-step doubly stabilized bundle method for nonsmooth convex optimization
- A smoothing proximal gradient algorithm with extrapolation for the relaxation of \({\ell_0}\) regularization problem
- Distributed stochastic gradient tracking methods with momentum acceleration for non-convex optimization
- Non-convex regularization and accelerated gradient algorithm for sparse portfolio selection
- Run-and-inspect method for nonconvex optimization and global optimality bounds for R-local minimizers
- Complexity of a quadratic penalty accelerated inexact proximal point method for solving linearly constrained nonconvex composite programs
- Algorithms for stochastic optimization with function or expectation constraints
- Riemannian proximal gradient methods
- Extrapolated smoothing descent algorithm for constrained nonconvex and nonsmooth composite problems
- Mean curvature flow for generating discrete surfaces with piecewise constant mean curvatures
- Stochastic multilevel composition optimization algorithms with level-independent convergence rates
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- Stochastic nested variance reduction for nonconvex optimization
- Smoothed Variable Sample-Size Accelerated Proximal Methods for Nonsmooth Stochastic Convex Programs
- Complexity of proximal augmented Lagrangian for nonconvex optimization with nonlinear equality constraints
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
- Finding the nearest positive-real system
- Robust and sparse regression in generalized linear model by stochastic optimization
- A penalty method for rank minimization problems in symmetric matrices
- Optimization-based calibration of simulation input models
- On the information-adaptive variants of the ADMM: an iteration complexity perspective
- A Systematic Approach to Lyapunov Analyses of Continuous-Time Models in Convex Optimization
- On the Generalized Langevin Equation for Simulated Annealing
- Another look at the fast iterative shrinkage/thresholding algorithm (FISTA)
- Generalizing the optimized gradient method for smooth convex minimization
- Accelerated methods for nonconvex optimization
- A note on the accelerated proximal gradient method for nonconvex optimization
- Primal-dual optimization algorithms over Riemannian manifolds: an iteration complexity analysis
- Asynchronous schemes for stochastic and misspecified potential games and nonconvex optimization
- Dynamic stochastic approximation for multi-stage stochastic optimization
- Inertial proximal gradient methods with Bregman regularization for a class of nonconvex optimization problems
- Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
- Approximating the nearest stable discrete-time system
- Block stochastic gradient iteration for convex and nonconvex optimization
- An optimal randomized incremental gradient method
- Accelerated first-order primal-dual proximal methods for linearly constrained composite convex programming
- On stationary-point hitting time and ergodicity of stochastic gradient Langevin dynamics
- A smoothing SQP framework for a class of composite \(L_q\) minimization over polyhedron
- Robust accelerated gradient methods for smooth strongly convex functions
- Title not available
- Primal-dual accelerated gradient methods with small-dimensional relaxation oracle
- An accelerated directional derivative method for smooth stochastic convex optimization
- Accelerated stochastic variance reduction for a class of convex optimization problems
- Variable smoothing for weakly convex composite functions
- Provable accelerated gradient method for nonconvex low rank optimization
- Title not available
- Iteratively reweighted \(\ell _1\) algorithms with extrapolation
- Efficiency of minimizing compositions of convex functions and smooth maps
- On the convergence of mirror descent beyond stochastic convex programming
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- Title not available
- Proximal distance algorithms: theory and practice
- Stochastic first-order methods for convex and nonconvex functional constrained optimization
- A globally convergent algorithm for nonconvex optimization based on block coordinate update
- Penalty methods with stochastic approximation for stochastic nonlinear programming