Universal gradient methods for convex optimization problems
DOI: 10.1007/s10107-014-0790-0 · zbMATH Open: 1327.90216 · OpenAlex: W1985240368 · MaRDI QID: Q494332
Author: Yurii Nesterov
Publication date: 31 August 2015
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: http://uclouvain.be/cps/ucl/doc/core/documents/coredp2013_26web.pdf
MSC Classification
- Convex programming (90C25)
- Analysis of algorithms and problem complexity (68Q25)
- Minimax problems in mathematical programming (90C47)
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Smooth minimization of non-smooth functions
- Introductory lectures on convex optimization. A basic course.
- Gradient methods for minimizing composite functions
- Primal-dual subgradient methods for convex problems
- Title not available
- First-order methods of smooth convex optimization with inexact oracle
- New variants of bundle methods
- Optimal methods of smooth convex minimization
- Vector approximation problems in geometric vector optimization
Cited In (98)
- A family of subgradient-based methods for convex optimization problems in a unifying framework
- A universal modification of the linear coupling method
- Optimal subgradient methods: computational properties for large-scale linear inverse problems
- Adaptivity of stochastic gradient methods for nonconvex optimization
- Dynamic smoothness parameter for fast gradient methods
- Inexact model: a framework for optimization and variational inequalities
- Linear coupling: an ultimate unification of gradient and mirror descent
- Quasi-convex feasibility problems: subgradient methods and convergence rates
- First-order methods for convex optimization
- On the properties of the method of minimization for convex functions with relaxation on the distance to extremum
- Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy
- Zeroth-order methods for noisy Hölder-gradient functions
- Universal intermediate gradient method for convex problems with inexact oracle
- Essentials of numerical nonsmooth optimization
- On the computational efficiency of subgradient methods: a case study with Lagrangian bounds
- Convex optimization with inexact gradients in Hilbert space and applications to elliptic inverse problems
- Accelerated meta-algorithm for convex optimization problems
- Stochastic model-based minimization of weakly convex functions
- Efficiency of the accelerated coordinate descent method on structured optimization problems
- Sharpness, restart, and acceleration
- OSGA: a fast subgradient algorithm with optimal complexity
- Primal-dual accelerated gradient methods with small-dimensional relaxation oracle
- An accelerated directional derivative method for smooth stochastic convex optimization
- Nearly optimal first-order methods for convex optimization under gradient norm measure: an adaptive regularization approach
- Smoothness parameter of power of Euclidean norm
- New results on subgradient methods for strongly convex optimization problems with a unified analysis
- Efficient first-order methods for convex minimization: a constructive approach
- Uniform rank gradient, cost, and local-global convergence
- Accelerated first-order methods for large-scale convex optimization: nearly optimal complexity under strong convexity
- Efficiency of minimizing compositions of convex functions and smooth maps
- The approximate duality gap technique: a unified theory of first-order methods
- Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
- Accelerated Bregman proximal gradient methods for relatively smooth convex optimization
- A block inertial Bregman proximal algorithm for nonsmooth nonconvex problems with application to symmetric nonnegative matrix tri-factorization
- Generalized mirror prox algorithm for monotone variational inequalities: Universality and inexact oracle
- Implementable tensor methods in unconstrained convex optimization
- Universal method for stochastic composite optimization problems
- First-order optimization algorithms via inertial systems with Hessian driven damping
- A simple nearly optimal restart scheme for speeding up first-order methods
- Universal method of searching for equilibria and stochastic equilibria in transportation networks
- An adaptive proximal method for variational inequalities
- On the adaptivity of stochastic gradient-based optimization
- A dual approach for optimal algorithms in distributed optimization over networks
- Optimal subgradient algorithms for large-scale convex optimization in simple domains
- On the global convergence rate of the gradient descent method for functions with Hölder continuous gradients
- Solving structured nonsmooth convex optimization with complexity \(\mathcal {O}(\varepsilon ^{-1/2})\)
- Accelerated schemes for a class of variational inequalities
- Dual approaches to the minimization of strongly convex functionals with a simple structure under affine constraints
- Complexity bounds for primal-dual methods minimizing the model of objective function
- An optimal gradient method for smooth strongly convex minimization
- Fast gradient descent for convex minimization problems with an oracle producing a \(( \delta, L)\)-model of function at the requested point
- Generalized uniformly optimal methods for nonlinear programming
- An introduction to continuous optimization for imaging
- Perturbed Fenchel duality and first-order methods
- Generalized Nesterov's accelerated proximal gradient algorithms with convergence rate of order \(o(1/k^2)\)
- Regularized Newton methods for minimizing functions with Hölder continuous Hessians
- An optimal subgradient algorithm with subspace search for costly convex optimization problems
- Accelerated primal-dual gradient descent with linesearch for convex, nonconvex, and nonsmooth optimization problems
- Unified acceleration of high-order algorithms under general Hölder continuity
- Fast gradient methods for uniformly convex and weakly smooth problems
- On the quality of first-order approximation of functions with Hölder continuous gradient
- Regularized nonlinear acceleration
- Recent advances in structural optimization
- Accelerated first-order methods for hyperbolic programming
- Conditional gradient type methods for composite nonlinear and stochastic optimization
- Optimal Algorithms for Stochastic Complementary Composite Minimization
- The backtrack Hölder gradient method with application to min-max and min-min problems
- Title not available
- Multistage transportation model and sufficient conditions for its potentiality
- Title not available
- Optimal Affine-Invariant Smooth Minimization Algorithms
- Gradient descent in the absence of global Lipschitz continuity of the gradients
- Stopping rules for gradient methods for non-convex problems with additive noise in gradient
- Title not available
- Complexity-optimal and parameter-free first-order methods for finding stationary points of composite optimization problems
- Cyclic Coordinate Dual Averaging with Extrapolation
- A universal accelerated primal-dual method for convex optimization problems
- The method of codifferential descent for convex and global piecewise affine optimization
- On optimal universal first-order methods for minimizing heterogeneous sums
- Universal Conditional Gradient Sliding for Convex Optimization
- Generalized conditional gradient with augmented Lagrangian for composite minimization
- Complementary composite minimization, small gradients in general norms, and applications
- High-order methods beyond the classical complexity bounds: inexact high-order proximal-point methods
- General Hölder smooth convergence rates follow from specialized rates assuming growth bounds
- An adaptive analog of Nesterov's method for variational inequalities with a strongly monotone operator
- Accelerated extra-gradient descent: a novel accelerated first-order method
- Gradient methods with memory
- A subgradient method for free material design
- Gradient-Type Methods for Optimization Problems with Polyak-Łojasiewicz Condition: Early Stopping and Adaptivity to Inexactness Parameter
- Inexact reduced gradient methods in nonconvex optimization
- Non-monotone Behavior of the Heavy Ball Method
- High-probability complexity bounds for non-smooth stochastic convex optimization with heavy-tailed noise
- Universal nonmonotone line search method for nonconvex multiobjective optimization problems with convex constraints
- Some adaptive first-order methods for variational inequalities with relatively strongly monotone operators and generalized smoothness
- The impact of noise on evaluation complexity: the deterministic trust-region case
- Empirical risk minimization: probabilistic complexity and stepsize strategy
- Radial duality. II: Applications and algorithms
- Essentials of numerical nonsmooth optimization