Oracle complexity of second-order methods for smooth convex optimization
From MaRDI portal
Publication:2330652
Abstract: Second-order methods, which utilize gradients as well as Hessians to optimize a given function, are of major importance in mathematical optimization. In this work, we prove tight bounds on the oracle complexity of such methods for smooth convex functions, or equivalently, the worst-case number of iterations required to optimize such functions to a given accuracy. In particular, these bounds indicate when such methods can or cannot improve on gradient-based methods, whose oracle complexity is much better understood. We also provide generalizations of our results to higher-order methods.
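To make the abstract's notion of a second-order oracle concrete, here is a minimal sketch (not taken from the paper itself) of plain Newton's method: each iteration issues one oracle call returning the gradient and Hessian at the current iterate, and the oracle complexity studied in the paper counts how many such calls are needed in the worst case. The function names and the quadratic test problem are illustrative assumptions.

```python
import numpy as np

def newton_method(grad, hess, x0, tol=1e-10, max_iter=50):
    """Plain Newton's method for a smooth convex function.

    Each loop iteration is one call to a second-order oracle
    (gradient and Hessian at the current point), followed by
    solving the Newton linear system. Illustrative sketch only;
    the paper bounds the worst-case number of such oracle calls.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)               # first-order oracle information
        if np.linalg.norm(g) <= tol:
            break
        H = hess(x)               # second-order oracle information
        x = x - np.linalg.solve(H, g)  # Newton step
    return x

# Example: f(x) = 0.5 x^T A x - b^T x, a smooth strongly convex quadratic,
# on which Newton's method converges in a single oracle call.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = newton_method(lambda x: A @ x - b, lambda x: A, np.zeros(2))
```

On quadratics a single Newton step reaches the minimizer; the paper's lower bounds concern general smooth convex functions, where no second-order method can do this and the worst-case iteration count grows with the target accuracy.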
Recommendations
- An optimal gradient method for smooth strongly convex minimization
- Superfast second-order methods for unconstrained convex optimization
- Worst-case evaluation complexity and optimality of second-order methods for nonconvex smooth optimization
- On the oracle complexity of smooth strongly convex minimization
- The exact information-based complexity of smooth convex minimization
Cites work
- scientific article; zbMATH DE number 3850830
- scientific article; zbMATH DE number 3790208
- scientific article; zbMATH DE number 729680
- scientific article; zbMATH DE number 2107836
- scientific article; zbMATH DE number 3052543
- Accelerating the cubic regularization of Newton's method on convex problems
- An accelerated hybrid proximal extragradient method for convex optimization and its implications to second-order methods
- Cubic regularization of Newton method and its global performance
- Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
- Introductory lectures on convex optimization. A basic course.
- On the complexity of steepest descent, Newton's and regularized Newton's methods for nonconvex unconstrained optimization problems
- On uniformly convex functionals
- Successive Rank-One Approximations for Nearly Orthogonally Decomposable Symmetric Tensors
Cited in (33)
- A search-free \(O(1/k^{3/2})\) homotopy inexact proximal-Newton extragradient algorithm for monotone variational inequalities
- Tensor methods for minimizing convex functions with Hölder continuous higher-order derivatives
- Unified acceleration of high-order algorithms under general Hölder continuity
- OFFO minimization algorithms for second-order optimality and their complexity
- On the oracle complexity of smooth strongly convex minimization
- Superfast second-order methods for unconstrained convex optimization
- A unified adaptive tensor approximation scheme to accelerate composite convex optimization
- Affine-invariant contracting-point methods for convex optimization
- Regularized Newton Method with Global \({\boldsymbol{\mathcal{O}(1/{k}^2)}}\) Convergence
- Higher-order methods for convex-concave min-max optimization and monotone variational inequalities
- A control-theoretic perspective on optimal high-order optimization
- Near-optimal tensor methods for minimizing the gradient norm of convex functions and accelerated primal–dual tensor methods
- Inexact tensor methods and their application to stochastic convex optimization
- On the oracle complexity of first-order and derivative-free algorithms for smooth nonconvex minimization
- Lower bounds for finding stationary points II: first-order methods
- An accelerated regularized Chebyshev-Halley method for unconstrained optimization
- Variants of the A-HPE and large-step A-HPE algorithms for strongly convex problems with applications to accelerated high-order tensor methods
- Super-Universal Regularized Newton Method
- Contracting proximal methods for smooth convex optimization
- Implementable tensor methods in unconstrained convex optimization
- High-order optimization methods for fully composite problems
- Lower bounds for finding stationary points I
- Complexity bounds for second-order optimality in unconstrained optimization
- High-order methods beyond the classical complexity bounds: inexact high-order proximal-point methods
- scientific article; zbMATH DE number 7626728
- On lower iteration complexity bounds for the convex concave saddle point problems
- Second-order stochastic optimization for machine learning in linear time
- An optimal high-order tensor method for convex optimization
- Inexact high-order proximal-point methods with auxiliary search procedure
- A simple method for convex optimization in the oracle model
- SketchySGD: reliable stochastic optimization via randomized curvature estimates
- Perseus: a simple and optimal high-order method for variational inequalities
- Utilizing second order information in minibatch stochastic variance reduced proximal iterations