Implementable tensor methods in unconstrained convex optimization
Publication: 2227532
DOI: 10.1007/s10107-019-01449-1
zbMath: 1459.90157
OpenAlex: W2808862920
MaRDI QID: Q2227532
Publication date: 15 February 2021
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://doi.org/10.1007/s10107-019-01449-1
Keywords: convex optimization; high-order methods; tensor methods; lower complexity bounds; worst-case complexity bounds
MSC classifications: Numerical mathematical programming methods (65K05); Convex programming (90C25); Large-scale problems in mathematical programming (90C06)
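For orientation, the construction behind the paper's title can be sketched as follows. This is a hedged summary of the standard \(p\)-th order tensor-method framework, in notation assumed for illustration rather than quoted from the paper: for a convex function \(f\) whose \(p\)-th derivative is Lipschitz continuous with constant \(L_p\), each iteration minimizes a regularized Taylor model,
\[
x_{k+1} \in \operatorname*{arg\,min}_{y} \;\left\{ \sum_{i=0}^{p} \frac{1}{i!}\, D^i f(x_k)[y - x_k]^i \;+\; \frac{M}{(p+1)!}\,\|y - x_k\|^{p+1} \right\}.
\]
The "implementable" aspect is that for \(M \ge p\,L_p\) this auxiliary model is itself convex, so the step can be computed by standard convex optimization techniques; the basic scheme then converges at the rate \(\mathcal{O}(1/k^{p})\), with an accelerated variant achieving \(\mathcal{O}(1/k^{p+1})\).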
Related Items
- Tensor methods for finding approximate stationary points of convex functions
- Inexact basic tensor methods for some classes of convex optimization problems
- Local convergence of tensor methods
- Variants of the A-HPE and large-step A-HPE algorithms for strongly convex problems with applications to accelerated high-order tensor methods
- Optimal complexity and certification of Bregman first-order methods
- Generalized mirror prox algorithm for monotone variational inequalities: Universality and inexact oracle
- Accelerated meta-algorithm for convex optimization problems
- Superfast second-order methods for unconstrained convex optimization
- Smoothness parameter of power of Euclidean norm
- Cubic regularization methods with second-order complexity guarantee based on a new subproblem reformulation
- Regularized Newton Method with Global \(\mathcal{O}(1/k^2)\) Convergence
- Super-Universal Regularized Newton Method
- Efficiency of higher-order algorithms for minimizing composite functions
- Smooth monotone stochastic variational inequalities and saddle point problems: a survey
- First-order methods for convex optimization
- Adaptive Third-Order Methods for Composite Convex Optimization
- Inexact accelerated high-order proximal-point methods
- Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
- A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization
- Contracting Proximal Methods for Smooth Convex Optimization
- Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
- Accelerated Bregman proximal gradient methods for relatively smooth convex optimization
- Tensor Methods for Minimizing Convex Functions with Hölder Continuous Higher-Order Derivatives
- An adaptive high order method for finding third-order critical points of nonconvex optimization
- Unified Acceleration of High-Order Algorithms under General Hölder Continuity
- Critical Point-Finding Methods Reveal Gradient-Flat Regions of Deep Network Losses
- A Bregman Forward-Backward Linesearch Algorithm for Nonconvex Composite Optimization: Superlinear Convergence to Nonisolated Local Minima
- Dual Space Preconditioning for Gradient Descent
- Inexact High-Order Proximal-Point Methods with Auxiliary Search Procedure
- Inexact model: a framework for optimization and variational inequalities
- Higher-Order Methods for Convex-Concave Min-Max Optimization and Monotone Variational Inequalities
- High-Order Optimization Methods for Fully Composite Problems
- An Optimal High-Order Tensor Method for Convex Optimization
Uses Software
Cites Work
- Smooth minimization of non-smooth functions
- Gradient methods for minimizing composite functions
- Universal gradient methods for convex optimization problems
- On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Accelerating the cubic regularization of Newton's method on convex problems
- Higher order necessary conditions in abstract mathematical programming
- Introductory lectures on convex optimization. A basic course.
- Lower bounds for finding stationary points I
- Oracle complexity of second-order methods for smooth convex optimization
- Cubic regularization of Newton method and its global performance
- Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization
- An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and Its Implications to Second-Order Methods
- Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
- Evaluating Derivatives
- On large-scale unconstrained optimization problems and higher order methods
- Tensor Methods for Unconstrained Optimization Using Second Derivatives
- Trust Region Methods
- Relatively Smooth Convex Optimization by First-Order Methods, and Applications
- Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy
- GALAHAD, a library of thread-safe Fortran 90 packages for large-scale nonlinear optimization
- Regularized Newton Methods for Minimizing Functions with Hölder Continuous Hessians
- A Descent Lemma Beyond Lipschitz Gradient Continuity: First-Order Methods Revisited and Applications