Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models

From MaRDI portal
Publication:526842

DOI: 10.1007/s10107-016-1065-8
zbMath: 1365.90236
OpenAlex: W2510806995
MaRDI QID: Q526842

J. L. Gardenghi, José Mario Martínez, Ernesto G. Birgin, Sandra Augusta Santos, Phillipe L. Toint

Publication date: 15 May 2017

Published in: Mathematical Programming. Series A. Series B

Full work available at URL: https://doi.org/10.1007/s10107-016-1065-8



Related Items

Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy
On high-order model regularization for multiobjective optimization
On a multilevel Levenberg–Marquardt method for the training of artificial neural networks and its application to the solution of partial differential equations
On the complexity of solving feasibility problems with regularized models
Tensor methods for finding approximate stationary points of convex functions
Inexact basic tensor methods for some classes of convex optimization problems
A cubic regularization of Newton's method with finite difference Hessian approximations
Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
Block coordinate descent for smooth nonconvex constrained minimization
An active set trust-region method for bound-constrained optimization
Accelerated Methods for NonConvex Optimization
Superfast second-order methods for unconstrained convex optimization
On the use of third-order models with fourth-order regularization for unconstrained optimization
Cubic regularization methods with second-order complexity guarantee based on a new subproblem reformulation
Worst-Case Complexity of TRACE with Inexact Subproblem Solutions for Nonconvex Smooth Optimization
Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems
An adaptive regularization method in Banach spaces
A nonlinear conjugate gradient method with complexity guarantees and its application to nonconvex regression
Hyperfast second-order local solvers for efficient statistically preconditioned distributed optimization
Convergence Properties of an Objective-Function-Free Optimization Regularization Algorithm, Including an \(\boldsymbol{\mathcal{O}(\epsilon^{-3/2})}\) Complexity Bound
On the worst-case evaluation complexity of non-monotone line search algorithms
Super-Universal Regularized Newton Method
Efficiency of higher-order algorithms for minimizing composite functions
Adaptive Third-Order Methods for Composite Convex Optimization
Inexact accelerated high-order proximal-point methods
On High-order Model Regularization for Constrained Optimization
Approximating Higher-Order Derivative Tensors Using Secant Updates
The evaluation complexity of finding high-order minimizers of nonconvex optimization
OFFO minimization algorithms for second-order optimality and their complexity
Lower bounds for finding stationary points I
A structured diagonal Hessian approximation method with evaluation complexity analysis for nonlinear least squares
A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization
Recent Theoretical Advances in Non-Convex Optimization
Lower bounds for finding stationary points II: first-order methods
Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy
On constrained optimization with nonconvex regularization
A line-search algorithm inspired by the adaptive cubic regularization framework and complexity analysis
Implementable tensor methods in unconstrained convex optimization
Complexity of Partially Separable Convexly Constrained Optimization with Non-Lipschitzian Singularities
On High-Order Multilevel Optimization Strategies
ARCq: a new adaptive regularization by cubics
Cubic regularization in symmetric rank-1 quasi-Newton methods
On Regularization and Active-set Methods with Complexity for Constrained Optimization
Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality
Complexity of proximal augmented Lagrangian for nonconvex optimization with nonlinear equality constraints
Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
An algorithm for the minimization of nonsmooth nonconvex functions using inexact evaluations and its worst-case complexity
High-order evaluation complexity for convexly-constrained optimization with non-Lipschitzian group sparsity terms
Regional complexity analysis of algorithms for nonconvex smooth optimization
A brief survey of methods for solving nonlinear least-squares problems
Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
A generalized worst-case complexity analysis for non-monotone line searches
Complexity of gradient descent for multiobjective optimization
Evaluation Complexity for Nonlinear Constrained Optimization Using Unscaled KKT Conditions and High-Order Models
Adaptive regularization with cubics on manifolds
New subspace minimization conjugate gradient methods based on regularization model for unconstrained optimization
A concise second-order complexity analysis for unconstrained optimization using high-order regularized models
Near-Optimal Hyperfast Second-Order Method for Convex Optimization
Sharp Worst-Case Evaluation Complexity Bounds for Arbitrary-Order Nonconvex Optimization with Inexpensive Constraints
Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives
Tensor Methods for Minimizing Convex Functions with Hölder Continuous Higher-Order Derivatives
On large-scale unconstrained optimization and arbitrary regularization
Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact
An adaptive high order method for finding third-order critical points of nonconvex optimization
Adaptive Regularization Algorithms with Inexact Evaluations for Nonconvex Optimization
On global minimizers of quadratic functions with cubic regularization
Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary
A control-theoretic perspective on optimal high-order optimization
On complexity and convergence of high-order coordinate descent algorithms for smooth nonconvex box-constrained minimization
On inexact solution of auxiliary problems in tensor methods for convex optimization
Inexact High-Order Proximal-Point Methods with Auxiliary Search Procedure
The Use of Quadratic Regularization with a Cubic Descent Condition for Unconstrained Optimization
An Optimal High-Order Tensor Method for Convex Optimization



Cites Work