Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models


Publication: 526842

DOI: 10.1007/s10107-016-1065-8
zbMath: 1365.90236
OpenAlex: W2510806995
MaRDI QID: Q526842

J. L. Gardenghi, José Mario Martínez, Ernesto G. Birgin, Sandra Augusta Santos, Philippe L. Toint

Publication date: 15 May 2017

Published in: Mathematical Programming. Series A. Series B

Full work available at URL: https://doi.org/10.1007/s10107-016-1065-8




Related Items (73)

Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy
On high-order model regularization for multiobjective optimization
On a multilevel Levenberg–Marquardt method for the training of artificial neural networks and its application to the solution of partial differential equations
On the complexity of solving feasibility problems with regularized models
Tensor methods for finding approximate stationary points of convex functions
Inexact basic tensor methods for some classes of convex optimization problems
A cubic regularization of Newton's method with finite difference Hessian approximations
Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
Block coordinate descent for smooth nonconvex constrained minimization
An active set trust-region method for bound-constrained optimization
Accelerated Methods for NonConvex Optimization
Superfast second-order methods for unconstrained convex optimization
On the use of third-order models with fourth-order regularization for unconstrained optimization
Cubic regularization methods with second-order complexity guarantee based on a new subproblem reformulation
Worst-Case Complexity of TRACE with Inexact Subproblem Solutions for Nonconvex Smooth Optimization
Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems
An adaptive regularization method in Banach spaces
A nonlinear conjugate gradient method with complexity guarantees and its application to nonconvex regression
Hyperfast second-order local solvers for efficient statistically preconditioned distributed optimization
Convergence Properties of an Objective-Function-Free Optimization Regularization Algorithm, Including an \(\boldsymbol{\mathcal{O}(\epsilon^{-3/2})}\) Complexity Bound
On the worst-case evaluation complexity of non-monotone line search algorithms
Super-Universal Regularized Newton Method
Efficiency of higher-order algorithms for minimizing composite functions
Adaptive Third-Order Methods for Composite Convex Optimization
Inexact accelerated high-order proximal-point methods
On High-order Model Regularization for Constrained Optimization
Approximating Higher-Order Derivative Tensors Using Secant Updates
The evaluation complexity of finding high-order minimizers of nonconvex optimization
OFFO minimization algorithms for second-order optimality and their complexity
Lower bounds for finding stationary points I
A structured diagonal Hessian approximation method with evaluation complexity analysis for nonlinear least squares
A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization
Recent Theoretical Advances in Non-Convex Optimization
Lower bounds for finding stationary points II: first-order methods
Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy
On constrained optimization with nonconvex regularization
A line-search algorithm inspired by the adaptive cubic regularization framework and complexity analysis
Implementable tensor methods in unconstrained convex optimization
Complexity of Partially Separable Convexly Constrained Optimization with Non-Lipschitzian Singularities
On High-Order Multilevel Optimization Strategies
ARCq: a new adaptive regularization by cubics
Cubic regularization in symmetric rank-1 quasi-Newton methods
On Regularization and Active-set Methods with Complexity for Constrained Optimization
Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality
Complexity of proximal augmented Lagrangian for nonconvex optimization with nonlinear equality constraints
Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
An algorithm for the minimization of nonsmooth nonconvex functions using inexact evaluations and its worst-case complexity
High-order evaluation complexity for convexly-constrained optimization with non-Lipschitzian group sparsity terms
Regional complexity analysis of algorithms for nonconvex smooth optimization
A brief survey of methods for solving nonlinear least-squares problems
Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
A generalized worst-case complexity analysis for non-monotone line searches
Complexity of gradient descent for multiobjective optimization
Evaluation Complexity for Nonlinear Constrained Optimization Using Unscaled KKT Conditions and High-Order Models
Adaptive regularization with cubics on manifolds
New subspace minimization conjugate gradient methods based on regularization model for unconstrained optimization
A concise second-order complexity analysis for unconstrained optimization using high-order regularized models
Near-Optimal Hyperfast Second-Order Method for Convex Optimization
Sharp Worst-Case Evaluation Complexity Bounds for Arbitrary-Order Nonconvex Optimization with Inexpensive Constraints
Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives
Tensor Methods for Minimizing Convex Functions with Hölder Continuous Higher-Order Derivatives
On large-scale unconstrained optimization and arbitrary regularization
Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact
An adaptive high order method for finding third-order critical points of nonconvex optimization
Adaptive Regularization Algorithms with Inexact Evaluations for Nonconvex Optimization
On global minimizers of quadratic functions with cubic regularization
Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary
A control-theoretic perspective on optimal high-order optimization
On complexity and convergence of high-order coordinate descent algorithms for smooth nonconvex box-constrained minimization
On inexact solution of auxiliary problems in tensor methods for convex optimization
Inexact High-Order Proximal-Point Methods with Auxiliary Search Procedure
The Use of Quadratic Regularization with a Cubic Descent Condition for Unconstrained Optimization
An Optimal High-Order Tensor Method for Convex Optimization







This page was built for publication: Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models