Pages that link to "Item:Q652287"
The following pages link to Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity (Q652287):
Displaying 50 items.
- A smoothing SQP framework for a class of composite \(L_q\) minimization over polyhedron (Q304258) (← links)
- Global complexity bound analysis of the Levenberg-Marquardt method for nonsmooth equations and its application to the nonlinear complementarity problem (Q415364) (← links)
- A trust region algorithm with adaptive cubic regularization methods for nonsmooth convex minimization (Q429459) (← links)
- Interior-point methods for nonconvex nonlinear programming: cubic regularization (Q457205) (← links)
- On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization (Q494338) (← links)
- Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality (Q504812) (← links)
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon ^{-3/2})\) for nonconvex optimization (Q517288) (← links)
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models (Q526842) (← links)
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results (Q535013) (← links)
- On a global complexity bound of the Levenberg-Marquardt method (Q620432) (← links)
- Updating the regularization parameter in the adaptive cubic regularization algorithm (Q694543) (← links)
- Worst case complexity of direct search (Q743632) (← links)
- Separable cubic modeling and a trust-region strategy for unconstrained minimization with impact in global optimization (Q746819) (← links)
- Concise complexity analyses for trust region methods (Q1634776) (← links)
- A new regularized quasi-Newton algorithm for unconstrained optimization (Q1636866) (← links)
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models (Q1646566) (← links)
- Global complexity bound of the inexact Levenberg-Marquardt method (Q1656193) (← links)
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization (Q1675558) (← links)
- On the use of the energy norm in trust-region and adaptive cubic regularization subproblems (Q1694391) (← links)
- On the worst-case evaluation complexity of non-monotone line search algorithms (Q1694392) (← links)
- A line-search algorithm inspired by the adaptive cubic regularization framework and complexity analysis (Q1730832) (← links)
- Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis (Q1734769) (← links)
- Cubic regularization in symmetric rank-1 quasi-Newton methods (Q1741108) (← links)
- A new augmented Lagrangian method for equality constrained optimization with simple unconstrained subproblem (Q1784887) (← links)
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization (Q1785005) (← links)
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization (Q2001208) (← links)
- A derivative-free trust-region algorithm for composite nonsmooth optimization (Q2013620) (← links)
- Regional complexity analysis of algorithms for nonconvex smooth optimization (Q2020615) (← links)
- Worst-case complexity bounds of directional direct-search methods for multiobjective optimization (Q2026717) (← links)
- A generalized worst-case complexity analysis for non-monotone line searches (Q2028039) (← links)
- Minimizing uniformly convex functions by cubic regularization of Newton method (Q2032037) (← links)
- Adaptive regularization with cubics on manifolds (Q2039233) (← links)
- New subspace minimization conjugate gradient methods based on regularization model for unconstrained optimization (Q2041515) (← links)
- Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives (Q2052165) (← links)
- On large-scale unconstrained optimization and arbitrary regularization (Q2070329) (← links)
- An adaptive high order method for finding third-order critical points of nonconvex optimization (Q2079692) (← links)
- On complexity and convergence of high-order coordinate descent algorithms for smooth nonconvex box-constrained minimization (Q2089862) (← links)
- On local nonglobal minimum of trust-region subproblem and extension (Q2093294) (← links)
- Several accelerated subspace minimization conjugate gradient methods based on regularization model and convergence rate analysis for nonconvex problems (Q2098802) (← links)
- A cubic regularization of Newton's method with finite difference Hessian approximations (Q2138398) (← links)
- Smoothness parameter of power of Euclidean norm (Q2178876) (← links)
- On the use of third-order models with fourth-order regularization for unconstrained optimization (Q2182770) (← links)
- Newton-type methods for non-convex optimization under inexact Hessian information (Q2205970) (← links)
- Implementable tensor methods in unconstrained convex optimization (Q2227532) (← links)
- Projected adaptive cubic regularization algorithm with derivative-free filter technique for box constrained optimization (Q2244360) (← links)
- A decoupled first/second-order steps technique for nonconvex nonlinear unconstrained optimization with improved complexity bounds (Q2288191) (← links)
- A note on inexact gradient and Hessian conditions for cubic regularized Newton's method (Q2294286) (← links)
- Large-scale unconstrained optimization using separable cubic modeling and matrix-free subspace minimization (Q2301133) (← links)
- Combining stochastic adaptive cubic regularization with negative curvature for nonconvex optimization (Q2302838) (← links)
- On global minimizers of quadratic functions with cubic regularization (Q2329649) (← links)