Pages that link to "Item:Q652287"
From MaRDI portal
The following pages link to Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity (Q652287):
Displaying 50 items.
- Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary (Q2330649) (← links)
- Algebraic rules for quadratic regularization of Newton's method (Q2340522) (← links)
- Recent advances in trust region algorithms (Q2349124) (← links)
- An interior affine scaling cubic regularization algorithm for derivative-free optimization subject to bound constraints (Q2357423) (← links)
- Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems (Q2419539) (← links)
- A Newton-like method with mixed factorizations and cubic regularization for unconstrained minimization (Q2419564) (← links)
- On the complexity of finding first-order critical points in constrained nonlinear optimization (Q2452373) (← links)
- Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization (Q2515043) (← links)
- On the use of iterative methods in cubic regularization for unconstrained optimization (Q2515064) (← links)
- Cubic regularization methods with second-order complexity guarantee based on a new subproblem reformulation (Q2676160) (← links)
- Riemannian stochastic variance-reduced cubic regularized Newton method for submanifold optimization (Q2679570) (← links)
- Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points (Q2696568) (← links)
- Majorization-minimization-based Levenberg-Marquardt method for constrained nonlinear least squares (Q2696927) (← links)
- A sequential adaptive regularisation using cubics algorithm for solving nonlinear equality constrained optimization (Q2696932) (← links)
- Evaluation Complexity for Nonlinear Constrained Optimization Using Unscaled KKT Conditions and High-Order Models (Q2802144) (← links)
- A second-order globally convergent direct-search method and its worst-case complexity (Q2810113) (← links)
- On the worst-case complexity of nonlinear stepsize control algorithms for convex unconstrained optimization (Q2815548) (← links)
- Trust-Region Methods Without Using Derivatives: Worst Case Complexity and the NonSmooth Case (Q2826817) (← links)
- Global complexity bound of the Levenberg–Marquardt method (Q2829563) (← links)
- A cubic regularization algorithm for unconstrained optimization using line search and nonmonotone techniques (Q2829572) (← links)
- Cubic overestimation and secant updating for unconstrained optimization of <i>C</i><sup>2, 1</sup> functions (Q2926071) (← links)
- Worst-case evaluation complexity of regularization methods for smooth unconstrained optimization using Hölder continuous gradients (Q4594856) (← links)
- On High-order Model Regularization for Constrained Optimization (Q4602340) (← links)
- A Newton-Based Method for Nonconvex Optimization with Fast Evasion of Saddle Points (Q4620423) (← links)
- Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy (Q4629334) (← links)
- Complexity of Partially Separable Convexly Constrained Optimization with Non-Lipschitzian Singularities (Q4634141) (← links)
- On the Quadratic Convergence of the Cubic Regularization Method under a Local Error Bound Condition (Q4634142) (← links)
- <tt>trlib</tt>: a vector-free implementation of the GLTR method for iterative solution of the trust region problem (Q4637824) (← links)
- ARC<sub>q</sub>: a new adaptive regularization by cubics (Q4638924) (← links)
- On Regularization and Active-set Methods with Complexity for Constrained Optimization (Q4641664) (← links)
- Complexity Analysis of Second-Order Line-Search Algorithms for Smooth Nonconvex Optimization (Q4641667) (← links)
- Complexity Analysis of a Trust Funnel Algorithm for Equality Constrained Optimization (Q4641670) (← links)
- Accelerated Regularized Newton Methods for Minimizing Composite Convex Functions (Q4646444) (← links)
- An improvement of adaptive cubic regularization method for unconstrained optimization problems (Q5031220) (← links)
- Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy (Q5034938) (← links)
- Finding zeros of Hölder metrically subregular mappings via globally convergent Levenberg–Marquardt methods (Q5038173) (← links)
- On high-order model regularization for multiobjective optimization (Q5038176) (← links)
- On the complexity of solving feasibility problems with regularized models (Q5038424) (← links)
- A note on the worst-case complexity of nonlinear stepsize control methods for convex smooth unconstrained optimization (Q5085238) (← links)
- First-Order Methods for Nonconvex Quadratic Minimization (Q5113167) (← links)
- A Trust Region Method for Finding Second-Order Stationarity in Linearly Constrained Nonconvex Optimization (Q5124006) (← links)
- A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization (Q5131958) (← links)
- Second-Order Guarantees of Distributed Gradient Algorithms (Q5131964) (← links)
- On High-Order Multilevel Optimization Strategies (Q5147030) (← links)
- Solving Large-Scale Cubic Regularization by a Generalized Eigenvalue Problem (Q5148399) (← links)
- Convergence of Newton-MR under Inexact Hessian Information (Q5148404) (← links)
- Smoothing quadratic regularization method for hemivariational inequalities (Q5151500) (← links)
- On the Complexity of an Inexact Restoration Method for Constrained Optimization (Q5210514) (← links)
- A concise second-order complexity analysis for unconstrained optimization using high-order regularized models (Q5210739) (← links)
- (Q5214226) (← links)