Pages that link to "Item:Q526842"
From MaRDI portal
The following pages link to Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models (Q526842):
Displaying 50 items.
- Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality (Q504812) (← links)
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models (Q1646566) (← links)
- On the worst-case evaluation complexity of non-monotone line search algorithms (Q1694392) (← links)
- A structured diagonal Hessian approximation method with evaluation complexity analysis for nonlinear least squares (Q1715713) (← links)
- A line-search algorithm inspired by the adaptive cubic regularization framework and complexity analysis (Q1730832) (← links)
- Cubic regularization in symmetric rank-1 quasi-Newton methods (Q1741108) (← links)
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization (Q1785005) (← links)
- Complexity of proximal augmented Lagrangian for nonconvex optimization with nonlinear equality constraints (Q1995981) (← links)
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization (Q2001208) (← links)
- An algorithm for the minimization of nonsmooth nonconvex functions using inexact evaluations and its worst-case complexity (Q2020598) (← links)
- High-order evaluation complexity for convexly-constrained optimization with non-Lipschitzian group sparsity terms (Q2020600) (← links)
- Regional complexity analysis of algorithms for nonconvex smooth optimization (Q2020615) (← links)
- A generalized worst-case complexity analysis for non-monotone line searches (Q2028039) (← links)
- Adaptive regularization with cubics on manifolds (Q2039233) (← links)
- New subspace minimization conjugate gradient methods based on regularization model for unconstrained optimization (Q2041515) (← links)
- Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives (Q2052165) (← links)
- On large-scale unconstrained optimization and arbitrary regularization (Q2070329) (← links)
- An adaptive high order method for finding third-order critical points of nonconvex optimization (Q2079692) (← links)
- A control-theoretic perspective on optimal high-order optimization (Q2089793) (← links)
- On complexity and convergence of high-order coordinate descent algorithms for smooth nonconvex box-constrained minimization (Q2089862) (← links)
- A cubic regularization of Newton's method with finite difference Hessian approximations (Q2138398) (← links)
- Block coordinate descent for smooth nonconvex constrained minimization (Q2162523) (← links)
- An active set trust-region method for bound-constrained optimization (Q2169274) (← links)
- On the use of third-order models with fourth-order regularization for unconstrained optimization (Q2182770) (← links)
- Lower bounds for finding stationary points I (Q2205972) (← links)
- Lower bounds for finding stationary points II: first-order methods (Q2220663) (← links)
- On constrained optimization with nonconvex regularization (Q2225521) (← links)
- Implementable tensor methods in unconstrained convex optimization (Q2227532) (← links)
- A brief survey of methods for solving nonlinear least-squares problems (Q2273095) (← links)
- On global minimizers of quadratic functions with cubic regularization (Q2329649) (← links)
- Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary (Q2330649) (← links)
- Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems (Q2419539) (← links)
- Superfast second-order methods for unconstrained convex optimization (Q2664892) (← links)
- Cubic regularization methods with second-order complexity guarantee based on a new subproblem reformulation (Q2676160) (← links)
- Inexact accelerated high-order proximal-point methods (Q2689812) (← links)
- OFFO minimization algorithms for second-order optimality and their complexity (Q2696918) (← links)
- Evaluation Complexity for Nonlinear Constrained Optimization Using Unscaled KKT Conditions and High-Order Models (Q2802144) (← links)
- Accelerated Methods for NonConvex Optimization (Q4571877) (← links)
- On High-order Model Regularization for Constrained Optimization (Q4602340) (← links)
- Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy (Q4629334) (← links)
- Complexity of Partially Separable Convexly Constrained Optimization with Non-Lipschitzian Singularities (Q4634141) (← links)
- ARCq: a new adaptive regularization by cubics (Q4638924) (← links)
- On Regularization and Active-set Methods with Complexity for Constrained Optimization (Q4641664) (← links)
- Near-Optimal Hyperfast Second-Order Method for Convex Optimization (Q4965110) (← links)
- Tensor Methods for Minimizing Convex Functions with Hölder Continuous Higher-Order Derivatives (Q4971023) (← links)
- Inexact High-Order Proximal-Point Methods with Auxiliary Search Procedure (Q5013580) (← links)
- An Optimal High-Order Tensor Method for Convex Optimization (Q5026443) (← links)
- Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy (Q5034938) (← links)
- On high-order model regularization for multiobjective optimization (Q5038176) (← links)
- On a multilevel Levenberg–Marquardt method for the training of artificial neural networks and its application to the solution of partial differential equations (Q5038185) (← links)