Pages that link to "Item:Q3083310"
From MaRDI portal
The following pages link to On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems (Q3083310):
Displayed 50 items.
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming (Q263185) (← links)
- Global complexity bound analysis of the Levenberg-Marquardt method for nonsmooth equations and its application to the nonlinear complementarity problem (Q415364) (← links)
- Corrigendum to: ``On the complexity of finding first-order critical points in constrained nonlinear optimization'' (Q507344) (← links)
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon ^{-3/2})\) for nonconvex optimization (Q517288) (← links)
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models (Q526842) (← links)
- On a global complexity bound of the Levenberg-Marquardt method (Q620432) (← links)
- Complexity bounds for second-order optimality in unconstrained optimization (Q657654) (← links)
- Updating the regularization parameter in the adaptive cubic regularization algorithm (Q694543) (← links)
- A regularized Newton method without line search for unconstrained optimization (Q742310) (← links)
- Worst case complexity of direct search (Q743632) (← links)
- Inductive manifold learning using structured support vector machine (Q898307) (← links)
- Co-design of linear systems using generalized Benders decomposition (Q1640257) (← links)
- Global complexity bound of the inexact Levenberg-Marquardt method (Q1656193) (← links)
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization (Q1675558) (← links)
- Conditional gradient type methods for composite nonlinear and stochastic optimization (Q1717236) (← links)
- Improved optimization methods for image registration problems (Q1717565) (← links)
- Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis (Q1734769) (← links)
- Sub-sampled Newton methods (Q1739039) (← links)
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization (Q1785005) (← links)
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization (Q2001208) (← links)
- A Newton-like trust region method for large-scale unconstrained nonconvex minimization (Q2015579) (← links)
- Regional complexity analysis of algorithms for nonconvex smooth optimization (Q2020615) (← links)
- A generalized worst-case complexity analysis for non-monotone line searches (Q2028039) (← links)
- Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives (Q2052165) (← links)
- On large-scale unconstrained optimization and arbitrary regularization (Q2070329) (← links)
- A cubic regularization of Newton's method with finite difference Hessian approximations (Q2138398) (← links)
- An efficient adaptive accelerated inexact proximal point method for solving linearly constrained nonconvex composite problems (Q2181594) (← links)
- On the use of third-order models with fourth-order regularization for unconstrained optimization (Q2182770) (← links)
- Newton-type methods for non-convex optimization under inexact Hessian information (Q2205970) (← links)
- Lower bounds for finding stationary points I (Q2205972) (← links)
- Lower bounds for finding stationary points II: first-order methods (Q2220663) (← links)
- A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization (Q2297654) (← links)
- Large-scale unconstrained optimization using separable cubic modeling and matrix-free subspace minimization (Q2301133) (← links)
- Generalized uniformly optimal methods for nonlinear programming (Q2316202) (← links)
- Oracle complexity of second-order methods for smooth convex optimization (Q2330652) (← links)
- Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems (Q2419539) (← links)
- A Newton-like method with mixed factorizations and cubic regularization for unconstrained minimization (Q2419564) (← links)
- On the complexity of finding first-order critical points in constrained nonlinear optimization (Q2452373) (← links)
- On the use of iterative methods in cubic regularization for unconstrained optimization (Q2515064) (← links)
- The exact worst-case convergence rate of the gradient method with fixed step lengths for \(L\)-smooth functions (Q2673524) (← links)
- Evaluation Complexity for Nonlinear Constrained Optimization Using Unscaled KKT Conditions and High-Order Models (Q2802144) (← links)
- On the worst-case complexity of nonlinear stepsize control algorithms for convex unconstrained optimization (Q2815548) (← links)
- Trust-Region Methods Without Using Derivatives: Worst Case Complexity and the NonSmooth Case (Q2826817) (← links)
- A cubic regularization algorithm for unconstrained optimization using line search and nonmonotone techniques (Q2829572) (← links)
- Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization (Q2885470) (← links)
- Complexity of the Newton method for set-valued maps (Q2926078) (← links)
- Fault Detection Based On Online Probability Density Function Estimation (Q2960134) (← links)
- On High-order Model Regularization for Constrained Optimization (Q4602340) (← links)
- Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy (Q4629334) (← links)
- Complexity of Partially Separable Convexly Constrained Optimization with Non-Lipschitzian Singularities (Q4634141) (← links)