On High-order Model Regularization for Constrained Optimization
Publication: 4602340
DOI: 10.1137/17M1115472 · zbMath: 1387.90200 · OpenAlex: W2774800844 · MaRDI QID: Q4602340
Publication date: 10 January 2018
Published in: SIAM Journal on Optimization
Full work available at URL: https://doi.org/10.1137/17m1115472
Related Items
- On high-order model regularization for multiobjective optimization
- On the complexity of solving feasibility problems with regularized models
- Tensor methods for finding approximate stationary points of convex functions
- Inexact basic tensor methods for some classes of convex optimization problems
- Diminishing stepsize methods for nonconvex composite problems via ghost penalties: from the general to the convex regular constrained case
- A Newton-like method with mixed factorizations and cubic regularization for unconstrained minimization
- An adaptive regularization method in Banach spaces
- Worst-case evaluation complexity of a quadratic penalty method for nonconvex optimization
- A sequential adaptive regularisation using cubics algorithm for solving nonlinear equality constrained optimization
- A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization
- On constrained optimization with nonconvex regularization
- Complexity of Partially Separable Convexly Constrained Optimization with Non-Lipschitzian Singularities
- On Regularization and Active-set Methods with Complexity for Constrained Optimization
- Complexity Analysis of a Trust Funnel Algorithm for Equality Constrained Optimization
- Accelerated Regularized Newton Methods for Minimizing Composite Convex Functions
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- On the Complexity of an Inexact Restoration Method for Constrained Optimization
- Sharp Worst-Case Evaluation Complexity Bounds for Arbitrary-Order Nonconvex Optimization with Inexpensive Constraints
- Tensor Methods for Minimizing Convex Functions with Hölder Continuous Higher-Order Derivatives
- On large-scale unconstrained optimization and arbitrary regularization
- Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact
- An adaptive high order method for finding third-order critical points of nonconvex optimization
- Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary
- Ghost Penalties in Nonconvex Constrained Optimization: Diminishing Stepsizes and Iteration Complexity
- A control-theoretic perspective on optimal high-order optimization
- On complexity and convergence of high-order coordinate descent algorithms for smooth nonconvex box-constrained minimization
- On inexact solution of auxiliary problems in tensor methods for convex optimization
- An Optimal High-Order Tensor Method for Convex Optimization
Cites Work
- Nonmonotone algorithm for minimization on closed sets with applications to minimization on Stiefel manifolds
- On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon ^{-3/2})\) for nonconvex optimization
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Separable cubic modeling and a trust-region strategy for unconstrained minimization with impact in global optimization
- Density-based globally convergent trust-region methods for self-consistent field electronic structure calculations
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- Cubic regularization of Newton method and its global performance
- A unified framework for some inexact proximal point algorithms
- Evaluation Complexity for Nonlinear Constrained Optimization Using Unscaled KKT Conditions and High-Order Models
- On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems
- A New Sequential Optimality Condition for Constrained Optimization and Algorithmic Consequences
- Introduction to Derivative-Free Optimization
- Tensor Methods for Unconstrained Optimization Using Second Derivatives
- Monotone Operators and the Proximal Point Algorithm
- Augmented Lagrangians and Applications of the Proximal Point Algorithm in Convex Programming
- Strict convex regularizations, proximal points and augmented Lagrangians
- The Use of Quadratic Regularization with a Cubic Descent Condition for Unconstrained Optimization
- Affine conjugate adaptive Newton methods for nonlinear elastomechanics
- Practical Augmented Lagrangian Methods for Constrained Optimization
- On sequential optimality conditions for smooth constrained optimization