Cubic regularization of Newton method and its global performance


DOI: 10.1007/s10107-006-0706-8
zbMath: 1142.90500
OpenAlex: W2009941369
MaRDI QID: Q2494519

Yu. E. Nesterov, Boris T. Polyak

Publication date: 28 June 2006

Published in: Mathematical Programming. Series A. Series B

Full work available at URL: https://doi.org/10.1007/s10107-006-0706-8



Related Items

Local convergence of tensor methods
A cubic regularization of Newton's method with finite difference Hessian approximations
Optimization landscape of Tucker decomposition
On the optimization landscape of tensor decompositions
Cubic regularized Newton method for the saddle point models: a global and local convergence analysis
Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
Practical inexact proximal quasi-Newton method with global complexity analysis
A branch and bound algorithm for the global optimization of Hessian Lipschitz continuous functions
An active set trust-region method for bound-constrained optimization
A recursive multilevel trust region method with application to fully monolithic phase-field models of brittle fracture
Superfast second-order methods for unconstrained convex optimization
Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
Smoothness parameter of power of Euclidean norm
Improvements of the Newton-Raphson method
An efficient adaptive accelerated inexact proximal point method for solving linearly constrained nonconvex composite problems
Toward nonquadratic S-lemma: new theory and application in nonconvex optimization
On the use of third-order models with fourth-order regularization for unconstrained optimization
Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions
Cubic regularization methods with second-order complexity guarantee based on a new subproblem reformulation
Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems
A Newton-like method with mixed factorizations and cubic regularization for unconstrained minimization
On a global complexity bound of the Levenberg-Marquardt method
Riemannian stochastic variance-reduced cubic regularized Newton method for submanifold optimization
Reachability of optimal convergence rate estimates for high-order numerical convex optimization methods
Finding second-order stationary points in constrained minimization: a feasible direction approach
Gradient descent with random initialization: fast global convergence for nonconvex phase retrieval
Run-and-inspect method for nonconvex optimization and global optimality bounds for R-local minimizers
First-order methods almost always avoid strict saddle points
Global complexity bound analysis of the Levenberg-Marquardt method for nonsmooth equations and its application to the nonlinear complementarity problem
On the use of the energy norm in trust-region and adaptive cubic regularization subproblems
On the worst-case evaluation complexity of non-monotone line search algorithms
Conjugate direction methods and polarity for quadratic hypersurfaces
Nonlinear acceleration of momentum and primal-dual algorithms
A trust region algorithm with adaptive cubic regularization methods for nonsmooth convex minimization
Fine tuning Nesterov's steepest descent algorithm for differentiable convex programming
Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
Newton-type methods for non-convex optimization under inexact Hessian information
Lower bounds for finding stationary points I
Interior-point methods for nonconvex nonlinear programming: cubic regularization
Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
Lower bounds for finding stationary points II: first-order methods
Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria
Complexity bounds for second-order optimality in unconstrained optimization
On the complexity of finding first-order critical points in constrained nonlinear optimization
A line-search algorithm inspired by the adaptive cubic regularization framework and complexity analysis
Implementable tensor methods in unconstrained convex optimization
On the behavior of Lagrange multipliers in convex and nonconvex infeasible interior point methods
Alternating minimization methods for strongly convex optimization
Sub-sampled Newton methods
Cubic regularization in symmetric rank-1 quasi-Newton methods
Newton's iterates can converge to non-stationary points
Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality
The global optimization geometry of shallow linear neural networks
A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon^{-3/2})\) for nonconvex optimization
Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
Updating the regularization parameter in the adaptive cubic regularization algorithm
Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
On solving trust-region and other regularised subproblems in optimization
A Newton-like trust region method for large-scale unconstrained nonconvex minimization
Regional complexity analysis of algorithms for nonconvex smooth optimization
Worst-case complexity bounds of directional direct-search methods for multiobjective optimization
Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
A geometric analysis of phase retrieval
Complexity bounds for primal-dual methods minimizing the model of objective function
A generalized worst-case complexity analysis for non-monotone line searches
Decentralized optimization over tree graphs
Minimizing uniformly convex functions by cubic regularization of Newton method
Accelerating the cubic regularization of Newton's method on convex problems
Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization
On the use of iterative methods in cubic regularization for unconstrained optimization
Adaptive regularization with cubics on manifolds
A modular analysis of adaptive (non-)convex optimization: optimism, composite objectives, variance reduction, and variational bounds
New subspace minimization conjugate gradient methods based on regularization model for unconstrained optimization
A note on inexact gradient and Hessian conditions for cubic regularized Newton's method
An accelerated first-order method with complexity analysis for solving cubic regularization subproblems
A regularized Newton method without line search for unconstrained optimization
A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization
Worst case complexity of direct search
Large-scale unconstrained optimization using separable cubic modeling and matrix-free subspace minimization
Separable cubic modeling and a trust-region strategy for unconstrained minimization with impact in global optimization
Trust-region and other regularisations of linear least-squares problems
Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives
Regularized Newton method for unconstrained convex optimization
Combining stochastic adaptive cubic regularization with negative curvature for nonconvex optimization
A two-step improved Newton method to solve convex unconstrained optimization problems
On large-scale unconstrained optimization and arbitrary regularization
An adaptive high order method for finding third-order critical points of nonconvex optimization
On global minimizers of quadratic functions with cubic regularization
Sample average approximation with sparsity-inducing penalty for high-dimensional stochastic programming
Generalized self-concordant functions: a recipe for Newton-type methods
Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary
Oracle complexity of second-order methods for smooth convex optimization
On complexity and convergence of high-order coordinate descent algorithms for smooth nonconvex box-constrained minimization
On local nonglobal minimum of trust-region subproblem and extension
Algebraic rules for quadratic regularization of Newton's method
Recent advances in trust region algorithms
\(\rho\)-regularization subproblems: strong duality and an eigensolver-based algorithm
A hybrid stochastic optimization framework for composite nonconvex optimization
An interior affine scaling cubic regularization algorithm for derivative-free optimization subject to bound constraints
An improvement of adaptive cubic regularization method for unconstrained optimization problems
Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy
Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
Finding zeros of Hölder metrically subregular mappings via globally convergent Levenberg–Marquardt methods
On high-order model regularization for multiobjective optimization
On the complexity of solving feasibility problems with regularized models
Tensor methods for finding approximate stationary points of convex functions
Sharp Restricted Isometry Bounds for the Inexistence of Spurious Local Minima in Nonconvex Matrix Recovery
Convergence guarantees for a class of non-convex and non-smooth optimization problems
Inexact basic tensor methods for some classes of convex optimization problems
A Hybrid Proximal Extragradient Self-Concordant Primal Barrier Method for Monotone Variational Inequalities
A note about the complexity of minimizing Nesterov's smooth Chebyshev–Rosenbrock function
High-Dimensional Learning Under Approximate Sparsity with Applications to Nonsmooth Estimation and Regularized Neural Networks
Accelerated Optimization in the PDE Framework: Formulations for the Manifold of Diffeomorphisms
Optimisation and asymptotic stability
Cubic overestimation and secant updating for unconstrained optimization of \(C^{2,1}\) functions
Complexity of the Newton method for set-valued maps
A Metric Space for Point Process Excitations
A New Homotopy Proximal Variable-Metric Framework for Composite Convex Minimization
Accelerated Methods for NonConvex Optimization
A note on the worst-case complexity of nonlinear stepsize control methods for convex smooth unconstrained optimization
Sketched Newton-Raphson
Escaping Strict Saddle Points of the Moreau Envelope in Nonsmooth Optimization
First-Order Methods for Nonconvex Quadratic Minimization
On High-order Model Regularization for Constrained Optimization
OFFO minimization algorithms for second-order optimality and their complexity
Majorization-minimization-based Levenberg-Marquardt method for constrained nonlinear least squares
A sequential adaptive regularisation using cubics algorithm for solving nonlinear equality constrained optimization
A Trust Region Method for Finding Second-Order Stationarity in Linearly Constrained Nonconvex Optimization
A Riemannian Trust Region Method for the Canonical Tensor Rank Approximation Problem
Relaxing Kink Qualifications and Proving Convergence Rates in Piecewise Smooth Optimization
A Newton-Based Method for Nonconvex Optimization with Fast Evasion of Saddle Points
A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization
Second-Order Guarantees of Distributed Gradient Algorithms
Global Convergence of Policy Gradient Methods to (Almost) Locally Optimal Policies
Contracting Proximal Methods for Smooth Convex Optimization
Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy
Accelerated Optimization in the PDE Framework: Formulations for the Active Contour Case
Complexity of Partially Separable Convexly Constrained Optimization with Non-Lipschitzian Singularities
On the Quadratic Convergence of the Cubic Regularization Method under a Local Error Bound Condition
The Kurdyka–Łojasiewicz Inequality as Regularity Condition
On High-Order Multilevel Optimization Strategies
Solving Large-Scale Cubic Regularization by a Generalized Eigenvalue Problem
Convergence of Newton-MR under Inexact Hessian Information
The solution of Euclidean norm trust region SQP subproblems via second-order cone programs: an overview and elementary introduction
ARCq: a new adaptive regularization by cubics
A proximal-Newton method for unconstrained convex optimization in Hilbert spaces
Smoothing quadratic regularization method for hemivariational inequalities
On Regularization and Active-set Methods with Complexity for Constrained Optimization
Complexity Analysis of Second-Order Line-Search Algorithms for Smooth Nonconvex Optimization
Complexity Analysis of a Trust Funnel Algorithm for Equality Constrained Optimization
Accelerated Regularized Newton Methods for Minimizing Composite Convex Functions
Characterizations of Łojasiewicz inequalities: Subgradient flows, talweg, convexity
Policy Optimization for \(\mathcal{H}_2\) Linear Control with \(\mathcal{H}_\infty\) Robustness Guarantee: Implicit Regularization and Global Convergence
Regularized Newton Methods for Minimizing Functions with Hölder Continuous Hessians
Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization
Modified Gauss–Newton scheme with worst case guarantees for global performance
Evaluation Complexity for Nonlinear Constrained Optimization Using Unscaled KKT Conditions and High-Order Models
Primal–dual exterior point method for convex optimization
On the Complexity of an Inexact Restoration Method for Constrained Optimization
A concise second-order complexity analysis for unconstrained optimization using high-order regularized models
ADMM for multiaffine constrained optimization
Error estimates for iterative algorithms for minimizing regularized quadratic subproblems
Near-Optimal Hyperfast Second-Order Method for Convex Optimization
Sharp Worst-Case Evaluation Complexity Bounds for Arbitrary-Order Nonconvex Optimization with Inexpensive Constraints
On the worst-case complexity of nonlinear stepsize control algorithms for convex unconstrained optimization
Tensor Methods for Minimizing Convex Functions with Hölder Continuous Higher-Order Derivatives
Trust-Region Methods Without Using Derivatives: Worst Case Complexity and the NonSmooth Case
Evolvability of Real Functions
A cubic regularization algorithm for unconstrained optimization using line search and nonmonotone techniques
Structured Quasi-Newton Methods for Optimization with Orthogonality Constraints
Iteration-complexity of a Rockafellar's proximal method of multipliers for convex programming based on second-order approximations
Gradient Descent Finds the Cubic-Regularized Nonconvex Newton Step
Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact
Gradient Descent Only Converges to Minimizers: Non-Isolated Critical Points and Invariant Regions
An adaptive nonmonotone trust region method based on a modified scalar approximation of the Hessian in the successive quadratic subproblems
Adaptive Regularization Algorithms with Inexact Evaluations for Nonconvex Optimization
Ghost Penalties in Nonconvex Constrained Optimization: Diminishing Stepsizes and Iteration Complexity
Worst-case evaluation complexity of non-monotone gradient-related algorithms for unconstrained optimization
Unified Acceleration of High-Order Algorithms under General Hölder Continuity
Trust-Region Newton-CG with Strong Second-Order Complexity Guarantees for Nonconvex Optimization
On the Evaluation Complexity of Constrained Nonlinear Least-Squares and General Constrained Nonlinear Optimization Using Second-Order Methods
Error Bounds and Singularity Degree in Semidefinite Programming
Linear-Time Convexity Test for Low-Order Piecewise Polynomials
On inexact solution of auxiliary problems in tensor methods for convex optimization
Inexact SARAH algorithm for stochastic optimization
Inexact High-Order Proximal-Point Methods with Auxiliary Search Procedure
The Use of Quadratic Regularization with a Cubic Descent Condition for Unconstrained Optimization
Extending the Step-Size Restriction for Gradient Descent to Avoid Strict Saddle Points
Lower bounds for non-convex stochastic optimization
Non-asymptotic superlinear convergence of standard quasi-Newton methods
Worst-Case Complexity of TRACE with Inexact Subproblem Solutions for Nonconvex Smooth Optimization
An accelerated first-order method for non-convex optimization on manifolds
Stopping rules for gradient methods for non-convex problems with additive noise in gradient
Complexity analysis of interior-point methods for second-order stationary points of nonlinear semidefinite optimization problems
SCORE: approximating curvature information under self-concordant regularization
Two modified adaptive cubic regularization algorithms by using the nonmonotone Armijo-type line search
An adaptive regularization method in Banach spaces
A fast and simple modification of Newton's method avoiding saddle points
Convergence properties of Levenberg-Marquardt methods with generalized regularization terms
A modified PRP-type conjugate gradient algorithm with complexity analysis and its application to image restoration problems
Decentralized nonconvex optimization with guaranteed privacy and accuracy
A filter sequential adaptive cubic regularization algorithm for nonlinear constrained optimization
An adaptive cubic regularization algorithm for computing H- and Z-eigenvalues of real even-order supersymmetric tensors
A Newton-CG Based Barrier Method for Finding a Second-Order Stationary Point of Nonconvex Conic Optimization with Complexity Guarantees
Newton-MR: inexact Newton method with minimum residual sub-problem solver
A nonlinear conjugate gradient method with complexity guarantees and its application to nonconvex regression
Regularized Newton Method with Global \(\mathcal{O}(1/k^2)\) Convergence
Convergence Properties of an Objective-Function-Free Optimization Regularization Algorithm, Including an \(\mathcal{O}(\epsilon^{-3/2})\) Complexity Bound
A Newton-CG Based Augmented Lagrangian Method for Finding a Second-Order Stationary Point of Nonconvex Equality Constrained Optimization with Complexity Guarantees
Radial duality. II: Applications and algorithms
Accelerated methods for weakly-quasi-convex optimization problems
Super-Universal Regularized Newton Method
Combined Newton-gradient method for constrained root-finding in chemical reaction networks
Efficiency of higher-order algorithms for minimizing composite functions
Smooth monotone stochastic variational inequalities and saddle point problems: a survey
Adaptive Third-Order Methods for Composite Convex Optimization
Random Coordinate Descent Methods for Nonseparable Composite Optimization
Worst-case evaluation complexity of a derivative-free quadratic regularization method
Approximating Higher-Order Derivative Tensors Using Secant Updates
The evaluation complexity of finding high-order minimizers of nonconvex optimization
Gradient regularization of Newton method with Bregman distances
Recent Theoretical Advances in Non-Convex Optimization
Worst case complexity of direct search under convexity
Solving the Cubic Regularization Model by a Nested Restarting Lanczos Method
Inexact model: a framework for optimization and variational inequalities
Higher-Order Methods for Convex-Concave Min-Max Optimization and Monotone Variational Inequalities
High-Order Optimization Methods for Fully Composite Problems



Cites Work