Newton-type methods for non-convex optimization under inexact Hessian information
Publication:2205970
DOI: 10.1007/s10107-019-01405-z · zbMath: 1451.90134 · arXiv: 1708.07164 · OpenAlex: W2963307318 · Wikidata: Q127827289 · Scholia: Q127827289 · MaRDI QID: Q2205970
Publication date: 21 October 2020
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1708.07164
non-convex optimization · trust region · cubic regularization · randomized numerical linear algebra · inexact Hessian
Numerical mathematical programming methods (65K05) Large-scale problems in mathematical programming (90C06) Nonconvex programming, global optimization (90C26) Methods of quasi-Newton type (90C53)
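For orientation only, a minimal sketch of the setting the keywords describe: a Newton-type step that uses a sub-sampled (inexact) Hessian of a finite-sum objective inside a cubic-regularized sub-problem. This is not the paper's algorithm; the toy loss, the sampling scheme, and the names `sub_sampled_hessian` and `cubic_step` are illustrative assumptions.

```python
# Illustrative sketch (assumptions, not the paper's method): one cubic-regularized
# Newton-type step with a Hessian estimated from a random sub-sample of the data.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def full_gradient(x):
    # Gradient of the non-convex toy loss f(x) = (1/n) * sum_i log(1 + (a_i^T x - b_i)^2).
    r = A @ x - b
    return A.T @ (2 * r / (1 + r**2)) / n

def sub_sampled_hessian(x, batch):
    # Inexact Hessian built from a random subset of the terms:
    # H_S = (1/|S|) * sum_{i in S} w_i a_i a_i^T with w_i = 2(1 - r_i^2)/(1 + r_i^2)^2.
    As = A[batch]
    rs = As @ x - b[batch]
    w = 2 * (1 - rs**2) / (1 + rs**2) ** 2
    return (As.T * w) @ As / len(batch)

def cubic_step(g, H, sigma, iters=300, lr=0.05):
    # Approximately minimize the cubic model
    #   m(s) = g^T s + 0.5 s^T H s + (sigma/3) ||s||^3
    # by plain gradient descent (a stand-in for a Krylov sub-problem solver).
    s = np.zeros_like(g)
    for _ in range(iters):
        s -= lr * (g + H @ s + sigma * np.linalg.norm(s) * s)
    return s

x = rng.standard_normal(d)
batch = rng.choice(n, size=100, replace=False)          # sub-sample the Hessian only
x = x + cubic_step(full_gradient(x), sub_sampled_hessian(x, batch), sigma=1.0)
```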
Related Items
- Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
- Unnamed Item
- Stochastic Trust-Region Methods with Trust-Region Radius Depending on Probabilistic Models
- Tensor Bernstein concentration inequalities with an application to sample estimators for high-order moments
- Cubic regularization methods with second-order complexity guarantee based on a new subproblem reformulation
- Convergence analysis of a subsampled Levenberg-Marquardt algorithm
- An overview of stochastic quasi-Newton methods for large-scale machine learning
- Inexact restoration with subsampled trust-region methods for finite-sum minimization
- Globally Convergent Multilevel Training of Deep Residual Networks
- An adaptive cubic regularization algorithm for computing H- and Z-eigenvalues of real even-order supersymmetric tensors
- Newton-MR: inexact Newton method with minimum residual sub-problem solver
- Faster Riemannian Newton-type optimization by subsampling and cubic regularization
- Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization
- First-Order Methods for Nonconvex Quadratic Minimization
- Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
- The impact of noise on evaluation complexity: the deterministic trust-region case
- Recent Theoretical Advances in Non-Convex Optimization
- Global Convergence of Policy Gradient Methods to (Almost) Locally Optimal Policies
- Convergence of Newton-MR under Inexact Hessian Information
- An algorithm for the minimization of nonsmooth nonconvex functions using inexact evaluations and its worst-case complexity
- A generalized worst-case complexity analysis for non-monotone line searches
- Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives
- An Inertial Newton Algorithm for Deep Learning
- Adaptive Regularization Algorithms with Inexact Evaluations for Nonconvex Optimization
- A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
- On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization
- Linesearch Newton-CG methods for convex optimization with noise
- Convergence Analysis of Inexact Randomized Iterative Methods
- Unnamed Item
Uses Software
Cites Work
- Unnamed Item
- Stochastic derivative-free optimization using a trust region framework
- A linear-time algorithm for trust region problems
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon ^{-3/2})\) for nonconvex optimization
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Complexity bounds for second-order optimality in unconstrained optimization
- User-friendly tail bounds for sums of random matrices
- On solving trust-region and other regularised subproblems in optimization
- Accelerating the cubic regularization of Newton's method on convex problems
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- Stochastic optimization using a trust-region method and random models
- Sub-sampled Newton methods
- Data completion and stochastic algorithms for PDE inversion problems with many measurements
- Cubic regularization of Newton method and its global performance
- On the use of iterative methods in cubic regularization for unconstrained optimization
- On the worst-case complexity of nonlinear stepsize control algorithms for convex unconstrained optimization
- Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
- Convergence of Trust-Region Methods Based on Probabilistic Models
- Computational Advertising: Techniques for Targeting Relevant Ads
- Stochastic Algorithms for Inverse Problems Involving PDEs and many Measurements
- On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems
- Randomized Algorithms for Matrices and Data
- Minimization of a Large-Scale Quadratic Function Subject to a Spherical Constraint
- Computing a Trust Region Step
- Assessing Stochastic Algorithms for Large Scale Nonlinear Least Squares Problems Using Extremal Probabilities of Linear Combinations of Gamma Random Variables
- A recursive \(\ell_\infty\)-trust-region method for bound-constrained nonlinear optimization
- Iterative Methods for Finding a Trust-region Step
- Recursive Trust-Region Methods for Multiscale Nonlinear Optimization
- The Conjugate Gradient Method and Trust Regions in Large Scale Optimization
- Newton’s Method with a Model Trust Region Modification
- Estimating the Largest Eigenvalue by the Power and Lanczos Algorithms with a Random Start
- Complexity and global rates of trust-region methods based on probabilistic models
- ASTRO-DF: A Class of Adaptive Sampling Trust-Region Algorithms for Derivative-Free Stochastic Optimization
- trlib: a vector-free implementation of the GLTR method for iterative solution of the trust region problem
- Optimization Methods for Large-Scale Machine Learning
- Solving the Trust-Region Subproblem using the Lanczos Method
- The Fitting of Power Series, Meaning Polynomials, Illustrated on Band-Spectroscopic Data
- A Characterization of Superlinear Convergence and Its Application to Quasi-Newton Methods
- Finding approximate local minima faster than gradient descent
- Global Convergence of General Derivative-Free Trust-Region Algorithms to First- and Second-Order Critical Points
- Gradient Descent Finds the Cubic-Regularized Nonconvex Newton Step
- Understanding Machine Learning
- An Introduction to Matrix Concentration Inequalities
- The elements of statistical learning. Data mining, inference, and prediction