First-Order Methods for Nonconvex Quadratic Minimization
DOI: 10.1137/20M1321759 · zbMath: 1459.65082 · arXiv: 2003.04546 · OpenAlex: W3105314490 · MaRDI QID: Q5113167
Publication date: 3 June 2020
Published in: SIAM Review
Full work available at URL: https://arxiv.org/abs/2003.04546
Keywords: global optimization; Newton's method; Krylov subspace methods; gradient descent; trust-region methods; cubic regularization; nonasymptotic convergence; nonconvex quadratics
MSC classification: Numerical mathematical programming methods (65K05); Large-scale problems in mathematical programming (90C06); Nonconvex programming, global optimization (90C26); Nonlinear programming (90C30); Quadratic programming (90C20)
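As a rough illustration of the problem class named in the keywords above, the sketch below runs plain gradient descent on a cubic-regularized nonconvex quadratic f(x) = (1/2) x'Ax + b'x + (rho/3)||x||^3 with a symmetric indefinite A. This is a minimal, hedged example, not the paper's algorithm or pseudocode; the dimension, regularization weight rho, step size, and iteration budget are arbitrary illustrative choices.

```python
# Minimal sketch (not the paper's algorithm): gradient descent on a
# cubic-regularized nonconvex quadratic
#     f(x) = 1/2 x'Ax + b'x + (rho/3) * ||x||^3,
# where A is symmetric and generally indefinite.
# All parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 50
Q = rng.standard_normal((n, n))
A = (Q + Q.T) / (2 * np.sqrt(n))   # symmetric; eigenvalues roughly in [-2, 2]
b = rng.standard_normal(n) / np.sqrt(n)
rho = 1.0                          # cubic regularization weight (assumed)

def grad(x):
    # gradient of f: A x + b + rho * ||x|| * x
    return A @ x + b + rho * np.linalg.norm(x) * x

x = np.zeros(n)                    # the usual starting point for this subproblem
eta = 0.05                         # fixed step size (illustrative)
for _ in range(2000):
    x -= eta * grad(x)

print("final gradient norm:", np.linalg.norm(grad(x)))
```

Convergence of exactly this kind of iteration on the cubic-regularized subproblem is the subject of the cited work "Gradient Descent Finds the Cubic-Regularized Nonconvex Newton Step"; the sketch is only meant to make the objective concrete.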
Cites Work
- A linear-time algorithm for trust region problems
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- On parallel complexity of nonsmooth convex optimization
- Introductory lectures on convex optimization. A basic course.
- A geometric analysis of phase retrieval
- A fast divide-and-conquer algorithm for computing the spectra of real symmetric tridiagonal matrices
- Implicit regularization in nonconvex statistical estimation: gradient descent converges linearly for phase retrieval, matrix completion, and blind deconvolution
- Newton-type methods for non-convex optimization under inexact Hessian information
- A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization
- Cubic regularization of Newton method and its global performance
- On the use of iterative methods in cubic regularization for unconstrained optimization
- Fast Curvature Matrix-Vector Products for Second-Order Gradient Descent
- The Conjugate Gradient Method and Trust Regions in Large Scale Optimization
- Some NP-complete problems in quadratic and nonlinear programming
- Estimating the Largest Eigenvalue by the Power and Lanczos Algorithms with a Random Start
- Local Minimizers of Quadratic Functions on Euclidean Balls and Spheres
- A D.C. Optimization Algorithm for Solving the Trust-Region Subproblem
- Trust Region Methods
- Gradient Descent Learns Linear Dynamical Systems
- Globally Solving the Trust Region Subproblem Using Simple First-Order Methods
- Accelerated Methods for Nonconvex Optimization
- Complexity Analysis of Second-Order Line-Search Algorithms for Smooth Nonconvex Optimization
- Optimization Methods for Large-Scale Machine Learning
- Solving the Trust-Region Subproblem using the Lanczos Method
- Finding approximate local minima faster than gradient descent
- Tight query complexity lower bounds for PCA via finite sample deformed Wigner law
- Gradient Descent Finds the Cubic-Regularized Nonconvex Newton Step
- A Second-Order Cone Based Approach for Solving the Trust-Region Subproblem and Its Variants
- On the Generalized Lanczos Trust-Region Method
- Affine conjugate adaptive Newton methods for nonlinear elastomechanics
- Methods of conjugate gradients for solving linear systems