scientific article; zbMATH DE number 7164734
zbMath: 1440.90058 · MaRDI QID: Q5214226
Quanquan Gu, Pan Xu, Dongruo Zhou
Publication date: 7 February 2020
Full work available at URL: http://jmlr.csail.mit.edu/papers/v20/19-055.html
Title: Stochastic Variance-Reduced Cubic Regularization Methods
Related Items
- Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy
- Faster Riemannian Newton-type optimization by subsampling and cubic regularization
- Recent Theoretical Advances in Non-Convex Optimization
Cites Work
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon^{-3/2})\) for nonconvex optimization
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Complexity bounds for second-order optimality in unconstrained optimization
- Trust-region and other regularisations of linear least-squares problems
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- Newton-type methods for non-convex optimization under inexact Hessian information
- Matrix concentration inequalities via the method of exchangeable pairs
- Cubic regularization of Newton method and its global performance
- On the Evaluation Complexity of Cubic Regularization Methods for Potentially Rank-Deficient Nonlinear Least-Squares Problems and Its Relevance to Constrained Nonlinear Optimization
- The masked sample covariance estimator: an analysis using matrix concentration inequalities
- The Expected Norm of a Sum of Independent Random Matrices: An Elementary Approach
- Trust Region Methods
- Accelerated Methods for NonConvex Optimization
- Robust linear regression: A review and comparison
- Complexity Analysis of Second-Order Line-Search Algorithms for Smooth Nonconvex Optimization
- Finding approximate local minima faster than gradient descent
- Gradient Descent Finds the Cubic-Regularized Nonconvex Newton Step
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Most Tensor Problems Are NP-Hard
- An Introduction to Matrix Concentration Inequalities
- A Linearly Convergent Variant of the Conditional Gradient Algorithm under Strong Convexity, with Applications to Online and Stochastic Optimization
- Probability