Towards explicit superlinear convergence rate for SR1
From MaRDI portal
Publication:6038671
Abstract: We study the convergence rate of the well-known Symmetric Rank-1 (SR1) algorithm, which has wide applications in different scenarios. Although it has been extensively investigated, SR1 still lacks a non-asymptotic superlinear rate, in contrast with other quasi-Newton methods such as DFP and BFGS. In this paper we address this problem. Inspired by recent work on explicit convergence analysis of quasi-Newton methods, we obtain the first explicit non-asymptotic rates of superlinear convergence for the vanilla SR1 method equipped with a correction strategy that ensures numerical stability. Specifically, for general smooth strongly convex functions, the vanilla SR1 with the correction strategy achieves superlinear rates expressed in terms of the iteration counter, the condition number of the objective function, and the dimension of the problem. For quadratic functions, the vanilla SR1 algorithm finds the optimum of the objective in a number of steps bounded in terms of the problem dimension.
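The abstract discusses the vanilla SR1 update with a stabilizing correction. A minimal sketch of the classical SR1 quasi-Newton iteration is below; the skipping rule used here is the standard safeguard against a near-zero update denominator (as in textbook treatments of SR1), and may differ in detail from the specific correction strategy analyzed in the paper. The function names and the threshold `r` are illustrative choices, not taken from the paper.

```python
import numpy as np

def sr1_minimize(grad, x0, B0, max_iter=100, tol=1e-10, r=1e-8):
    """Minimize a smooth convex function with the SR1 quasi-Newton method.

    grad : callable returning the gradient at x.
    B0   : initial symmetric positive-definite Hessian approximation.
    r    : skipping threshold -- a standard numerical-stability safeguard;
           the paper's correction strategy may differ in detail.
    """
    x, B = x0.astype(float), B0.astype(float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        s = -np.linalg.solve(B, g)          # quasi-Newton step
        x_new = x + s
        y = grad(x_new) - g                 # gradient difference
        v = y - B @ s                       # SR1 residual
        denom = v @ s
        # Skip the update when the denominator is too small (stability).
        if abs(denom) > r * np.linalg.norm(s) * np.linalg.norm(v):
            B = B + np.outer(v, v) / denom  # symmetric rank-one update
        x = x_new
    return x

# On a quadratic f(x) = 0.5 x^T A x - b^T x, SR1 recovers the exact Hessian
# after finitely many updates and then takes an exact Newton step.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = sr1_minimize(lambda x: A @ x - b, np.zeros(2), np.eye(2))
```

On this two-dimensional quadratic the returned `x_star` matches the exact minimizer `np.linalg.solve(A, b)`, illustrating the finite-step behavior on quadratics mentioned in the abstract.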
Recommendations
- Rates of superlinear convergence for classical quasi-Newton methods
- Convergence of Symmetric Rank-One method based on Modified Quasi-Newton equation
- A Theoretical and Experimental Study of the Symmetric Rank-One Update
- Non-asymptotic superlinear convergence of standard quasi-Newton methods
- New results on superlinear convergence of classical quasi-Newton methods
Cites work
- scientific article; zbMATH DE number 2107836 (no title available)
- scientific article; zbMATH DE number 5060482 (no title available)
- A Family of Variable-Metric Methods Derived by Variational Means
- A Tool for the Analysis of Quasi-Newton Methods with Application to Unconstrained Minimization
- A new approach to variable metric algorithms
- A stochastic quasi-Newton method for large-scale optimization
- A stochastic quasi-Newton method for simulation response optimization
- Analysis of a Symmetric Rank-One Trust Region Method
- Conditioning of Quasi-Newton Methods for Function Minimization
- Convergence of quasi-Newton matrices generated by the symmetric rank one update
- Global Convergence of a Class of Quasi-Newton Methods on Convex Problems
- Greedy quasi-Newton methods with explicit superlinear convergence
- New results on superlinear convergence of classical quasi-Newton methods
- On the Behavior of Broyden’s Class of Quasi-Newton Methods
- On the Convergence of the Variable Metric Algorithm
- On the Local and Superlinear Convergence of Quasi-Newton Methods
- Quasi Newton techniques generate identical points II: The proofs of four new theorems
- Quasi-Newton methods for machine learning: forget the past, just sample
- Quasi-Newton methods for solving multiobjective optimization
- Quasi-newton algorithms generate identical points
- Randomized quasi-Newton updates are linearly convergent matrix inversion algorithms
- Rates of superlinear convergence for classical quasi-Newton methods
- The Convergence of a Class of Double-rank Minimization Algorithms
- The Convergence of a Class of Double-rank Minimization Algorithms 1. General Considerations
- The superlinear convergence of a modified BFGS-type method for unconstrained optimization
Cited in (3)