Global convergence of a PCA learning algorithm with a constant learning rate
From MaRDI portal
Publication:2469892
Recommendations
- Global convergence of Oja's PCA learning algorithm with a non-zero-approaching adaptive learning rate
- Convergence analysis of Chauvin's PCA learning algorithm with a constant learning rate
- A robust and globally convergent PCA learning algorithm
- Stability and Convergence of Principal Component Learning Algorithms
- Convergence rates of learning algorithms by random projection
- Convergence Analysis of a Class of Hyvärinen–Oja's ICA Learning Algorithms With Constant Learning Rates
- scientific article; zbMATH DE number 1780125
- Convergence of algorithms used for principal component analysis
Cites work
- A simplified neuron model as a principal component analyzer
- Adaptive algorithms for first principal eigenvector computation
- Analysis of recursive stochastic algorithms
- Generalized neural networks for spectral analysis: dynamics and Liapunov functions
- On stochastic approximation of the eigenvectors and eigenvalues of the expectation of a random matrix
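The cited Oja neuron model ("A simplified neuron model as a principal component analyzer") underlies the constant-learning-rate PCA algorithm studied in this publication. A minimal sketch of Oja's single-unit rule with a constant learning rate η follows; the synthetic data, the value of η, and the sample count are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data whose covariance has one dominant direction
# (an assumed test setup, not from the publication).
C = np.array([[3.0, 1.0], [1.0, 1.0]])
L = np.linalg.cholesky(C)
X = rng.standard_normal((5000, 2)) @ L.T

eta = 0.01                      # constant learning rate
w = rng.standard_normal(2)
w /= np.linalg.norm(w)          # start from a unit vector

for x in X:
    y = x @ w
    # Oja's single-neuron PCA rule: w <- w + eta * y * (x - y * w)
    w = w + eta * y * (x - y * w)

# Compare against the principal eigenvector of the sample covariance.
eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
v = eigvecs[:, -1]              # eigh returns eigenvalues in ascending order
alignment = abs(w @ v) / np.linalg.norm(w)
```

Under mild conditions on the data and a sufficiently small constant η, `w` stays near unit length and `alignment` approaches 1, which is the global-convergence behavior analyzed in the works listed above.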
Cited in (6)
- Convergence analysis for principal component flows
- Convergence analysis of Chauvin's PCA learning algorithm with a constant learning rate
- A robust and globally convergent PCA learning algorithm
- Stability and Convergence of Principal Component Learning Algorithms
- Convergence analysis of deterministic discrete time system of a unified self-stabilizing algorithm for PCA and MCA
- Global convergence of Oja's PCA learning algorithm with a non-zero-approaching adaptive learning rate