Recurrent neural networks for LU decomposition and Cholesky factorization
From MaRDI portal
Cites work
- A neural network for computing eigenvectors and eigenvalues
- A recurrent neural network for real-time matrix inversion
- Dynamical systems that sort lists, diagonalize matrices, and solve linear programming problems
- Fast linear system solution by neural networks
- Highly Parallel Sparse Cholesky Factorization
- Limiting Communication in Parallel Sparse Cholesky Factorization
- Massive memory buys little speed for complete, in-core sparse Cholesky factorizations on some scalar computers
- Neural network approach to computing matrix inversion
- Neural networks for solving systems of linear equations and related problems
- Neural networks for solving systems of linear equations. II. Minimax and least absolute value problems
- Recurrent neural networks for solving linear matrix equations
- Solving simultaneous linear equations using recurrent neural networks
- The accuracy of a parallel LU decomposition algorithm
- The average parallel complexity of Cholesky factorization
- Three-dimensional structured networks for matrix equation solving
- "Neural" computation of decisions in optimization problems
Cited in (6)
- Recurrent neural networks for synthesizing linear control systems via pole placement
- Analog approach for the eigen-decomposition of positive definite matrices
- Real-time computation of singular vectors
- A recurrent neural network for computing pseudoinverse matrices
- Real-time computation of the eigenvectors of a class of positive definite matrices
- Properties and computation of continuous-time solutions to linear systems
This page was built for publication: Recurrent neural networks for LU decomposition and Cholesky factorization
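For context on the publication's topic: a common way to realize matrix factorization with a recurrent neural network is as a gradient flow whose equilibrium is the desired factor. The sketch below is an illustration of that general idea, not the paper's specific network: it computes a Cholesky factor of a symmetric positive definite matrix by Euler-discretizing gradient descent on the energy E(L) = ||L·Lᵀ − A||²_F over lower-triangular L. The function name, step size, and iteration count are illustrative choices, not taken from the paper.

```python
import numpy as np

def cholesky_by_gradient_flow(A, steps=20000, dt=1e-3):
    """Hedged sketch of a recurrent-network-style Cholesky factorization.

    The state L follows discretized gradient dynamics on
    E(L) = ||L @ L.T - A||_F^2, constrained to lower-triangular L.
    At equilibrium, L @ L.T = A, so L is a Cholesky factor of A.
    """
    n = A.shape[0]
    mask = np.tril(np.ones((n, n)))   # projection keeping L lower triangular
    L = np.eye(n)                     # initial network state
    for _ in range(steps):
        R = L @ L.T - A               # factorization residual
        grad = 4.0 * (R @ L)          # dE/dL (R is symmetric here)
        L = L - dt * grad * mask      # one Euler step of the dynamics
    return L

# Example: a small symmetric positive definite matrix.
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])
L = cholesky_by_gradient_flow(A)
print(np.allclose(L @ L.T, A, atol=1e-6))  # True once the flow has converged
```

In continuous time these dynamics are dL/dt = −∇E(L) restricted to the lower-triangular entries; the discretization above is the simplest realization, and the step size must be small enough for stability.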
MaRDI item Q1324265