Neural networks for computing eigenvalues and eigenvectors (Q1202354)

From MaRDI portal
 
Cites works:

    Unconstrained Variational Principles for Eigenvalues of Real Symmetric Matrices
    Neural networks for solving systems of linear equations and related problems
    Neural networks for computing eigenvalues and eigenvectors
    Systolic designs for eigenvalue-eigenvector computations using matrix powers
    Q4226179
    Neurons with graded response have collective computational properties like those of two-state neurons
    On stochastic approximation of the eigenvectors and eigenvalues of the expectation of a random matrix
    A neural network for computing eigenvectors and eigenvalues
    A survey of conjugate gradient algorithms for solution of extreme eigen-problems of a symmetric matrix

Latest revision as of 13:19, 17 May 2024

scientific article

Language: English
Label: Neural networks for computing eigenvalues and eigenvectors
Description: scientific article

    Statements

    Neural networks for computing eigenvalues and eigenvectors (English)
    23 February 1993
    The authors consider the problem of computing an eigendecomposition of a square matrix. They formulate it as a constrained optimization problem and construct a penalty function to be minimized. The resulting unconstrained optimization problem is solved by designing neural networks and applying a back-propagation learning scheme, which in numerical-optimization parlance amounts to a steepest-descent algorithm. Results of numerical simulations on some small test problems are presented.
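The penalty-function idea sketched in the review can be illustrated with a minimal NumPy example. This is a hypothetical sketch, not the authors' exact network or penalty term: for a symmetric matrix A it minimizes the assumed penalty function E(x) = -x^T A x + mu * (x^T x - 1)^2 by plain steepest descent, which drives x toward the eigenvector of the largest eigenvalue while the penalty keeps its norm near 1.

```python
import numpy as np

def penalty_eig(A, mu=10.0, lr=0.01, steps=5000, seed=0):
    """Approximate the largest eigenpair of a symmetric matrix A by
    steepest descent on the penalty function
        E(x) = -x^T A x + mu * (x^T x - 1)^2.
    A sketch of the general penalty-function approach only; the paper's
    networks and penalty term may differ."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    for _ in range(steps):
        # Gradient of E(x): -2 A x + 4 mu (x^T x - 1) x
        grad = -2.0 * A @ x + 4.0 * mu * (x @ x - 1.0) * x
        x -= lr * grad          # steepest-descent update
    x /= np.linalg.norm(x)      # normalize the converged vector
    lam = x @ A @ x             # Rayleigh quotient gives the eigenvalue
    return lam, x

# Small symmetric test matrix, in the spirit of the paper's small test problems
A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, v = penalty_eig(A)
```

At a critical point the gradient condition Ax = 2 mu (x^T x - 1) x forces x to be an eigenvector, and among the eigenvectors the penalty function is lowest at the one with the largest eigenvalue, so a generic random start converges there.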
    Keywords: eigenvalues; eigenvectors; eigendecomposition; unconstrained optimization; neural networks; steepest descent algorithm; test problems

    Identifiers