Auto-association by multilayer perceptrons and singular value decomposition (Q1106762)

From MaRDI portal
Author: Hervé Bourlard
MaRDI profile type: MaRDI publication profile
Wikidata QID: Q28292671
Cites work: Q4103002
Cites work: Updating the singular value decomposition
Cites work: Least squares, singular values and matrix approximations
Cites work: Q5185900
Cites work: Q4057472
Full work available at URL: https://doi.org/10.1007/bf00332918
OpenAlex ID: W2017257315

Language: English
Label: Auto-association by multilayer perceptrons and singular value decomposition
Description: scientific article

    Statements

    Auto-association by multilayer perceptrons and singular value decomposition (English)
    Publication year: 1988
    The multilayer perceptron, when working in auto-association mode, is sometimes considered an interesting candidate for performing data compression or dimensionality reduction of the feature space in information processing applications. The present paper shows that, for auto-association, the nonlinearities of the hidden units are useless and that the optimal parameter values can be derived directly by purely linear techniques relying on singular value decomposition and low-rank matrix approximation, similar in spirit to the well-known Karhunen-Loève transform. This approach thus appears to be an efficient alternative to the general error back-propagation algorithm commonly used for training multilayer perceptrons. Moreover, it also gives a clear interpretation of the role of the different parameters.
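    As a minimal sketch of the claim summarized above (not code from the paper), the following NumPy snippet computes the optimal rank-p linear reconstruction of centred data directly from the SVD; by the Eckart-Young theorem this is the best any linear auto-associator with p hidden units can do, so its error equals the sum of the discarded squared singular values. The toy data, variable names, and the choice p = 3 are illustrative assumptions.

    # Illustrative sketch: optimal linear auto-association via SVD (assumed toy setup).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 10)) @ rng.standard_normal((10, 10))  # toy data: 200 samples, 10 features
    Xc = X - X.mean(axis=0)                                             # centre the data

    p = 3                                                               # number of hidden units / target rank
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

    # Project onto the top-p right singular vectors: the weights of the equivalent linear auto-associator.
    W_enc = Vt[:p].T            # "input-to-hidden" weights
    W_dec = Vt[:p]              # "hidden-to-output" weights
    X_hat = Xc @ W_enc @ W_dec  # rank-p reconstruction

    err_svd = np.linalg.norm(Xc - X_hat) ** 2
    err_opt = np.sum(s[p:] ** 2)    # sum of discarded squared singular values
    print(err_svd, err_opt)         # the two values agree: the SVD solution attains the optimum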
    Keywords:
    neural networks
    multilayer perceptron
    data compression
    dimensionality reduction
    information processing
    auto-association
    nonlinearities of the hidden units
    singular value decomposition
    low rank matrix approximation
