Parameter redundancy in neural networks: an application of Chebyshev polynomials (Q2468329)

scientific article

    Statements

    Parameter redundancy in neural networks: an application of Chebyshev polynomials (English)
    22 January 2008
    The paper considers multilayer perceptron (MLP) neural network architectures with a single hidden layer. The sigmoid activation function at each hidden node is approximated by Chebyshev polynomials, and the resulting polynomial is substituted into the function modelling the network's output. Identification and parameter redundancy issues are discussed for several particular cases.
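    The construction can be illustrated concretely. Below is a minimal sketch (not code from the paper) using NumPy's Chebyshev fitting routine: the sigmoid is approximated on a bounded interval by a low-degree Chebyshev polynomial, and the surrogate is substituted into a single-hidden-layer MLP output function. The degree, interval, and network sizes are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Fit a low-degree Chebyshev approximation of the sigmoid on [-4, 4].
# Degree and interval are illustrative choices, not taken from the paper.
xs = np.linspace(-4.0, 4.0, 401)
cheb_sigmoid = Chebyshev.fit(xs, sigmoid(xs), deg=5)

# Single-hidden-layer MLP output with the sigmoid replaced by its
# Chebyshev surrogate:  f(x) = b0 + sum_j v_j * p(w_j * x + b_j),
# where p is the polynomial approximation of the sigmoid.
def mlp_output(x, w, b, v, b0):
    return b0 + v @ cheb_sigmoid(w * x + b)

# Example with three hidden nodes and a scalar input.
rng = np.random.default_rng(0)
w, b, v = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
print(mlp_output(0.5, w, b, v, b0=0.1))
```

    In this polynomial form the network output is determined by a finite set of polynomial coefficients, so distinct hidden-node parameter vectors can produce the same coefficients; this is, plausibly, the route by which identifiability and redundancy are analysed in the paper.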
    neural network
    multilayer perceptron neural network (MLP NN) architecture
    Chebyshev polynomial
    parameter redundancy