Learning and generalisation. With applications to neural networks. (Q1856371)

From MaRDI portal
scientific article

    Statements

    Learning and generalisation. With applications to neural networks. (English)
    3 February 2003
    In this second edition [for a review of the first edition, see (1997; Zbl 0928.68061)], according to its preface, two main innovations are to be found. First, the hypothesis of independent and identically distributed samples for the learning algorithm is weakened to a mixing assumption, which holds for certain Markov processes. Second, the application to systems science via randomized algorithms is twofold: a) many synthesis problems that are NP-hard in a deterministic setting become solvable in polynomial time when randomized; b) identification results obtained with the methods of this book provide finite-time estimates, whereas the classical approach yields only asymptotic results; this is useful in design when identification and control are combined. Some chapters have been modified to take recent advances into account.
    Keywords: mixing property; statistical learning; neural network; empirical means; identification; synthesis algorithms; NP-hard; finite-time estimates