Dimension reduction for predictive discrimination (Q1081252)

From MaRDI portal
scientific article

    Statements

    Dimension reduction for predictive discrimination (English)
    1986
    Let the discrimination procedure for the classes \(\Pi_i\) be based on the density functions \(P(x\mid \Pi_i,\theta)\). Two alternative approaches are possible when the parameters \(\theta\) are unknown. The ''estimative'' approach preserves the form of the function but replaces \(\theta\) with some estimate \(\hat\theta\). The ''predictive'' approach replaces \(P(x\mid \Pi_i,\theta)\) by \[ P^*(x\mid \Pi_i,z_i)=\int_{\theta}P(x\mid \Pi_i,\theta)\,P(\theta \mid z_i)\,d\theta, \] where \(P(\theta \mid z_i)\) is a Bayesian posterior density function and the \(z_i\) are the sets of training data. When \(P(x\mid \Pi_i,\theta)\) is a \(p\)-dimensional multivariate normal density, \(P^*(x\mid \Pi_i,z_i)\) is a \(p\)-dimensional Student-type density, \(X\sim St_p(n_i,\bar z_i,S_i)\), where \(S_i\) is the usual sample covariance matrix. If \(B\) is a \(q\times p\) \((q<p)\) matrix of rank \(q\), then \(Y=BX\sim St_q(n,B\bar z,BSB')\). A method for constructing the linear dimension-reduction matrix \(B\) is derived, together with a procedure that indicates, for each given case, the lowest dimension preserving the original expected probability of correct classification. The method avoids the inversion of high-dimensional matrices and reduces the required sample sizes. A Monte Carlo simulation is performed with three six-dimensional multivariate normal populations: predictive discrimination combined with the proposed dimension-reduction method yields results superior to the usual linear discriminant procedure, since the latter assumes equal population covariance matrices. (A schematic code sketch of the predictive classification rule and the projection \(Y=BX\) follows the keyword list below.)
    pattern recognition
    linear feature selection
    Bayesian posterior density
    training data
    multivariate normal
    p-dimensional Student-type density
    probability of correct classification
    Monte Carlo simulation
    predictive discrimination
    dimension-reduction method
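
    The following is a minimal sketch of the predictive classification rule described above, not the authors' implementation. It assumes a generic \(p\)-variate Student-type log-density \(St_p(\nu,\mu,\Sigma)\) for the predictive densities (the exact degrees-of-freedom and scale relation to the training samples follows the paper), assigns an observation to the class with the largest predictive density, and optionally works with the projected data \(Y=Bx\sim St_q(\cdot,B\bar z,BSB')\). The helper names (student_logpdf, classify) and the placeholder projection matrix B are illustrative, not from the paper.

    # Minimal sketch (assumptions noted above): predictive discrimination with
    # multivariate Student-type densities and a linear projection Y = BX.
    import numpy as np
    from math import lgamma, log, pi

    def student_logpdf(x, df, mean, scale):
        """Log-density of a p-variate Student-type distribution St_p(df, mean, scale)."""
        p = len(mean)
        dev = x - mean
        _, logdet = np.linalg.slogdet(scale)
        quad = dev @ np.linalg.solve(scale, dev)      # Mahalanobis-type quadratic form
        return (lgamma((df + p) / 2) - lgamma(df / 2)
                - 0.5 * p * log(df * pi) - 0.5 * logdet
                - 0.5 * (df + p) * np.log1p(quad / df))

    def classify(x, classes, B=None):
        """Assign x to the class Pi_i with the largest predictive density.

        classes: list of (n_i, zbar_i, S_i) computed from the training sets z_i.
        B:       optional q x p projection; then Y = Bx is scored against
                 St_q(n_i, B zbar_i, B S_i B'), so only q x q matrices are solved."""
        scores = []
        for n_i, zbar_i, S_i in classes:
            if B is not None:
                xx, m, V = B @ x, B @ zbar_i, B @ S_i @ B.T
            else:
                xx, m, V = x, zbar_i, S_i
            # Degrees of freedom taken as n_i here for illustration only; the
            # paper's St_p(n_i, zbar_i, S_i) parametrisation fixes the exact choice.
            scores.append(student_logpdf(xx, n_i, m, V))
        return int(np.argmax(scores))

    # Toy usage: two 6-dimensional training samples, projected down to q = 2.
    rng = np.random.default_rng(0)
    z1, z2 = rng.normal(0.0, 1.0, (30, 6)), rng.normal(1.0, 1.0, (30, 6))
    classes = [(len(z), z.mean(axis=0), np.cov(z, rowvar=False)) for z in (z1, z2)]
    B = rng.normal(size=(2, 6))   # placeholder projection; the paper derives B explicitly
    print(classify(rng.normal(1.0, 1.0, 6), classes, B=B))

    Projecting before evaluating the densities means that only a \(q\times q\) scale matrix has to be handled, which mirrors the review's remark that the method avoids the inversion of high-dimensional matrices.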