A lightweight data preprocessing strategy with fast contradiction analysis for incremental classifier learning (Q1664667)

From MaRDI portal
scientific article
Language: English
Label: A lightweight data preprocessing strategy with fast contradiction analysis for incremental classifier learning
Description: scientific article

    Statements

    A lightweight data preprocessing strategy with fast contradiction analysis for incremental classifier learning (English)
    27 August 2018
    Summary: A prime objective in constructing data stream mining models is to achieve good accuracy, fast learning, and robustness to noise. Although many techniques have been proposed in the past, efforts to improve the accuracy of classification models have been somewhat disparate. These techniques include, but are not limited to, feature selection, dimensionality reduction, and the removal of noise from training data. One limitation common to all of these techniques is the assumption that the full training dataset must be available. Although this has been effective for traditional batch training, it may not be practical for incremental classifier learning, also known as data stream mining, where only a single pass over the data stream is possible and only a portion of it is seen at a time. Because data streams are potentially unbounded, and in light of the so-called big data phenomenon, preprocessing time must be kept to a minimum. This paper introduces a new data preprocessing strategy suitable for the progressive purging of noisy data from the training dataset without the need to process the whole dataset at one time. A computer simulation shows that this strategy provides a significant benefit by allowing bad records to be removed dynamically from the incremental classifier learning process.
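    The paper's actual purging algorithm is not reproduced in this abstract. As a purely illustrative sketch of the general idea it describes, the following hypothetical Python snippet trains a simple incremental (running-centroid) classifier on a stream and skips incoming records whose label contradicts the current model's prediction, a crude stand-in for the contradiction analysis; all class and function names here are invented for illustration, not taken from the paper.

    ```python
    import random  # used only by the usage example below


    class IncrementalCentroidClassifier:
        """Minimal single-pass classifier: one running mean (centroid) per class."""

        def __init__(self):
            self.sums = {}    # class label -> per-feature running sums
            self.counts = {}  # class label -> number of records absorbed

        def partial_fit(self, x, y):
            # Update the running centroid for class y with record x.
            if y not in self.sums:
                self.sums[y] = [0.0] * len(x)
                self.counts[y] = 0
            self.counts[y] += 1
            for i, v in enumerate(x):
                self.sums[y][i] += v

        def predict(self, x):
            # Return the class whose centroid is nearest to x (squared distance).
            best, best_d = None, float("inf")
            for y, s in self.sums.items():
                centroid = [v / self.counts[y] for v in s]
                d = sum((a - b) ** 2 for a, b in zip(x, centroid))
                if d < best_d:
                    best, best_d = y, d
            return best


    def stream_learn_with_purging(stream, warmup=20):
        """Single pass over (x, y) pairs; after a warm-up phase, records whose
        label contradicts the model's own prediction are purged (not trained on).
        This contradiction test is a simplification assumed for illustration."""
        clf = IncrementalCentroidClassifier()
        purged = 0
        for n, (x, y) in enumerate(stream):
            if n >= warmup and clf.predict(x) != y:
                purged += 1  # treated as a noisy record: skip it
                continue
            clf.partial_fit(x, y)
        return clf, purged


    # Usage example: two well-separated clusters with 10% label noise injected
    # after the warm-up phase; the purging step filters most flipped labels out.
    random.seed(0)
    stream = []
    for i in range(200):
        if i % 2 == 0:
            x, y = [random.gauss(0, 0.5), random.gauss(0, 0.5)], 0
        else:
            x, y = [random.gauss(5, 0.5), random.gauss(5, 0.5)], 1
        if i >= 20 and random.random() < 0.1:
            y = 1 - y  # inject label noise
        stream.append((x, y))

    clf, purged = stream_learn_with_purging(stream, warmup=20)
    ```

    The key property, mirroring the abstract, is that each record is inspected and either absorbed or discarded in a single pass, so no full-dataset preprocessing step is ever required.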

    Identifiers