Probabilistic confusion entropy for evaluating classifiers (Q280701)
From MaRDI portal
Revision as of 17:05, 29 February 2024
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Probabilistic confusion entropy for evaluating classifiers | scientific article | |
Statements
Probabilistic confusion entropy for evaluating classifiers (English)
10 May 2016
Summary: When evaluating the classification model of an information system, a proper measure is needed to determine whether the model suits the specific domain task. Although many performance measures have been proposed, few are defined specifically for multi-class problems, which tend to be more complicated than two-class problems, especially with respect to class discrimination power. Confusion entropy was proposed for evaluating classifiers in the multi-class case; however, it makes no use of the probabilities with which samples are classified into the different classes. In this paper, we propose to calculate confusion entropy based on a probabilistic confusion matrix. Besides inheriting the ability to measure whether a classifier classifies with high accuracy and class discrimination power, probabilistic confusion entropy also measures whether samples are assigned to their true classes, and separated from the other classes, with high probability. Analysis and experimental comparisons show the feasibility of the simply improved measure and demonstrate that its evaluation of classifiers remains consistent across datasets in comparison with the compared measures.
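The idea of a probabilistic confusion matrix can be illustrated with a minimal sketch. The construction below is an assumption for illustration, not the paper's exact definition: row *i* accumulates the predicted class-probability vectors of all samples whose true class is *i* (row-normalized), and the measure is the mean Shannon entropy of the rows, scaled by log₂ K so that 0 means perfect separation and 1 means maximal confusion. The function names are hypothetical.

```python
import numpy as np

def probabilistic_confusion_matrix(y_true, proba, n_classes):
    """Accumulate predicted class probabilities per true class.

    Row i is the sum of the predicted probability vectors of all samples
    whose true class is i, normalized to sum to 1 (an illustrative
    construction; the paper's definition may differ in detail).
    """
    M = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, proba):
        M[t] += p
    row_sums = M.sum(axis=1, keepdims=True)
    return np.divide(M, row_sums, out=np.zeros_like(M), where=row_sums > 0)

def confusion_entropy(M):
    """Mean Shannon entropy of the rows, normalized by log2(K):
    0 = every sample confidently in its true class, 1 = uniform confusion.
    """
    K = M.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        H = -np.nansum(np.where(M > 0, M * np.log2(M), 0.0), axis=1)
    return H.mean() / np.log2(K)
```

A perfectly confident, correct classifier yields the identity matrix and entropy 0, while uniformly spread probabilities yield entropy 1; intermediate values quantify how sharply samples are separated from the wrong classes.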
confusion entropy
probabilistic confusion entropy
multi-class classification