Efficient approximation of the conditional relative entropy with applications to discriminative learning of Bayesian network classifiers (Q280459)
From MaRDI portal
Property / review text
Summary: We propose a minimum variance unbiased approximation to the conditional relative entropy of the distribution induced by the observed frequency estimates, for multi-classification tasks. This approximation is an extension of a decomposable scoring criterion, named approximate conditional log-likelihood (aCLL), primarily used for discriminative learning of augmented Bayesian network classifiers. Our contribution is twofold: (i) it addresses multi-classification tasks and not only binary-classification ones; and (ii) it covers broader stochastic assumptions than a uniform distribution over the parameters. Specifically, we considered a Dirichlet distribution over the parameters; the resulting criterion was experimentally shown to be a very good approximation to CLL. In addition, for Bayesian network classifiers, a closed-form equation is found for the parameters that maximize the scoring criterion.
Property / review text: Summary: We propose a minimum variance unbiased approximation to the conditional relative entropy of the distribution induced by the observed frequency estimates, for multi-classification tasks. This approximation is an extension of a decomposable scoring criterion, named approximate conditional log-likelihood (aCLL), primarily used for discriminative learning of augmented Bayesian network classifiers. Our contribution is twofold: (i) it addresses multi-classification tasks and not only binary-classification ones; and (ii) it covers broader stochastic assumptions than a uniform distribution over the parameters. Specifically, we considered a Dirichlet distribution over the parameters; the resulting criterion was experimentally shown to be a very good approximation to CLL. In addition, for Bayesian network classifiers, a closed-form equation is found for the parameters that maximize the scoring criterion. / rank
Normal rank
Property / Mathematics Subject Classification ID
Property / Mathematics Subject Classification ID: 94A17 / rank
Normal rank
Property / Mathematics Subject Classification ID
Property / Mathematics Subject Classification ID: 68T05 / rank
Normal rank
Property / Mathematics Subject Classification ID
Property / Mathematics Subject Classification ID: 62H30 / rank
Normal rank
Property / zbMATH DE Number
Property / zbMATH DE Number: 6578303 / rank
Normal rank
Property / zbMATH Keywords
conditional relative entropy
Property / zbMATH Keywords: conditional relative entropy / rank
Normal rank
Property / zbMATH Keywords
approximation
Property / zbMATH Keywords: approximation / rank
Normal rank
Property / zbMATH Keywords
discriminative learning
Property / zbMATH Keywords: discriminative learning / rank
Normal rank
Property / zbMATH Keywords
Bayesian network classifiers
Property / zbMATH Keywords: Bayesian network classifiers / rank
Normal rank
Property / Wikidata QID
Property / Wikidata QID: Q59196638 / rank
Normal rank
Property / MaRDI profile type
Property / MaRDI profile type: Publication / rank
Normal rank
Property / full work available at URL
Property / full work available at URL: https://doi.org/10.3390/e15072716 / rank
Normal rank
Property / OpenAlex ID
Property / OpenAlex ID: W2065733810 / rank
Normal rank
Property / cites work
Property / cites work: Q3997653 / rank
Normal rank
Property / cites work
Property / cites work: Bayesian network classifiers / rank
Normal rank
Property / cites work
Property / cites work: On the optimality of the simple Bayesian classifier under zero-one loss / rank
Normal rank
Property / cites work
Property / cites work: Q5396675 / rank
Normal rank
Property / cites work
Property / cites work: Q3174036 / rank
Normal rank
Property / cites work
Property / cites work: Q3093225 / rank
Normal rank
Property / cites work
Property / cites work: Approximating probabilistic inference in Bayesian belief networks is NP-hard / rank
Normal rank
Property / cites work
Property / cites work: Optimum branchings / rank
Normal rank
Property / cites work
Property / cites work: Approximating discrete probability distributions with dependence trees / rank
Normal rank
Property / cites work
Property / cites work: Q3655273 / rank
Normal rank
Property / cites work
Property / cites work: 10.1162/153244302760200696 / rank
Normal rank
Property / cites work
Property / cites work: Approximating probability distributions to reduce storage requirements / rank
Normal rank
Property / cites work
Property / cites work: Q3159164 / rank
Normal rank
Property / DOI
Property / DOI: 10.3390/E15072716 / rank
Normal rank
scientific article

| Language | Label | Description | Also known as |
| --- | --- | --- | --- |
| English | Efficient approximation of the conditional relative entropy with applications to discriminative learning of Bayesian network classifiers | scientific article | |
Statements
Efficient approximation of the conditional relative entropy with applications to discriminative learning of Bayesian network classifiers (English)
10 May 2016
Summary: We propose a minimum variance unbiased approximation to the conditional relative entropy of the distribution induced by the observed frequency estimates, for multi-classification tasks. This approximation is an extension of a decomposable scoring criterion, named approximate conditional log-likelihood (aCLL), primarily used for discriminative learning of augmented Bayesian network classifiers. Our contribution is twofold: (i) it addresses multi-classification tasks and not only binary-classification ones; and (ii) it covers broader stochastic assumptions than a uniform distribution over the parameters. Specifically, we considered a Dirichlet distribution over the parameters; the resulting criterion was experimentally shown to be a very good approximation to CLL. In addition, for Bayesian network classifiers, a closed-form equation is found for the parameters that maximize the scoring criterion.
conditional relative entropy
approximation
discriminative learning
Bayesian network classifiers
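To make the summary above more concrete, the sketch below computes the quantity that the aCLL criterion is designed to approximate: the conditional log-likelihood (CLL) of a Bayesian network classifier, evaluated here for a naive Bayes structure with observed-frequency parameter estimates. This is a minimal illustration only; the function name, the Laplace pseudo-count `alpha`, and the naive Bayes restriction are assumptions made for this example, not the paper's aCLL formula or its Dirichlet-based derivation.

```python
import numpy as np

def conditional_log_likelihood(X, y, alpha=1.0):
    """Conditional log-likelihood (CLL) of a naive Bayes classifier whose
    parameters are Laplace-smoothed observed frequency estimates.

    X : (n, d) array of non-negative integer feature values
    y : (n,) array of non-negative integer class labels
    alpha : pseudo-count used for smoothing (illustrative choice)
    """
    X, y = np.asarray(X), np.asarray(y)
    n, d = X.shape
    n_classes = int(y.max()) + 1
    n_vals = int(X.max()) + 1  # assume all features share one value range, for brevity

    # Class priors estimated from observed class frequencies.
    log_prior = np.log(np.bincount(y, minlength=n_classes) + alpha) \
        - np.log(n + alpha * n_classes)

    # Per-class, per-feature conditional distributions from observed frequencies.
    log_cond = np.zeros((n_classes, d, n_vals))
    for c in range(n_classes):
        Xc = X[y == c]
        for j in range(d):
            counts = np.bincount(Xc[:, j], minlength=n_vals) + alpha
            log_cond[c, j] = np.log(counts) - np.log(counts.sum())

    # CLL = sum_i log P(y_i | x_i), with P(y | x) obtained via Bayes' rule.
    cols = np.arange(d)
    cll = 0.0
    for i in range(n):
        joint = log_prior + log_cond[:, cols, X[i]].sum(axis=1)  # log P(c) + sum_j log P(x_ij | c)
        cll += joint[y[i]] - np.logaddexp.reduce(joint)          # log P(y_i | x_i)
    return cll

# Tiny hypothetical example: 4 instances, 2 binary features, 2 classes.
X = np.array([[0, 1], [1, 1], [0, 0], [1, 0]])
y = np.array([0, 1, 0, 1])
print(conditional_log_likelihood(X, y))
```

Learning a classifier by maximizing this CLL (rather than the joint log-likelihood) is what the summary calls discriminative learning; because CLL does not decompose over the network structure, the paper works with the decomposable aCLL approximation instead.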