Constrained parameter estimation with uncertain priors for Bayesian networks (Q2412264)

From MaRDI portal

Language: English
Label: Constrained parameter estimation with uncertain priors for Bayesian networks
Description: scientific article

    Statements

    Constrained parameter estimation with uncertain priors for Bayesian networks (English)
    23 October 2017
    Bayesian networks represent joint probability models over a given set of variables. Each variable is represented by a node in a graph; direct dependencies between the variables are represented by directed edges between the corresponding nodes, and each node carries the probabilities of its values conditioned on the possible value combinations of its immediate predecessors in the network. The uncertainty in the system is modelled using probability theory. The authors analyse the following model: \(G=(V,E)\) is a directed acyclic graph (DAG), where \(V\) is a set of random variables \(X_1,\dots,X_d\) and \(E\subset(V\times V)\) is a set of directed links between elements of \(V\). For each \(j=1,\dots,d\) the variable \(X_j\) takes values in the set \(\mathcal{X}_j=\{x_j^{(1)},\dots,x_j^{(k_j)}\}\), and its predecessor variable \(\Lambda_j\) takes values in the set \(\{\lambda_j^{(1)},\dots,\lambda_j^{(q_j)}\}\). Let \(\theta\in\Theta\) denote the set of parameters defined by \[ \theta_{jil}=P(X_j=x_j^{(i)}|\Lambda_j=\lambda_j^{(l)}) \] for \(l=1,\dots,q_j\), \(i=1,\dots,k_j\), \(j=1,\dots,d\), with \[ \sum_{i=1}^{k_j}\theta_{jil}=1. \] The authors consider only discrete random variables, although there would not be much difficulty in transferring their results to non-discrete variables.
    In this paper several estimators of the parameters are compared: classical ones such as maximum likelihood (ML); Bayesian ones such as the maximum a posteriori (MAP) and the posterior mean (PM) criteria; and a further estimator proposed using the idea of constrained Bayesian (CB) estimation.
    According to several statisticians, the main objective of statistics is to draw valid inferences from a data set. To do this in a systematic way a mathematical model is proposed, and in doing so assumptions have to be made. Since one can never be sure that these assumptions are completely satisfied, it is natural to look for techniques that withstand deviations from them; in recent years such techniques have been referred to as robust. Following this idea, a robust Bayesian estimator, SPRGM, is proposed.
    The performance of the five estimators considered (ML, MAP, PM, CB, and SPRGM) is compared using synthetic data and in the analysis of real data. In the final section of the paper, conclusions and discussion, the authors say: ``Our simulation study emphasizes that if the crisp prior is present, Bayes and CB rules are reliable methods\dots We emphasize that when the values of hyperparameters are not justifiably chosen, or when the exact prior knowledge is not available, SPRGM estimates outperform Bayes rules, as we should expect due to the fact that robust rules are aimed at global prevention of bad choices in a single prior.'' The paper is well written and organized. The proofs of the theorems are rigorous and well detailed, as are the various examples that help to understand the subject.
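    To make the parameterization concrete, the following minimal Python sketch (not taken from the paper; the counts and Dirichlet hyperparameters are hypothetical) illustrates how ML, MAP, and posterior-mean (PM) estimates of a single conditional distribution \(\theta_{j\cdot l}\) would be computed under a Dirichlet prior; the authors' CB and SPRGM procedures are not reproduced here.

    import numpy as np

    # Counts n_{jil} of X_j = x_j^(i) observed together with the parent
    # configuration Lambda_j = lambda_j^(l), for one fixed pair (j, l).
    counts = np.array([7.0, 2.0, 1.0])   # hypothetical data for a 3-state node
    alpha  = np.array([2.0, 2.0, 2.0])   # hypothetical Dirichlet hyperparameters

    # Maximum likelihood: relative frequencies.
    theta_ml = counts / counts.sum()

    # Maximum a posteriori: mode of the Dirichlet(counts + alpha) posterior.
    theta_map = (counts + alpha - 1.0) / (counts.sum() + alpha.sum() - len(counts))

    # Posterior mean (PM) under the same Dirichlet prior.
    theta_pm = (counts + alpha) / (counts.sum() + alpha.sum())

    for name, est in [("ML", theta_ml), ("MAP", theta_map), ("PM", theta_pm)]:
        print(name, est, "sum =", est.sum())  # each estimate sums to one

    All three estimates satisfy the constraint \(\sum_{i}\theta_{jil}=1\) by construction; they differ only in how strongly the (hypothetical) prior pulls the estimate away from the observed frequencies.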
    Bayesian networks
    posterior regret
    constrained Bayes estimation
    directed acyclic graph
    robust Bayesian learning
