Viterbi training in PRISM
DOI: 10.1017/S1471068413000677
zbMATH Open: 1379.68272
arXiv: 1303.5659
MaRDI QID: Q4592976
Publication date: 9 November 2017
Published in: Theory and Practice of Logic Programming
Abstract: VT (Viterbi training), or hard EM, is an efficient way of learning parameters for probabilistic models with hidden variables. Given an observation $y$, it searches for a state $x$ of the hidden variables that maximizes $p(x, y \mid \theta)$ by coordinate ascent on the parameters $\theta$ and $x$. In this paper we introduce VT to PRISM, a logic-based probabilistic modeling system for generative models. VT improves PRISM in three ways. First, VT in PRISM converges faster than EM in PRISM thanks to VT's termination condition. Second, parameters learned by VT often give better prediction performance than those learned by EM. We conducted two parsing experiments with probabilistic grammars while learning parameters by a variety of inference methods, i.e. VT, EM, MAP and VB, and VT achieved the best parsing accuracy in both experiments. We also conducted a similar experiment for classification tasks where, unlike in probabilistic grammars, the hidden variable is not the prediction target, and found that in such a case VT does not necessarily yield superior performance. Third, since VT always deals with the single probability of a single explanation, the Viterbi explanation, the exclusiveness condition imposed on PRISM programs is no longer required when parameters are learned by VT. Last but not least, as VT in PRISM is general and applicable to any PRISM program, it largely removes the need for the user to develop a specific VT algorithm for a specific model. Furthermore, since VT in PRISM can be used simply by setting a PRISM flag appropriately, it is easily accessible to (probabilistic) logic programmers.
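The coordinate-ascent view above (alternate between picking the single most probable hidden state and re-estimating the parameters with that state fixed) is easy to state outside PRISM. Below is a minimal sketch of Viterbi training (hard EM) on a toy one-dimensional Gaussian mixture in Python/NumPy; the model, the function name viterbi_train and the convergence test on unchanged assignments are illustrative assumptions for exposition, not the paper's PRISM implementation.

```python
# Minimal sketch of Viterbi training (hard EM) on a 1-D Gaussian mixture.
# Illustrative only: not the PRISM implementation described in the paper.
import numpy as np

def viterbi_train(y, n_components=2, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    # Initialise parameters theta = (weights, means, variances).
    w = np.full(n_components, 1.0 / n_components)
    mu = rng.choice(y, size=n_components, replace=False)
    var = np.full(n_components, np.var(y))
    prev_x = None
    for _ in range(n_iter):
        # "Viterbi" step: choose the single most probable hidden state x
        # for each observation instead of a posterior distribution.
        log_p = (np.log(w)
                 - 0.5 * np.log(2 * np.pi * var)
                 - 0.5 * (y[:, None] - mu) ** 2 / var)
        x = np.argmax(log_p, axis=1)
        # Termination: stop once the hard assignments no longer change.
        if prev_x is not None and np.array_equal(x, prev_x):
            break
        prev_x = x
        # M step: maximise p(x, y | theta) with the assignments x fixed.
        for k in range(n_components):
            mask = x == k
            if mask.any():
                w[k] = mask.mean()
                mu[k] = y[mask].mean()
                var[k] = y[mask].var() + 1e-6
    return w, mu, var, x

# Usage: two well-separated clusters; the estimated means should be near 0 and 5
# (component order may differ between runs).
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(5.0, 1.0, 200)])
weights, means, variances, states = viterbi_train(y)
print(means)
```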
Full work available at URL: https://arxiv.org/abs/1303.5659
Cites Work
- The PITA system: Tabling and answer subsumption for reasoning under uncertainty
- Not so naive Bayes: Aggregating one-dependence estimators
- Bayesian network classifiers
- Probabilistic inductive logic programming. Theory and applications
- Evaluating Learning Algorithms
- Probabilistic Inductive Logic Programming
- Adjusted Viterbi training
- The segmental K-means algorithm for estimating parameters of hidden Markov models
- Variational Bayes via propositionalized probability computation in PRISM
- On the Efficient Execution of ProbLog Programs
- A computationally efficient approach to the estimation of two- and three-dimensional hidden Markov models
- Linear tabling strategies and optimizations
Cited In (5)
- Learning to rank in PRISM
- PRISM revisited: declarative implementation of a probabilistic programming language using multi-prompt delimited control
- Symbolic DNN-tuner
- Handling ties correctly and efficiently in Viterbi training using the Viterbi semiring
- Lifted discriminative learning of probabilistic logic programs