Nonparametric failure time: time-to-event machine learning with heteroskedastic Bayesian additive regression trees and low information omnibus Dirichlet process mixtures
From MaRDI portal
Publication: Q6589247
Cites work
- Scientific article (title unavailable); zbMATH DE number 3795247
- Scientific article (title unavailable); zbMATH DE number 3385132
- A Tutorial on Thompson Sampling
- A decision-theoretic generalization of on-line learning and an application to boosting
- Applied predictive modeling
- BART: Bayesian additive regression trees
- Bayesian Density Estimation and Inference Using Mixtures
- Bayesian regression trees for high-dimensional prediction and variable selection
- Bayesian survival tree ensembles with submodel shrinkage
- Bioinformatics. The machine learning approach.
- Decoupling shrinkage and selection in Bayesian linear models: a posterior summary perspective
- Efficient Metropolis-Hastings proposal mechanisms for Bayesian regression tree models
- Greedy function approximation: A gradient boosting machine.
- Heteroscedastic BART via Multiplicative Regression Trees
- Least squares regression with censored data
- Linear regression with censored data
- Low information omnibus (LIO) priors for Dirichlet process mixture models
- Random survival forests
- Regression analysis with randomly right-censored data
- Semiparametric Bayes hierarchical models with mean and variance constraints
- Slice sampling mixture models
- Survival analysis. Techniques for censored and truncated data.
- Variable Selection Via Thompson Sampling
- Variable selection for BART: an application to gene regulation