Structure learning for continuous time Bayesian networks via penalized likelihood
From MaRDI portal
Publication:6641037
Cites work
- Scientific article; zbMATH DE number 845714 (no title available)
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A constraint-based algorithm for the structural learning of continuous-time Bayesian networks
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- An Exact Gibbs Sampler for the Markov-Modulated Poisson Process
- Chernoff-type bound for finite Markov chains
- Combined \(\ell_1\) and greedy \(\ell_0\) penalized least squares for linear model selection
- Cox's regression model for counting processes: A large sample study
- Estimation and selection via absolute penalized convex minimization and its multistage adaptive applications
- Estimation of sparse binary pairwise Markov networks using pseudo-likelihoods
- Exponential martingales and changes of measure for counting processes
- Geometric ergodicity of Rao and Teh's algorithm for Markov jump processes and CTBNs
- High-dimensional Ising model selection using \(\ell _{1}\)-regularized logistic regression
- High-dimensional generalized linear models and the lasso
- Improving Lasso for model selection and prediction
- Mean field variational approximation for continuous-time Bayesian networks
- Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data
- Nonconcave penalized composite conditional likelihood estimation of sparse Ising models
- Oracle inequalities for the lasso in the Cox model
- Rate minimaxity of the Lasso and Dantzig selector for the \(l_{q}\) loss in \(l_{r}\) balls
- Simulation from endpoint-conditioned, continuous-time Markov chains on a finite state space, with applications to molecular evolution
- Simultaneous analysis of Lasso and Dantzig selector
- Sparse estimation in Ising model via penalized Monte Carlo methods
- Sparse inverse covariance estimation with the graphical lasso
- Statistics for high-dimensional data. Methods, theory and applications.
- Stochastic subgradient method converges on tame functions
- The elements of statistical learning. Data mining, inference, and prediction