Estimation of high-dimensional partially-observed discrete Markov random fields
Abstract: We consider the estimation of high-dimensional network structures from partially observed Markov random field data using a penalized pseudo-likelihood approach. We fit a misspecified model obtained by ignoring the missing data problem. We study the consistency of the estimator and derive a bound on its rate of convergence, relating that rate to the extent of the missing data problem. We report simulation results that empirically validate some of the theoretical findings.
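The abstract's approach can be illustrated by a minimal sketch: node-wise ℓ1-penalized logistic regression (the standard pseudo-likelihood route to Ising model selection), fit on complete cases only, i.e. deliberately ignoring the missingness as the paper's misspecified estimator does. All function names, the ISTA solver, and the penalty level below are illustrative choices, not the paper's actual implementation.

```python
import numpy as np

def fit_node_lasso_logistic(X, y, lam, lr=0.1, iters=500):
    """L1-penalized logistic regression for one node's conditional,
    solved by proximal gradient (ISTA). y in {-1,+1}, X = other nodes."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(iters):
        # gradient of the average logistic loss  log(1 + exp(-y * X.w))
        m = y * (X @ w)
        g = -(X * (y / (1.0 + np.exp(m)))[:, None]).mean(axis=0)
        w = w - lr * g
        # soft-thresholding: proximal step for the l1 penalty
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

def neighborhood_select(data, lam):
    """data: n x d array in {-1,+1} with np.nan marking missing entries.
    Fits each node's conditional on complete cases only, i.e. the
    misspecified fit that ignores the missing-data mechanism."""
    n, d = data.shape
    W = np.zeros((d, d))
    for j in range(d):
        others = [k for k in range(d) if k != j]
        rows = ~np.isnan(data).any(axis=1)          # complete cases
        X, y = data[np.ix_(rows, others)], data[rows, j]
        W[j, others] = fit_node_lasso_logistic(X, y, lam)
    return W  # row j holds node j's estimated neighborhood weights
```

On synthetic data with one strong edge and one isolated node, the recovered weight matrix puts a large coefficient on the true edge and shrinks the spurious one toward zero, even with entries deleted at random, illustrating the consistency-under-missingness phenomenon the abstract studies.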
Recommendations
- Parameter estimation for Gibbs distributions from partially observed data
- High-dimensional covariance matrix estimation with missing observations
- Sparse estimation in Ising model via penalized Monte Carlo methods
- Reconstruction of Markov Random Fields from Samples: Some Observations and Algorithms
Cites work
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- Estimation of sparse binary pairwise Markov networks using pseudo-likelihoods
- First-Order Methods for Sparse Covariance Selection
- Gibbs measures and phase transitions
- High-dimensional Ising model selection using \(\ell _{1}\)-regularized logistic regression
- High-dimensional graphs and variable selection with the Lasso
- Lasso-type recovery of sparse representations for high-dimensional data
- Model selection and estimation in the Gaussian graphical model
- Model selection for Gaussian concentration graphs
- Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data
- Nonconcave penalized composite conditional likelihood estimation of sparse Ising models
- Pairwise Variable Selection for High-Dimensional Model-Based Clustering
- Regularized estimation of large covariance matrices
- Rejoinder: Latent variable graphical model selection via convex optimization
- Self-concordant analysis for logistic regression
- Simultaneous analysis of Lasso and Dantzig selector
- Sparse permutation invariant covariance estimation
- Sparsistency and rates of convergence in large covariance matrix estimation
Cited in (4)
- Log-determinant relaxation for approximate inference in discrete Markov random fields
- Markov Neighborhood Regression for High-Dimensional Inference
- Structure recovery for partially observed discrete Markov random fields on graphs under not necessarily positive distributions
- Model selection for Markov random fields on graphs under a mixing condition
MaRDI item Q470504