Estimating sparse networks with hubs
Mathematics Subject Classification:
- Asymptotic properties of parametric estimators (62F12)
- Applications of statistics to biology and medical sciences; meta analysis (62P10)
- Gaussian processes (60G15)
- Estimation in multivariate analysis (62H12)
- Probabilistic graphical models (62H22)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
Abstract: Graphical modelling techniques based on sparse selection have been applied to infer complex networks in many fields, including biology and medicine, engineering, finance, and the social sciences. One structural feature that poses a challenge for statistical inference in some of these applications is the presence of a small number of strongly interconnected nodes, called hubs. In microbiome research, for example, hub microbial taxa play a significant role in maintaining the stability of the microbial community structure. In this paper, we investigate the problem of estimating sparse networks that contain a few highly connected hub nodes. Methods based on L1-regularization have been widely used to perform sparse selection in the graphical modelling context; however, while these methods encourage sparsity, they do not take structural information about the network into account. We introduce a new method for estimating networks with hubs that exploits the ability of (inverse) covariance selection methods to incorporate structural information about the underlying network. Our proposed method is a weighted lasso approach with novel row/column sum weights, which we refer to as the hubs weighted graphical lasso. We establish large-sample properties of the method when the number of parameters diverges with the sample size, and we evaluate its finite-sample performance via extensive simulations. We illustrate the method with an application to microbiome data.
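The core idea behind a weighted graphical lasso with row/column sum weights can be sketched as follows. This is an illustrative reading of the abstract, not the paper's exact estimator: the weight formula `1 / (s_i + s_j)` and the use of an initial precision estimate are assumptions made here for concreteness. Edges touching a presumed hub (a node with a large absolute row/column sum) receive a smaller penalty weight, so the entrywise-weighted L1 penalty shrinks them less.

```python
import numpy as np

def hub_penalty_weights(precision_init, eps=1e-8):
    """Illustrative row/column-sum penalty weights (hypothetical form).

    precision_init : (p, p) symmetric initial precision-matrix estimate.
    Returns a (p, p) weight matrix W; the weighted graphical lasso would
    then penalize lam * W[i, j] * |Omega[i, j]| for each off-diagonal entry.
    """
    A = np.abs(np.asarray(precision_init, dtype=float))
    np.fill_diagonal(A, 0.0)            # connectivity only: drop the diagonal
    s = A.sum(axis=0)                   # row sums == column sums by symmetry
    # Smaller weight (weaker shrinkage) where either endpoint looks like a hub.
    W = 1.0 / (s[:, None] + s[None, :] + eps)
    np.fill_diagonal(W, 0.0)            # conventionally no penalty on the diagonal
    return W
```

For example, with an initial estimate in which node 0 is connected to both other nodes while nodes 1 and 2 are not connected to each other, the edge weights touching node 0 come out smaller than the weight on the (1, 2) edge, so a weighted glasso solver would shrink hub edges less aggressively.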
Cites work
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- An overview of the estimation of large covariance and precision matrices
- Consistency of the group Lasso and multiple kernel learning
- Emergence of Scaling in Random Networks
- High-dimensional graphs and variable selection with the Lasso
- Hub Discovery in Partial Correlation Graphs
- Large covariance estimation for compositional data via composition-adjusted thresholding
- Learning graphical models with hubs
- Likelihood-based selection and sharp parameter estimation
- Model selection and estimation in the Gaussian graphical model
- Network exploration via the adaptive LASSO and SCAD penalties
- Node-based learning of multiple Gaussian graphical models
- Regularized estimation of large covariance matrices
- Sparse inverse covariance estimation with the graphical lasso
- Sparse permutation invariant covariance estimation
- Sparsistency and rates of convergence in large covariance matrix estimation
- The Adaptive Lasso and Its Oracle Properties
- Tuning parameter selection for penalized likelihood estimation of Gaussian graphical model
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
Cited in (8)
- Estimation of Sparse Directional Connectivity With Expectation Maximization
- Estimating scale-free networks via the exponentiation of minimax concave penalty
- Detection of hubs in complex networks by the Laplacian matrix
- Maximum likelihood estimation of sparse networks with missing observations
- Analysis of Networks via the Sparse β-model
- Low-rank matrix estimation via nonconvex optimization methods in multi-response errors-in-variables regression
- Learning graphical models with hubs
- Robust methods for inferring sparse network structures