Transfer learning for tensor Gaussian graphical models
From MaRDI portal
Publication: Q87995
DOI: 10.48550/ARXIV.2211.09391
arXiv: 2211.09391
MaRDI QID: Q87995
FDO: Q87995
Yaoming Zhen, Mingyang Ren, Junhui Wang
Publication date: 17 November 2022
Abstract: Tensor Gaussian graphical models (GGMs), which characterize conditional independence structures within tensor data, have important applications in numerous areas. Yet the tensor data available in a single study are often limited due to high acquisition costs. Although related studies can provide additional data, how to pool such heterogeneous data remains an open question. In this paper, we propose a transfer learning framework for tensor GGMs that takes full advantage of informative auxiliary domains even when non-informative auxiliary domains are present, owing to carefully designed data-adaptive weights. Our theoretical analysis shows that leveraging information from auxiliary domains yields substantially improved estimation error and variable selection consistency on the target domain under much relaxed conditions. Extensive numerical experiments on both synthetic tensor graphs and brain functional connectivity network data demonstrate the satisfactory performance of the proposed method.
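The abstract's key idea — pooling heterogeneous domains via data-adaptive weights that down-weight non-informative auxiliary sources — can be illustrated with a simplified sketch. The code below is NOT the paper's method: it works with ordinary (matrix, not tensor) covariances, uses a hypothetical similarity-based weighting rule and a ridge-regularized inverse as a stand-in for penalized precision-matrix estimation, purely to show the weighting mechanism.

```python
import numpy as np

def adaptive_weights(S_target, S_aux_list, temperature=1.0):
    # Auxiliary domains whose sample covariance lies close to the target's
    # (in Frobenius norm) receive more weight; distant, non-informative
    # domains are exponentially down-weighted. Illustrative stand-in for
    # the paper's data-adaptive weighting scheme.
    dists = np.array([np.linalg.norm(S_target - S) for S in S_aux_list])
    w = np.exp(-dists / temperature)
    return w / w.sum()

def transfer_precision_estimate(X_target, X_aux_list, ridge=0.05):
    # Blend the target covariance with a weighted pool of auxiliary
    # covariances, then invert a ridge-regularized pooled covariance
    # (a simple surrogate for penalized precision-matrix estimation).
    S_t = np.cov(X_target, rowvar=False)
    S_aux = [np.cov(X, rowvar=False) for X in X_aux_list]
    w = adaptive_weights(S_t, S_aux)
    S_pool = 0.5 * S_t + 0.5 * sum(wk * Sk for wk, Sk in zip(w, S_aux))
    p = S_pool.shape[0]
    Omega_hat = np.linalg.inv(S_pool + ridge * np.eye(p))
    return Omega_hat, w

# Toy demo: one informative and one non-informative auxiliary domain.
rng = np.random.default_rng(0)
p = 5
# Tridiagonal precision matrix -> a chain conditional-independence graph.
Omega = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
Sigma = np.linalg.inv(Omega)
X_t = rng.multivariate_normal(np.zeros(p), Sigma, size=30)      # small target sample
X_good = rng.multivariate_normal(np.zeros(p), Sigma, size=500)  # informative domain
X_bad = rng.multivariate_normal(np.zeros(p), 3.0 * np.eye(p), size=500)  # non-informative
Omega_hat, w = transfer_precision_estimate(X_t, [X_good, X_bad])
```

In this toy run, the informative domain (same underlying covariance as the target) receives the larger weight, while the mismatched domain is suppressed — the qualitative behavior the abstract attributes to the data-adaptive weights.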
Cited In (2)