Parallel integrative learning for large-scale multi-response regression with incomplete outcomes
Publication: 2242011
DOI: 10.1016/J.CSDA.2021.107243
OpenAlex: W3150901058
MaRDI QID: Q2242011
FDO: Q2242011
Daoji Li, Zemin Zheng, Ruipeng Dong
Publication date: 9 November 2021
Published in: Computational Statistics and Data Analysis
Full work available at URL: https://arxiv.org/abs/2104.05076
Recommendations
- Scalable interpretable learning for multi-response error-in-variables regression
- Large-Scale Inference of Multivariate Regression for Heavy-Tailed and Asymmetric Data
- Parallel maximum likelihood estimator for multiple linear regression models
- Least square regularized regression for multitask learning
- A note on multivariate parallel regression
- Scalable interpretable multi-response regression via SEED
- Additive partially linear models for massive heterogeneous data
- High-Dimensional Multi-Task Learning using Multivariate Regression and Generalized Fiducial Inference
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- SOFAR: Large-Scale Association Network Learning
- A unified approach to model selection and sparse recovery using regularized least squares
- Leveraging mixed and incomplete outcomes via reduced-rank modeling
- The Adaptive Lasso and Its Oracle Properties
- Least angle regression. (With discussion)
- Pathwise coordinate optimization
- Coordinate descent algorithms for lasso penalized regression
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Simultaneous analysis of Lasso and Dantzig selector
- High-dimensional generalized linear models and the lasso
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Sparsity oracle inequalities for the Lasso
- A useful variant of the Davis–Kahan theorem for statisticians
- Reduced-rank regression for the multivariate linear model
- Noisy low-rank matrix completion with general sampling distribution
- Asymptotic Equivalence of Regularization Methods in Thresholded Parameter Space
- Sparse Reduced-Rank Regression for Simultaneous Dimension Reduction and Variable Selection
- Optimal selection of reduced rank estimators of high-dimensional matrices
- Reduced Rank Stochastic Regression with a Sparse Singular Value Decomposition
- Joint variable and rank selection for parsimonious estimation of high-dimensional matrices
- Tuning Parameter Selection in High Dimensional Penalized Likelihood
- Estimation and hypothesis test for partial linear multiplicative models
- Non-asymptotic theory of random matrices: extreme singular values
- Generalized high-dimensional trace regression via nuclear norm regularization
- Multivariate spatial autoregressive model for large scale social networks
- A note on rank reduction in sparse multivariate regression
- Nonsparse Learning with Latent Variables
- Dimensionality Reduction and Variable Selection in Multivariate Varying-Coefficient Models With a Large Number of Covariates
- Estimation and inference in semiparametric quantile factor models
Cited In (2)
Uses Software