scientific article; zbMATH DE number 7164707
From MaRDI portal
Publication: Q5214193
Zbl 1441.62214; arXiv: 1608.03686; MaRDI QID: Q5214193
Yan Liu, M. Taha Bahadori, Zemin Zheng, Jinchi Lv
Publication date: 7 February 2020
Full work available at URL: https://arxiv.org/abs/1608.03686
Title: unavailable (zbMATH Open Web Interface contents withheld due to conflicting licenses)
Keywords: greedy algorithm; scalability; high dimensionality; reduced-rank regression; sequential estimation with eigen-decomposition (SEED); sparse eigenvector estimation
MSC classifications: Estimation in multivariate analysis (62H12); General nonlinear regression (62J02); Sequential estimation (62L12); Statistical aspects of big data and data science (62R07)
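The keywords above reference reduced-rank regression and sequential estimation with eigen-decomposition (SEED). As a minimal illustration of the underlying idea, the sketch below recovers the classical rank-constrained least-squares estimator by sequentially deflating unit-rank components of the fitted values; it is not the SEED algorithm itself, which additionally imposes sparsity and scalable sequential updates. All function names are illustrative.

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Classical reduced-rank regression via sequential unit-rank
    eigen-deflation of the fitted values (illustrative sketch only,
    not the SEED estimator of the cited paper)."""
    # Full least-squares coefficient matrix (assumes X has full column rank)
    C_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    Y_hat = X @ C_ols
    C = np.zeros_like(C_ols)
    residual = Y_hat.copy()
    for _ in range(rank):
        # Leading right singular vector of the current fitted-value residual
        _, _, Vt = np.linalg.svd(residual, full_matrices=False)
        v = Vt[0]                            # unit-rank direction in response space
        C += np.outer(C_ols @ v, v)          # accumulate unit-rank component
        residual -= np.outer(Y_hat @ v, v)   # deflate the extracted component
    return C
```

After `rank` deflation steps this accumulates the projection of the least-squares fit onto the top singular directions of the fitted values, which is the standard reduced-rank solution; the sequential, one-component-at-a-time structure is what makes deflation approaches attractive at scale.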
Related Items
- Estimation of Low Rank High-Dimensional Multivariate Linear Models for Multi-Response Data
- Communication-efficient estimation for distributed subset selection
- Scalable interpretable learning for multi-response error-in-variables regression
- Parallel integrative learning for large-scale multi-response regression with incomplete outcomes
- Controlling the false discovery rate for latent factors via unit-rank deflation
- A polynomial algorithm for best-subset selection problem
- Recovery of simultaneous low rank and two-way sparse coefficient matrices, a nonconvex approach
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Reduced rank regression via adaptive nuclear norm penalization
- Latent variable graphical model selection via convex optimization
- Sparse principal component analysis and iterative thresholding
- Noisy matrix decomposition via convex relaxation: optimal rates in high dimensions
- Statistics for high-dimensional data. Methods, theory and applications.
- Estimation of (near) low-rank matrices with noise and high-dimensional scaling
- Optimal selection of reduced rank estimators of high-dimensional matrices
- Joint variable and rank selection for parsimonious estimation of high-dimensional matrices
- Generalized co-sparse factor regression
- Reduced-rank regression for the multivariate linear model
- Multivariate reduced-rank regression
- A framework for robust subspace learning
- Adaptive estimation of a quadratic functional by model selection.
- On the distribution of the largest eigenvalue in principal components analysis
- A note on rank reduction in sparse multivariate regression
- Sparsistency and agnostic inference in sparse PCA
- Sparse PCA: optimal rates and adaptive estimation
- A General Framework For Consistency of Principal Component Analysis
- Asymptotic Equivalence of Regularization Methods in Thresholded Parameter Space
- A Singular Value Thresholding Algorithm for Matrix Completion
- Decoding by Linear Programming
- Dimension Reduction and Coefficient Estimation in Multivariate Linear Regression
- Provable Subspace Clustering: When LRR Meets SSC
- On Consistency and Sparsity for Principal Components Analysis in High Dimensions
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
- Adaptive Forward-Backward Greedy Algorithm for Learning Sparse Representations
- Tight Oracle Inequalities for Low-Rank Matrix Recovery From a Minimal Number of Noisy Random Measurements
- Model Selection and Estimation in Regression with Grouped Variables
- Reduced Rank Stochastic Regression with a Sparse Singular Value Decomposition
- Tuning Parameter Selection in High Dimensional Penalized Likelihood
- High Dimensional Thresholded Regression and Shrinkage Effect