Learning a factor model via regularized PCA
Publication: 399883
DOI: 10.1007/s10994-013-5345-8 · zbMath: 1293.68230 · arXiv: 1111.6201 · OpenAlex: W3098601467 · MaRDI QID: Q399883
Publication date: 20 August 2014
Published in: Machine Learning
Full work available at URL: https://arxiv.org/abs/1111.6201
Keywords: regularization, principal component analysis, high-dimensional data, factor model, covariance matrix estimation
Mathematics Subject Classification:
- Factor analysis and principal components; correspondence analysis (62H25)
- Learning and adaptive systems in artificial intelligence (68T05)
- Analysis of variance and covariance (ANOVA) (62J10)
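As rough orientation for these keywords, the sketch below illustrates the general idea of fitting a factor model (covariance approximated as low-rank loadings plus diagonal noise) via PCA on a regularized sample covariance. It is a minimal illustration only, not the algorithm proposed in the paper; the function name, the number of factors k, and the shrinkage weight gamma are placeholder assumptions, and the eigenvalue-shrinkage regularizer is a generic stand-in.

```python
# Illustrative sketch (not the paper's method): fit Sigma ≈ L L^T + diag(psi)
# by PCA on a shrinkage-regularized sample covariance. k and gamma are
# hypothetical parameters chosen for the example.
import numpy as np

def factor_model_via_regularized_pca(X, k, gamma=0.1):
    """X: (n, p) data matrix; k: number of factors; gamma: shrinkage weight."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)                      # center the data
    S = Xc.T @ Xc / n                            # sample covariance (p x p)
    # Generic regularization: shrink toward a scaled identity.
    S_reg = (1 - gamma) * S + gamma * (np.trace(S) / p) * np.eye(p)
    evals, evecs = np.linalg.eigh(S_reg)
    idx = np.argsort(evals)[::-1][:k]            # top-k eigenpairs
    L = evecs[:, idx] * np.sqrt(np.maximum(evals[idx], 0.0))   # loadings (p x k)
    psi = np.maximum(np.diag(S) - np.sum(L**2, axis=1), 1e-8)  # residual variances
    return L, psi

# Usage: recover an approximate covariance from simulated data.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 20))
L, psi = factor_model_via_regularized_pca(X, k=3)
Sigma_hat = L @ L.T + np.diag(psi)
```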
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Sparse inverse covariance estimation with the graphical lasso
- Latent variable graphical model selection via convex optimization
- Covariance estimation: the GLM and regularization perspectives
- High-dimensional analysis of semidefinite relaxations for sparse principal components
- Factor analysis and AIC
- EM algorithms for ML factor analysis
- Adaptive estimation of a quadratic functional by model selection.
- On the distribution of the largest eigenvalue in principal components analysis
- Robust factor analysis.
- High-dimensional covariance estimation by minimizing \(\ell_1\)-penalized log-determinant divergence
- Eigenvalues of large sample covariance matrices of spiked population models
- Robust principal component analysis?
- Model selection and estimation in the Gaussian graphical model
- Probabilistic Principal Component Analysis
- A Direct Formulation for Sparse PCA Using Semidefinite Programming