A self-calibrated direct approach to precision matrix estimation and linear discriminant analysis in high dimensions
Publication: 829737
DOI: 10.1016/j.csda.2020.107105
OpenAlex: W2972219024
MaRDI QID: Q829737
Chi Seng Pun, Matthew Zakharia Hadimaja
Publication date: 6 May 2021
Published in: Computational Statistics and Data Analysis
Full work available at URL: https://doi.org/10.1016/j.csda.2020.107105
Keywords: high-dimensional statistics ⋮ linear discriminant analysis ⋮ precision matrix estimation ⋮ \( \ell_1 \)-regularized quadratic programming ⋮ direct estimation approach ⋮ self-calibrated regularization
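For context on the keywords above, the following is a minimal sketch of \( \ell_1 \)-regularized precision matrix estimation plugged into a linear discriminant rule. It is not the paper's self-calibrated direct estimator: it uses scikit-learn's graphical lasso as a stand-in, and the data, variable names, and the hand-picked penalty level alpha=0.05 are all illustrative assumptions.

```python
# Sketch: l1-regularized precision matrix estimation (graphical lasso)
# plugged into Fisher's linear discriminant rule. Illustrates the general
# plug-in idea only; NOT the paper's self-calibrated estimator, and the
# penalty alpha is a hand-picked assumption, not a self-calibrated choice.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
p = 50                                   # dimension (high relative to n)
n = 100                                  # samples per class
mu0, mu1 = np.zeros(p), np.full(p, 0.3)  # class means
X0 = rng.normal(size=(n, p)) + mu0       # class-0 sample
X1 = rng.normal(size=(n, p)) + mu1       # class-1 sample

# Pooled, within-class centered data -> sparse estimate Omega of Sigma^{-1}
X_pooled = np.vstack([X0 - X0.mean(0), X1 - X1.mean(0)])
omega = GraphicalLasso(alpha=0.05).fit(X_pooled).precision_

# Fisher's rule with plug-in estimates: assign x to class 1 iff
# (x - (m0 + m1)/2)^T Omega (m1 - m0) > 0
m0, m1 = X0.mean(0), X1.mean(0)
beta = omega @ (m1 - m0)
x_new = rng.normal(size=p) + mu1
label = int((x_new - (m0 + m1) / 2) @ beta > 0)
print("predicted class:", label)
```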
Related Items (2)
Multiclass sparse discriminant analysis incorporating graphical structure among predictors ⋮ A Sparse Learning Approach to Relative-Volatility-Managed Portfolio Selection
Uses Software
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Sparse inverse covariance estimation with the graphical lasso
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Estimating sparse precision matrix: optimal rates of convergence and adaptive estimation
- Gradient methods for minimizing composite functions
- Sparse linear discriminant analysis by thresholding for high dimensional data
- \(\ell_{1}\)-penalization for mixture regression models
- Optimal rates of convergence for covariance matrix estimation
- Covariance regularization by thresholding
- High-dimensional classification using features annealed independence rules
- Operator norm consistent estimation of large-dimensional sparse covariance matrices
- A linear programming model for selection of sparse high-dimensional multiperiod portfolios
- Some theory for Fisher's linear discriminant function, 'naive Bayes', and some alternatives when there are many more variables than observations
- Sparse permutation invariant covariance estimation
- High-dimensional covariance estimation by minimizing \(\ell _{1}\)-penalized log-determinant divergence
- An efficient ADMM algorithm for high dimensional precision matrix estimation via penalized quadratic loss
- Prediction error bounds for linear regression with the TREX
- Dynamic linear discriminant analysis in high dimensional space
- Regularized estimation of large covariance matrices
- Sparse Matrix Inversion with Scaled Lasso
- Resolution of Degeneracy in Merton's Portfolio Problem
- A Constrained \( \ell_1 \) Minimization Approach to Sparse Precision Matrix Estimation
- Square-root lasso: pivotal recovery of sparse signals via conic programming
- Scaled sparse linear regression
- Extended Bayesian information criteria for model selection with large model spaces
- A Direct Estimation Approach to Sparse Linear Discriminant Analysis
- First-Order Methods for Sparse Covariance Selection
- Cellwise robust regularized discriminant analysis
- High Dimensional Linear Discriminant Analysis: Optimality, Adaptive Algorithm and Missing Data
- Regularization Parameter Selections via Generalized Information Criterion
- Sparse Gaussian graphical model estimation via alternating minimization
- Sparse precision matrix estimation via lasso penalized D-trace loss
- Tuning Parameter Selection in High Dimensional Penalized Likelihood
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers