Improve robustness of sparse PCA by L₁-norm maximization
From MaRDI portal
Publication: 645882
DOI: 10.1016/J.PATCOG.2011.07.009 · zbMATH Open: 1225.68202 · OpenAlex: W2066576934 · MaRDI QID: Q645882 · FDO: Q645882
Authors: Qian Zhao, Deyu Meng, Zongben Xu
Publication date: 10 November 2011
Published in: Pattern Recognition
Full work available at URL: https://doi.org/10.1016/j.patcog.2011.07.009
Classification:
- Factor analysis and principal components; correspondence analysis (62H25)
- Learning and adaptive systems in artificial intelligence (68T05)
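For orientation, the L₁-norm maximization idea behind this line of work can be sketched in a few lines. The following is a minimal illustration of the generic sign-flipping iteration (in the spirit of Kwak's PCA-L1, on which L₁-maximization approaches to sparse PCA build), not the exact algorithm of this paper; the function name `l1_pca_component` and the toy data are assumptions made for illustration only.

```python
import numpy as np

def l1_pca_component(X, max_iter=100):
    """Return a unit vector w maximizing sum_i |w . x_i| (one L1 principal direction).

    Greedy sign-flipping iteration: w <- normalize(X^T sign(X w)).
    Each step cannot decrease the L1 objective, so the iterates
    converge to a local maximizer.
    """
    d = X.shape[1]
    w = np.ones(d) / np.sqrt(d)   # simple deterministic start; random restarts help in practice
    for _ in range(max_iter):
        s = np.sign(X @ w)
        s[s == 0] = 1.0           # resolve zero signs so the update never stalls
        w_new = X.T @ s
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):
            break
        w = w_new
    return w

# Toy data: dominant variance along the first axis, plus one gross outlier
# on the second axis.  The L1 objective down-weights the outlier, so the
# recovered direction still follows the first axis.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(scale=[3.0, 0.3], size=(50, 2)), [[0.0, 20.0]]])
w = l1_pca_component(X)
```

Because the L₁ objective grows linearly (rather than quadratically) in each point's projection, a single large outlier cannot dominate the recovered direction; this is the robustness property the paper's title refers to.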
Cites Work
- Robust principal component analysis?
- Optimal solutions for sparse principal component analysis
- A Direct Formulation for Sparse PCA Using Semidefinite Programming
- An augmented Lagrangian approach for sparse principal component analysis
- Sparse principal component analysis via regularized low rank matrix approximation
- A framework for robust subspace learning
- Title not available
Cited In (30)
- Joint sparse principal component analysis
- The worst-case discounted regret portfolio optimization problem
- Alternating maximization: unifying framework for 8 sparse PCA formulations and efficient parallel codes
- Robust projection twin support vector machine via DC programming
- Majorization-Minimization on the Stiefel Manifold With Application to Robust Sparse PCA
- New robust PCA for outliers and heavy sparse noises' detection via affine transformation, the \(L_{\ast, w}\) and \(L_{2,1}\) norms, and spatial weight matrix in high-dimensional images: from the perspective of signal processing
- Recursive linearization method for inverse medium scattering problems with complex mixture Gaussian error learning
- Combined supervised information with PCA via discriminative component selection
- A pure \(L_1\)-norm principal component analysis
- Robust capped L1-norm twin support vector machine
- 2DPCA with L1-norm for simultaneously robust and sparse modelling
- Self-paced multi-view co-training
- Batch gradient method with smoothing \(L_{1/2}\) regularization for training of feedforward neural networks
- Improve robustness and accuracy of deep neural network with \(L_{2,\infty}\) normalization
- A new approach for worst-case regret portfolio optimization problem
- Robust L1 Principal Component Analysis and Its Bayesian Variational Inference
- Sparse and kernel OPLS feature extraction based on eigenvalue problem solving
- L1-norm-based principal component analysis with adaptive regularization
- Lp- and Ls-norm distance based robust linear discriminant analysis
- Statistical Inference for High-Dimensional Matrix-Variate Factor Models
- Title not available
- Comparing classical and robust sparse PCA
- Flexible non-greedy discriminant subspace feature extraction
- Robust capped L1-norm projection twin support vector machine
- Robust sparse \(L_p\)-norm principal component analysis
- Avoiding optimal mean \(\ell_{2,1}\)-norm maximization-based robust PCA for reconstruction
- Center-based \(l_1\)-clustering method
- Robust sparse principal component analysis
- A joint-norm distance metric 2DPCA for robust dimensionality reduction
- Sparse principal component analysis via variable projection