A stochastic variance reduction method for PCA by an exact penalty approach
From MaRDI portal
Publication: Q4583459
zbMATH Open: 1395.62140
MaRDI QID: Q4583459
FDO: Q4583459
Authors: Yoon Mo Jung, Sangwoon Yun, Jaehwa Lee
Publication date: 30 August 2018
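The record carries no abstract, but the title names two ingredients that are standard in this literature: an exact-penalty reformulation of the PCA eigenvector constraint (as in the cited trace-penalty minimization work) and SVRG-style stochastic variance reduction over the data samples. A minimal illustrative sketch for a single leading eigenvector, assuming the penalized objective f(x) = -(1/n) Σᵢ (aᵢᵀx)² + μ(‖x‖² − 1)²; the function name, step size, and penalty parameter are hypothetical choices for illustration, not taken from the paper.

```python
import numpy as np

def grad_i(a_i, x, mu):
    # Gradient of the i-th component loss
    #   f_i(x) = -(a_i^T x)^2 + mu * (||x||^2 - 1)^2
    return -2.0 * a_i * (a_i @ x) + 4.0 * mu * (x @ x - 1.0) * x

def svrg_penalized_pca(A_rows, mu=5.0, eta=0.005, epochs=40, inner=None, seed=0):
    """SVRG on the penalized objective; A_rows holds the n samples as rows.

    At a stationary point, C x = 2*mu*(||x||^2 - 1) x with C = (1/n) A^T A,
    so x aligns with an eigenvector of the sample covariance.
    """
    rng = np.random.default_rng(seed)
    n, d = A_rows.shape
    inner = inner or n
    x = rng.standard_normal(d)
    x /= np.linalg.norm(x)
    for _ in range(epochs):
        # Snapshot and full gradient: the variance-reduction anchor.
        x_snap = x.copy()
        g_full = np.mean([grad_i(a, x_snap, mu) for a in A_rows], axis=0)
        for _ in range(inner):
            i = rng.integers(n)
            # Variance-reduced stochastic gradient: unbiased, and its
            # variance vanishes as x approaches the snapshot.
            g = grad_i(A_rows[i], x, mu) - grad_i(A_rows[i], x_snap, mu) + g_full
            x = x - eta * g
    return x / np.linalg.norm(x)
```

On data with a dominant direction, the returned unit vector should align with the leading eigenvector of the sample covariance; this sketches the mechanism only, not the paper's algorithm or parameter choices.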
Recommendations
- An augmented Lagrangian approach for sparse principal component analysis
- Sparse generalized principal component analysis for large-scale applications beyond Gaussianity
- Near-optimal stochastic approximation for online principal component estimation
- An algorithm for the principal component analysis of large data sets
- A randomized algorithm for principal component analysis
MSC classification: Factor analysis and principal components; correspondence analysis (62H25) · Nonlinear programming (90C30)
Cites Work
- Title not available
- Title not available
- Title not available
- A Linearly Convergent Variant of the Conditional Gradient Algorithm under Strong Convexity, with Applications to Online and Stochastic Optimization
- Limited memory block Krylov subspace optimization for computing dominant singular value decompositions
- Linear dimensionality reduction: survey, insights, and generalizations
- Minimizing finite sums with the stochastic average gradient
- Numerical methods for large eigenvalue problems
- Optimization methods for large-scale machine learning
- Principal component analysis
- Stochastic dual coordinate ascent methods for regularized loss minimization
- The Matrix Eigenvalue Problem
- Trace-penalty minimization for large-scale eigenspace computation
- Two-Point Step Size Gradient Methods
- Unconstrained optimization models for computing several extreme eigenpairs of real symmetric matrices
Cited In (1)