Sparse PCA: convex relaxations, algorithms and applications

From MaRDI portal

DOI: 10.1007/978-1-4614-0769-0_31
zbMATH Open: 1334.90120
arXiv: 1011.3781
OpenAlex: W1599867596
MaRDI QID: Q2802550
FDO: Q2802550


Authors: Youwei Zhang, Alexandre d'Aspremont, Laurent El Ghaoui


Publication date: 26 April 2016

Published in: International Series in Operations Research & Management Science

Abstract: Given a sample covariance matrix, we examine the problem of maximizing the variance explained by a linear combination of the input variables while constraining the number of nonzero coefficients in this combination. This is known as sparse principal component analysis and has a wide array of applications in machine learning and engineering. Unfortunately, this problem is also combinatorially hard and we discuss convex relaxation techniques that efficiently produce good approximate solutions. We then describe several algorithms solving these relaxations as well as greedy algorithms that iteratively improve the solution quality. Finally, we illustrate sparse PCA in several applications, ranging from senate voting and finance to news data.
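For reference, a minimal sketch of the optimization problem described in the abstract, together with one standard l1-based semidefinite relaxation as it is commonly written in this line of work (here Σ denotes the sample covariance matrix and k the target number of nonzero coefficients; the exact form used in the chapter may differ):

```latex
% Sparse PCA: explain as much variance as possible with at most k nonzero loadings.
\max_{x \in \mathbb{R}^n} \; x^{\top} \Sigma x
\quad \text{subject to} \quad \|x\|_2 = 1, \;\; \mathbf{card}(x) \le k .

% Lifting X = x x^{\top} and relaxing the rank and cardinality constraints
% (using \|x\|_1^2 \le \mathbf{card}(x) when \|x\|_2 = 1) gives a semidefinite program:
\max_{X \in \mathbb{S}^n} \; \operatorname{Tr}(\Sigma X)
\quad \text{subject to} \quad \operatorname{Tr}(X) = 1, \;\;
\mathbf{1}^{\top} |X| \mathbf{1} \le k, \;\; X \succeq 0 .
```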


Full work available at URL: https://arxiv.org/abs/1011.3781
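Below is a minimal Python sketch (not code from the publication) of the penalized form of such a semidefinite relaxation, modeled with the cvxpy library; the penalty weight rho, the toy covariance matrix, and the eigenvector-based recovery step are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp


def sparse_pca_sdp(Sigma, rho):
    """l1-penalized semidefinite relaxation of sparse PCA:
    maximize Tr(Sigma X) - rho * sum_ij |X_ij|
    subject to Tr(X) = 1, X positive semidefinite.
    """
    n = Sigma.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    objective = cp.Maximize(cp.trace(Sigma @ X) - rho * cp.sum(cp.abs(X)))
    constraints = [cp.trace(X) == 1, X >> 0]
    cp.Problem(objective, constraints).solve()
    # The leading eigenvector of the optimal X serves as an (approximately
    # sparse) loading vector; small entries can be thresholded to zero.
    _, eigvecs = np.linalg.eigh(X.value)
    return eigvecs[:, -1]


# Toy example: sample covariance of 10 random variables.
rng = np.random.default_rng(0)
samples = rng.standard_normal((200, 10))
Sigma = np.cov(samples, rowvar=False)
loading = sparse_pca_sdp(Sigma, rho=0.2)
print(np.round(loading, 3))
```

Increasing rho pushes more entries of X, and hence of the recovered loading vector, toward zero, trading explained variance for sparsity.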






Cited In (30)





