Smoothed analysis of tensor decompositions
Publication: 5259595
DOI: 10.1145/2591796.2591881
zbMATH Open: 1315.68203
arXiv: 1311.3651
OpenAlex: W2057214765
MaRDI QID: Q5259595
FDO: Q5259595
Aditya Bhaskara, Aravindan Vijayaraghavan, Ankur Moitra, Moses Charikar
Publication date: 26 June 2015
Published in: Proceedings of the forty-sixth annual ACM symposium on Theory of computing
Abstract: Low rank tensor decompositions are a powerful tool for learning generative models, and uniqueness results give them a significant advantage over matrix decomposition methods. However, tensors pose significant algorithmic challenges, and tensor analogs of much of the matrix algebra toolkit are unlikely to exist because of hardness results. Efficient decomposition in the overcomplete case (where rank exceeds dimension) is particularly challenging. We introduce a smoothed analysis model for studying these questions and develop an efficient algorithm for tensor decomposition in the highly overcomplete case (rank polynomial in the dimension). In this setting, we show that our algorithm is robust to inverse polynomial error -- a crucial property for applications in learning since we are only allowed a polynomial number of samples. While algorithms are known for exact tensor decomposition in some overcomplete settings, our main contribution is in analyzing their stability in the framework of smoothed analysis. Our main technical contribution is to show that tensor products of perturbed vectors are linearly independent in a robust sense (i.e. the associated matrix has singular values that are at least an inverse polynomial). This key result paves the way for applying tensor methods to learning problems in the smoothed setting. In particular, we use it to obtain results for learning multi-view models and mixtures of axis-aligned Gaussians where there are many more "components" than dimensions. The assumption here is that the model is not adversarially chosen, formalized by a perturbation of model parameters. We believe this is an appealing way to analyze realistic instances of learning problems, since this framework allows us to overcome many of the usual limitations of using tensor methods.
Full work available at URL: https://arxiv.org/abs/1311.3651
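The key robustness claim in the abstract can be checked numerically: for randomly perturbed vectors, the matrix whose columns are the flattened tensor products x_i ⊗ x_i has smallest singular value bounded away from zero, even when the number of vectors far exceeds the dimension. The sketch below is a minimal illustration, not the paper's algorithm; the dimension `n`, rank `R`, and perturbation size `rho` are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, R, rho = 20, 200, 0.1  # overcomplete: R = 200 >> n = 20 (illustrative values)

# Arbitrary ("adversarial") unit vectors, then a random smoothing perturbation.
A = rng.standard_normal((n, R))
A /= np.linalg.norm(A, axis=0)
A_pert = A + rho * rng.standard_normal((n, R))

# Columns of M are the flattened order-2 tensor products x_i (x) x_i
# (the Khatri-Rao product of A_pert with itself), giving an (n^2 x R) matrix.
M = np.einsum('ir,jr->ijr', A_pert, A_pert).reshape(n * n, R)

# Smallest singular value: robust linear independence means this is not tiny.
sigma_min = np.linalg.svd(M, compute_uv=False)[-1]
print(f"smallest singular value of the product matrix: {sigma_min:.4f}")
```

Running this typically reports a smallest singular value well above zero, in line with the paper's claim that perturbation makes the tensor-product columns robustly linearly independent.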
MSC classification:
- Learning and adaptive systems in artificial intelligence (68T05)
- Multilinear algebra, tensor calculus (15A69)
Cites Work
- Theory of Cryptography
- The geometry of differential privacy: the small database and approximate cases
- On the geometry of differential privacy
- Interactive privacy via the median mechanism
- The price of privately releasing contingency tables and the spectra of random matrices with correlated rows
- Lower Bounds in Differential Privacy
- Iterative Constructions and Private Data Release
- Differential Privacy and the Fat-Shattering Dimension of Linear Queries
- Our Data, Ourselves: Privacy Via Distributed Noise Generation
- On the complexity of differentially private data release
- Title not available
- Collusion-secure fingerprinting for digital data
- Advances in Cryptology – CRYPTO 2004
- Title not available
- New Efficient Attacks on Statistical Disclosure Control Mechanisms
- Bounds on the Sample Complexity for Private Learning and Private Data Release
- Answering $n^{2+o(1)}$ counting queries with differential privacy is hard
- Characterizing the sample complexity of private learners
- Faster Algorithms for Privately Releasing Marginals
- Faster private release of marginals on small databases
- Private Learning and Sanitization: Pure vs. Approximate Differential Privacy
- Privately releasing conjunctions and the statistical query barrier
- Efficient algorithms for privately releasing marginals via convex relaxations
Cited In (20)
- On the optimization landscape of tensor decompositions
- Composition-aware spectroscopic tomography
- An operator theoretic approach to nonparametric mixture models
- Recovering Structured Probability Matrices
- A general framework for tensor screening through smoothing
- Smoothed analysis for tensor methods in unsupervised learning
- Homotopy techniques for tensor decomposition and perfect identifiability
- Title not available
- Absolute reconstruction for sums of powers of linear forms: degree 3 and beyond
- Polynomial Learning of Distribution Families
- Provable ICA with unknown Gaussian noise, and implications for Gaussian mixtures and autoencoders
- Universality of the least singular value for the sum of random matrices
- Complete decomposition of symmetric tensors in linear time and polylogarithmic precision
- Concentration inequalities for random tensors
- Planning and learning in partially observable systems via filter stability
- Overcomplete Order-3 Tensor Decomposition, Blind Deconvolution, and Gaussian Mixture Models
- Title not available
- Effective Criteria for Specific Identifiability of Tensors and Forms
- Speeding up random walk mixing by starting from a uniform vertex
- Title not available