Fast rates for empirical vector quantization
Publication: Q351685
DOI: 10.1214/13-EJS822
zbMATH Open: 1349.62038
arXiv: 1201.6052
Publication date: 9 July 2013
Published in: Electronic Journal of Statistics
Abstract: We consider the rate of convergence of the expected loss of empirically optimal vector quantizers. Earlier results show that, for any fixed distribution supported on a bounded set and satisfying some regularity conditions, the mean-squared expected distortion decreases at the rate \(O(\log n/n)\). We prove that this rate is in fact \(O(1/n)\). Although these regularity conditions are hard to check, we show that well-polarized distributions with continuous densities supported on a bounded set fall within the scope of this result.
Full work available at URL: https://arxiv.org/abs/1201.6052
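As a purely illustrative aside (not from the paper), the Python sketch below sets up the quantity the abstract discusses: it fits a \(k\)-point quantizer on \(n\) samples with plain Lloyd's algorithm and estimates its expected distortion on a large held-out sample. The three-component mixture source, the function names `lloyd`, `distortion`, and `sample`, and all parameter choices are assumptions made for the example; the Gaussian components are not bounded, whereas the paper assumes bounded support.

```python
import numpy as np

def lloyd(X, k, n_iter=50, seed=None):
    """Plain Lloyd's algorithm: returns k codepoints that locally
    minimize the empirical mean-squared distortion."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # squared distances from each sample to each codepoint: shape (n, k)
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):  # keep the old codepoint if its cell is empty
                centers[j] = pts.mean(axis=0)
    return centers

def distortion(X, centers):
    """Empirical mean squared distance to the nearest codepoint."""
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).mean()

def sample(n, rng):
    # Three well-separated components: a stand-in for a
    # "well-polarized" source (illustrative assumption).
    means = np.array([[-2.0, 0.0], [2.0, 0.0], [0.0, 3.0]])
    idx = rng.integers(0, 3, size=n)
    return means[idx] + 0.3 * rng.standard_normal((n, 2))

rng = np.random.default_rng(0)
test = sample(100_000, rng)  # large held-out set approximates expected distortion
for n in (100, 1_000, 10_000):
    centers = lloyd(sample(n, rng), k=3, seed=0)
    print(f"n={n:>6}  test distortion ~ {distortion(test, centers):.4f}")
```

The paper's result concerns the excess of this test distortion over the optimal distortion of the best \(k\)-point quantizer; under its regularity conditions that excess decays at rate \(O(1/n)\). The script only illustrates the estimation setup, not a verification of the rate.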
Recommendations
- Individual Convergence Rates in Empirical Vector Quantizer Design
- Rates of convergence in the source coding theorem, in empirical quantizer design, and in universal lossy source coding
- Learning Theory
- Improved Minimax Bounds on the Test and Training Distortion of Empirically Designed Vector Quantizers
- Nonasymptotic bounds for vector quantization in Hilbert spaces
Mathematics Subject Classification
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Approximations to statistical distributions (nonasymptotic) (62E17)
Cites Work
- Strong consistency of k-means clustering
- Rates of convergence in the source coding theorem, in empirical quantizer design, and in universal lossy source coding
- A Bennett concentration inequality and its application to suprema of empirical processes
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Concentration inequalities and model selection. École d'Été de Probabilités de Saint-Flour XXXIII -- 2003.
- Foundations of quantization for probability distributions
- Some limit theorems for empirical processes (with discussion)
- A central limit theorem for k-means clustering
- Statistical performance of support vector machines
- Title not available
- Title not available
- Principal points and self-consistent points of symmetric multivariate distributions
- On Hölder fields clustering
- Improved Minimax Bounds on the Test and Training Distortion of Empirically Designed Vector Quantizers
- Individual Convergence Rates in Empirical Vector Quantizer Design
- On the Performance of Clustering in Hilbert Spaces
- Quantization and the method of \(k\)-means
- Integrals on a moving manifold and geometrical probability
- On the amount of statistical side information required for lossy data compression
- The minimax distortion redundancy in empirical quantizer design
- Risk bounds for statistical learning
- Quantization and clustering with Bregman divergences
Cited In (12)
- Nonasymptotic bounds for vector quantization in Hilbert spaces
- Compressive statistical learning with random feature moments
- Statistical learning guarantees for compressive clustering and compressive mixture modeling
- Robust \(k\)-means clustering for distributions with two moments
- Scalar versus vector quantization: worst case analysis
- Title not available
- Empirical risk minimization for heavy-tailed losses
- On strong consistency of kernel \(k\)-means: a Rademacher complexity approach
- Bandwidth selection in kernel empirical risk minimization via the gradient
- Dimensionality-Dependent Generalization Bounds for k-Dimensional Coding Schemes
- Fast rates by transferring from auxiliary hypotheses
- Also for \(k\)-means: more data does not imply better performance