Fast rates for empirical vector quantization
From MaRDI portal
Abstract: We consider the rate of convergence of the expected loss of empirically optimal vector quantizers. Earlier results show that, for any fixed distribution supported on a bounded set and satisfying some regularity conditions, the mean-squared expected distortion decreases at the rate O(log n/n) in the sample size n. We prove that this rate is actually O(1/n). Although these regularity conditions are hard to check directly, we show that well-polarized distributions with continuous densities supported on a bounded set fall within the scope of this result.
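As a minimal illustration of the setting studied in the abstract (not code from the paper), the sketch below builds an empirically designed k-point quantizer with Lloyd's algorithm on n samples from a distribution with bounded support, then estimates its expected distortion on fresh samples. All names and parameter choices here are illustrative assumptions; Lloyd's algorithm only finds a local optimum of the empirical distortion, whereas the theoretical rates concern the empirically optimal quantizer.

```python
import numpy as np

def lloyd_quantizer(x, k, n_iter=50, seed=0):
    """Design a k-point quantizer on samples x via Lloyd's algorithm.

    Returns a (k, d) array of codepoints; this is a local minimizer of
    the empirical mean-squared distortion, used here as a stand-in for
    the empirically optimal quantizer of the paper.
    """
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each sample to its nearest codepoint.
        d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        # Move each codepoint to the centroid of its cell.
        for j in range(k):
            pts = x[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def distortion(x, centers):
    """Mean squared distance to the nearest codepoint."""
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()

# Bounded support: the uniform distribution on the unit square.
rng = np.random.default_rng(1)
train = rng.uniform(size=(2000, 2))     # n = 2000 training samples
test = rng.uniform(size=(100000, 2))    # fresh samples for the test loss
centers = lloyd_quantizer(train, k=4)
test_loss = distortion(test, centers)   # Monte Carlo estimate of the
                                        # expected distortion of the
                                        # empirically designed quantizer
```

Repeating this over a grid of training sizes n and averaging over many training draws would let one plot the excess distortion against n and compare it with the O(log n/n) and O(1/n) rates discussed above.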
Recommendations
- Individual Convergence Rates in Empirical Vector Quantizer Design
- Rates of convergence in the source coding theorem, in empirical quantizer design, and in universal lossy source coding
- Learning Theory
- Improved Minimax Bounds on the Test and Training Distortion of Empirically Designed Vector Quantizers
- Nonasymptotic bounds for vector quantization in Hilbert spaces
Cites work
- scientific article (zbMATH DE number 467196)
- scientific article (zbMATH DE number 3795074)
- A Bennett concentration inequality and its application to suprema of empirical processes
- A central limit theorem for k-means clustering
- Concentration inequalities and model selection. École d'Été de Probabilités de Saint-Flour XXXIII -- 2003
- Foundations of quantization for probability distributions
- Improved Minimax Bounds on the Test and Training Distortion of Empirically Designed Vector Quantizers
- Individual Convergence Rates in Empirical Vector Quantizer Design
- Integrals on a moving manifold and geometrical probability
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- On Hölder fields clustering
- On the Performance of Clustering in Hilbert Spaces
- On the amount of statistical side information required for lossy data compression
- Principal points and self-consistent points of symmetric multivariate distributions
- Quantization and clustering with Bregman divergences
- Quantization and the method of \(k\)-means
- Rates of convergence in the source coding theorem, in empirical quantizer design, and in universal lossy source coding
- Risk bounds for statistical learning
- Some limit theorems for empirical processes (with discussion)
- Statistical performance of support vector machines
- Strong consistency of k-means clustering
- The minimax distortion redundancy in empirical quantizer design
Cited in (20)
- Empirical risk minimization for heavy-tailed losses
- Vector quantization and clustering in the presence of censoring
- Nonasymptotic bounds for vector quantization in Hilbert spaces
- scientific article (zbMATH DE number 1695005)
- Introduction to vector quantization and its applications for numerics
- Bandwidth selection in kernel empirical risk minimization via the gradient
- Compressive statistical learning with random feature moments
- Statistical learning guarantees for compressive clustering and compressive mixture modeling
- Fast rates by transferring from auxiliary hypotheses
- New approach to greedy vector quantization
- Individual Convergence Rates in Empirical Vector Quantizer Design
- The high resolution vector quantization problem with Orlicz norm distortion
- Scalar versus vector quantization: worst case analysis
- On strong consistency of kernel \(k\)-means: a Rademacher complexity approach
- Greedy vector quantization
- Also for \(k\)-means: more data does not imply better performance
- High-rate vector quantization for detection
- Dimensionality-dependent generalization bounds for \(k\)-dimensional coding schemes
- Robust \(k\)-means clustering for distributions with two moments
- Multi-scale vector quantization with reconstruction trees