The minimax distortion redundancy in empirical quantizer design
Publication: 4701158
DOI: 10.1109/18.705560 · zbMath: 0964.94015 · OpenAlex: W2137216203 · MaRDI QID: Q4701158
Peter L. Bartlett, Gábor Lugosi, Tamás Linder
Publication date: 21 November 1999
Published in: IEEE Transactions on Information Theory
Full work available at URL: http://hdl.handle.net/10230/743
Keywords: data compression; bounds; design algorithm; vector quantizers; expected distortion redundancy; mean-squared distortion
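The keywords above concern empirical quantizer design: a k-point vector quantizer is fit to training samples by minimizing the empirical mean-squared distortion, and its distortion redundancy is the excess of its expected distortion over that of the optimal quantizer. A minimal illustrative sketch of this setup in one dimension, using a simple Lloyd iteration as a stand-in for empirical distortion minimization (all names and the toy data are my own, not from the paper):

```python
import random

def distortion(points, codebook):
    """Empirical mean-squared distortion: average squared distance
    from each sample to its nearest codepoint."""
    return sum(min((x - c) ** 2 for c in codebook) for x in points) / len(points)

def lloyd(points, k, iters=50):
    """Fit a k-point quantizer to 1-D samples by Lloyd's iteration
    (alternate nearest-codepoint assignment and cell-mean updates)."""
    codebook = random.sample(points, k)
    for _ in range(iters):
        cells = [[] for _ in range(k)]
        for x in points:
            i = min(range(k), key=lambda j: (x - codebook[j]) ** 2)
            cells[i].append(x)
        # Move each codepoint to the mean of its cell; keep it if the cell is empty.
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in enumerate(cells)]
    return codebook

random.seed(0)
# Toy source: a mixture of three tight clusters at 0, 1, and 2.
samples = [random.gauss(m, 0.1) for m in (0.0, 1.0, 2.0) for _ in range(100)]
cb = lloyd(samples, k=3)
# The fitted 3-point quantizer beats a trivial 1-point quantizer at 0.
print(distortion(samples, cb) < distortion(samples, [0.0]))
```

Lloyd's iteration only finds a local minimum of the empirical distortion; the paper's minimax analysis concerns how far even the empirically optimal quantizer's expected distortion must be from the optimum, in the worst case over sources.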
Related Items (17)
Convergence of the $k$-Means Minimization Problem using $\Gamma$-Convergence
A framework for statistical clustering with constant time approximation algorithms for \(K\)-median and \(K\)-means clustering
Fast rates for empirical vector quantization
Statistical learning guarantees for compressive clustering and compressive mixture modeling
A statistical view of clustering performance through the theory of \(U\)-processes
Empirical risk minimization for heavy-tailed losses
On Hölder fields clustering
Consistency of spectral clustering
A \(k\)-points-based distance for robust geometric inference
A notion of stability for \(k\)-means clustering
Dimensionality-Dependent Generalization Bounds for k-Dimensional Coding Schemes
Vector quantization and clustering in the presence of censoring
A quasi-Bayesian perspective to online clustering
Robust \(k\)-means clustering for distributions with two moments
On strong consistency of kernel \(k\)-means: a Rademacher complexity approach
Nonasymptotic bounds for vector quantization in Hilbert spaces
Learning Finite-Dimensional Coding Schemes with Nonlinear Reconstruction Maps