James-Stein shrinkage to improve \(k\)-means cluster analysis
Publication: 2445666
DOI: 10.1016/j.csda.2010.03.018
zbMATH: 1284.62374
OpenAlex: W2051891923
MaRDI QID: Q2445666
Jinxin Gao, David B. Hitchcock
Publication date: 14 April 2014
Published in: Computational Statistics and Data Analysis
Full work available at URL: https://doi.org/10.1016/j.csda.2010.03.018
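As a rough illustration of the idea named in the title (shrinking the data toward a pooled target before clustering), the sketch below is an assumption-laden example, not the authors' exact procedure: it applies a positive-part James-Stein shrinkage of each observation toward the grand mean, assuming a known noise variance, and then runs ordinary k-means (here via scikit-learn) on the shrunken data. The function name `james_stein_shrink`, the grand-mean target, and the known-variance assumption are illustrative choices, not taken from the paper.

```python
# Illustrative sketch only: James-Stein-type shrinkage of each observation
# toward the grand mean, followed by ordinary k-means on the shrunken data.
import numpy as np
from sklearn.cluster import KMeans

def james_stein_shrink(X, sigma2):
    """Shrink each row of X toward the grand mean with the positive-part
    James-Stein factor; sigma2 is an assumed known noise variance."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape  # requires p >= 3 for the classical James-Stein result
    grand_mean = X.mean(axis=0)
    deviations = X - grand_mean
    norms2 = np.sum(deviations ** 2, axis=1)  # squared distance to the target
    factor = np.maximum(0.0, 1.0 - (p - 2) * sigma2 / np.maximum(norms2, 1e-12))
    return grand_mean + factor[:, None] * deviations

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    centers = np.array([[0.0] * 5, [3.0] * 5])
    X = np.vstack([c + rng.normal(scale=1.0, size=(50, 5)) for c in centers])
    X_shrunk = james_stein_shrink(X, sigma2=1.0)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_shrunk)
    print(labels[:10])
```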
Related Items
- A Note on the Comparison of the Stein Estimator and the James-Stein Estimator
- Improving K-means method via shrinkage estimation and LVQ algorithm
- Approximate repeated-measures shrinkage
- Shrinkage estimation for the mean of the inverse Gaussian population
Cites Work
- Algorithm AS 136: A K-Means Clustering Algorithm
- Selection of variables in cluster analysis: An empirical comparison of eight procedures
- A simple method for screening variables before clustering microarray data
- Smoothing dissimilarities to cluster binary data
- Synthesized clustering: A method for amalgamating alternative clustering bases with differential weighting of variables
- Minimax estimators of the mean of a multivariate normal distribution
- Trimmed \(k\)-means: An attempt to robustify quantizers
- Optimal variable weighting for ultrametric and additive trees and \(K\)-means partitioning: Methods and software.
- A variable-selection heuristic for K-means clustering
- Robust Linear Clustering
- Robust Estimation in the Normal Mixture Model Based on Robust Clustering
- Multivariate Clustering Procedures with Variable Metrics
- The effect of pre-smoothing functional data on cluster analysis
- Data Clustering: Theory, Algorithms, and Applications