Assessing the reliability of ordered categorical scales using kappa-type statistics
From MaRDI portal
Publication:5424971
Recommendations
- An alternative interpretation of the linearly weighted kappa coefficients for ordinal data
- Measuring inter-rater agreement: how useful is the kappa statistic?
- A new interpretation of the weighted kappa coefficients
- Chance-corrected measures of reliability and validity in \(K \times K\) tables
- Weighted kappas for \(3 \times 3\) tables
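The linearly weighted kappa coefficients treated in the works above can be illustrated with a minimal sketch. This is not code from any of the cited papers; it implements the standard textbook formula for two raters on a \(k\)-category ordinal scale, with linear weights \(w_{ij} = 1 - |i - j|/(k - 1)\) and \(\kappa_w = (p_o - p_e)/(1 - p_e)\), where \(p_o\) and \(p_e\) are the weighted observed and chance-expected agreement. The function name and argument layout are illustrative choices.

```python
import numpy as np

def weighted_kappa(ratings_a, ratings_b, k):
    """Linearly weighted kappa for two raters on a k-category ordinal scale.

    A sketch of the standard formula (not taken from any cited paper):
    weights w_ij = 1 - |i - j| / (k - 1), and
    kappa_w = (p_o - p_e) / (1 - p_e),
    where p_o and p_e are the weighted observed and chance-expected agreement.
    Ratings are assumed to be integer categories 0, 1, ..., k-1.
    """
    ratings_a = np.asarray(ratings_a)
    ratings_b = np.asarray(ratings_b)
    n = len(ratings_a)
    # Joint distribution of the two raters' categories (as proportions).
    table = np.zeros((k, k))
    for a, b in zip(ratings_a, ratings_b):
        table[a, b] += 1.0 / n
    # Linear agreement weights: 1 on the diagonal, decreasing with distance.
    i, j = np.indices((k, k))
    w = 1.0 - np.abs(i - j) / (k - 1)
    p_o = np.sum(w * table)  # weighted observed agreement
    # Chance-expected joint distribution is the product of the marginals.
    p_e = np.sum(w * np.outer(table.sum(axis=1), table.sum(axis=0)))
    return (p_o - p_e) / (1.0 - p_e)
```

With identical ratings the statistic equals 1; disagreements that are close on the ordinal scale are penalized less than distant ones, which is what distinguishes the weighted from the unweighted coefficient.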
Cites work
- scientific article; zbMATH DE number 49733 (no title available)
- \(2 \times 2\) Kappa Coefficients: Measures of Agreement or Association
- An Application of Hierarchical Kappa-type Statistics in the Assessment of Majority Agreement among Multiple Observers
- Analysis of Nonagreements among Multiple Raters
- Assessing Interrater Agreement from Dependent Data
- Inference procedures for assessing interobserver agreement among multiple raters
- Measuring Pairwise Agreement Among Many Observers. II. Some Improvements and Additions
- Measuring pairwise interobserver agreement when all subjects are judged by the same observers
- Ramifications of a population model for \(\kappa\) as a coefficient of reliability
- Statistical description of interrater variability in ordinal ratings
- The Measurement of Observer Agreement for Categorical Data