Assessing the reliability of ordered categorical scales using kappa-type statistics
DOI: 10.1191/0962280205sm413oa · zbMATH Open: 1122.62385 · OpenAlex: W2100888218 · Wikidata: Q81400717 · Scholia: Q81400717 · MaRDI QID: Q5424971 · FDO: Q5424971
Authors: Chris Roberts, Roseanne McNamee
Publication date: 7 November 2007
Published in: Statistical Methods in Medical Research
Full work available at URL: https://doi.org/10.1191/0962280205sm413oa
Recommendations
- An alternative interpretation of the linearly weighted kappa coefficients for ordinal data
- Measuring inter-rater agreement: how useful is the kappa statistic
- A new interpretation of the weighted kappa coefficients
- Chance-corrected measures of reliability and validity in \(K \times K\) tables
- Weighted kappas for \(3 \times 3\) tables
Mathematics Subject Classification:
- Applications of statistics to biology and medical sciences; meta analysis (62P10)
- Applications of statistics to psychology (62P15)
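Several of the recommended and cited works above and below concern weighted kappa coefficients for ordinal data. As a minimal illustrative sketch only, and not the specific methods of this paper, the two-rater linearly weighted kappa \(\kappa_w = (p_o - p_e)/(1 - p_e)\), with weights \(w_{ij} = 1 - |i - j|/(k - 1)\), can be computed from a \(k \times k\) table of joint rating counts as follows (the function name and example table are hypothetical):

```python
# Illustrative sketch: two-rater linearly weighted kappa from a k x k
# contingency table of joint counts (rows: rater A, columns: rater B).
import numpy as np

def linear_weighted_kappa(table):
    """Weighted kappa with linear agreement weights w_ij = 1 - |i-j|/(k-1)."""
    table = np.asarray(table, dtype=float)
    k = table.shape[0]
    n = table.sum()
    p_obs = table / n                                              # observed joint proportions
    p_exp = np.outer(table.sum(axis=1), table.sum(axis=0)) / n**2  # chance-expected proportions
    i, j = np.indices((k, k))
    w = 1.0 - np.abs(i - j) / (k - 1)                              # linear agreement weights
    p_o = (w * p_obs).sum()                                        # weighted observed agreement
    p_e = (w * p_exp).sum()                                        # weighted chance agreement
    return (p_o - p_e) / (1.0 - p_e)

# Hypothetical example: 100 subjects rated by two observers on a 3-point ordinal scale.
example = [[30, 5, 1],
           [4, 25, 6],
           [2, 7, 20]]
print(linear_weighted_kappa(example))  # 1 = perfect agreement, 0 = chance level
```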
Cites Work
- Measuring pairwise interobserver agreement when all subjects are judged by the same observers
- The Measurement of Observer Agreement for Categorical Data
- An Application of Hierarchical Kappa-type Statistics in the Assessment of Majority Agreement among Multiple Observers
- Ramifications of a population model for \(\kappa\) as a coefficient of reliability
- Statistical description of interrater variability in ordinal ratings
- Title not available
- \(2 \times 2\) Kappa Coefficients: Measures of Agreement or Association
- Inference procedures for assessing interobserver agreement among multiple raters
- Analysis of Nonagreements among Multiple Raters
- Assessing Interrater Agreement from Dependent Data
- Measuring Pairwise Agreement Among Many Observers. II. Some Improvements and Additions