Assessing the reliability of ordered categorical scales using kappa-type statistics
From MaRDI portal
Publication:5424971
DOI: 10.1191/0962280205sm413oa
zbMath: 1122.62385
OpenAlex: W2100888218
Wikidata: Q81400717
Scholia: Q81400717
MaRDI QID: Q5424971
Roseanne McNamee, Chris Roberts
Publication date: 7 November 2007
Published in: Statistical Methods in Medical Research
Full work available at URL: https://doi.org/10.1191/0962280205sm413oa
Mathematics Subject Classification:
- Applications of statistics to biology and medical sciences; meta analysis (62P10)
- Applications of statistics to psychology (62P15)
Cites Work
- Measuring pairwise interobserver agreement when all subjects are judged by the same observers
- Ramifications of a population model for \(\kappa\) as a coefficient of reliability
- Inference Procedures for Assessing Interobserver Agreement among Multiple Raters
- 2 × 2 Kappa Coefficients: Measures of Agreement or Association
- Analysis of Nonagreements among Multiple Raters
- Measuring Pairwise Agreement Among Many Observers. II. Some Improvements and Additions
- The Measurement of Observer Agreement for Categorical Data
- An Application of Hierarchical Kappa-type Statistics in the Assessment of Majority Agreement among Multiple Observers
- Assessing Interrater Agreement from Dependent Data
- Statistical description of interrater variability in ordinal ratings