Statistical description of interrater variability in ordinal ratings
Publication:5424028
DOI: 10.1177/096228020000900505 · zbMath: 1121.62644 · MaRDI QID: Q5424028
Jennifer C. Nelson, Margaret S. Pepe
Publication date: 1 November 2007
Published in: Statistical Methods in Medical Research
Full work available at URL: https://doi.org/10.1177/096228020000900505
62P10: Applications of statistics to biology and medical sciences; meta analysis
Related Items
- Chance-corrected measures of reliability and validity in K × K tables
- Assessing the reliability of ordered categorical scales using kappa-type statistics
- On population‐based measures of agreement for binary classifications
- A formal proof of a paradox associated with Cohen's kappa
- Cohen's kappa is a weighted average
- Visualising concordance
- Cohen's linearly weighted kappa is a weighted average
- On the equivalence of multirater kappas based on 2-agreement and 3-agreement with binary scores
- Cohen's linearly weighted kappa is a weighted average of \(2\times 2\) kappas
- A family of multi-rater kappas that can always be increased and decreased by combining categories
- Equivalences of weighted kappas for multiple raters
Cites Work
- General Observer-Agreement Measures on Individual Subjects and Groups of Subjects
- Measuring pairwise interobserver agreement when all subjects are judged by the same observers
- Ramifications of a population model for \(\kappa\) as a coefficient of reliability
- A Model for Agreement Between Ratings on an Ordinal Scale
- Extension of the Kappa Coefficient
- Measuring Agreement for Multinomial Data
- The Measurement of Observer Agreement for Categorical Data
- An Application of Hierarchical Kappa-type Statistics in the Assessment of Majority Agreement among Multiple Observers
- Beyond kappa: A review of interrater agreement measures
- Assessing Interrater Agreement from Dependent Data