Statistical description of interrater variability in ordinal ratings
Publication:5424028
DOI: 10.1177/096228020000900505 · zbMath: 1121.62644 · OpenAlex: W2000376841 · MaRDI QID: Q5424028
Jennifer C. Nelson, Margaret S. Pepe
Publication date: 1 November 2007
Published in: Statistical Methods in Medical Research
Full work available at URL: https://doi.org/10.1177/096228020000900505
Related Items (13)
- A family of multi-rater kappas that can always be increased and decreased by combining categories
- Equivalences of weighted kappas for multiple raters
- How Robust Are Multirater Interrater Reliability Indices to Changes in Frequency Distribution?
- A formal proof of a paradox associated with Cohen's kappa
- Multi-rater delta: extending the delta nominal measure of agreement between two raters to many raters
- On the equivalence of multirater kappas based on 2-agreement and 3-agreement with binary scores
- Visualising concordance
- Cohen's linearly weighted kappa is a weighted average
- Chance-corrected measures of reliability and validity in \(K\times K\) tables
- Assessing the reliability of ordered categorical scales using kappa-type statistics
- Cohen's kappa is a weighted average
- Cohen's linearly weighted kappa is a weighted average of \(2\times 2\) kappas
- On population‐based measures of agreement for binary classifications
Cites Work
- Unnamed Item
- General Observer-Agreement Measures on Individual Subjects and Groups of Subjects
- Measuring pairwise interobserver agreement when all subjects are judged by the same observers
- Ramifications of a population model for \(\kappa\) as a coefficient of reliability
- A Model for Agreement Between Ratings on an Ordinal Scale
- Extension of the Kappa Coefficient
- Measuring Agreement for Multinomial Data
- The Measurement of Observer Agreement for Categorical Data
- An Application of Hierarchical Kappa-type Statistics in the Assessment of Majority Agreement among Multiple Observers
- Beyond kappa: A review of interrater agreement measures
- Assessing Interrater Agreement from Dependent Data