Equivalences of weighted kappas for multiple raters
DOI: 10.1016/j.stamet.2011.11.001
zbMATH Open: 1365.62216
OpenAlex: W2003055974
MaRDI QID: Q2360892
FDO: Q2360892
Publication date: 29 June 2017
Published in: Statistical Methodology
Full work available at URL: https://doi.org/10.1016/j.stamet.2011.11.001
Keywords: Cohen's kappa; Cohen's weighted kappa; inter-rater reliability; ordinal agreement; Mielke, Berry and Johnston's weighted kappa; \(g\)-agreement; Hubert's kappa; multiple raters
MSC classification: Contingency tables (62H17); Measures of association (correlation, canonical correlation, etc.) (62H20)
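For readers unfamiliar with the statistic this record indexes: Cohen's weighted kappa measures chance-corrected agreement between two raters on an ordinal scale, with partial credit for near-misses. The sketch below is an illustrative implementation (not code from the paper); the table values are made up, and the `power` parameter selects the common linear (`power=1`) or quadratic (`power=2`) weighting scheme.

```python
# Illustrative sketch of Cohen's weighted kappa for a k x k ordinal
# agreement table; the example table is hypothetical, not from the paper.

def weighted_kappa(table, power=1):
    """table: k x k list of counts; power=1 gives linear weights,
    power=2 quadratic weights w_ij = 1 - (|i-j|/(k-1))**power."""
    k = len(table)
    n = sum(sum(row) for row in table)
    p = [[c / n for c in row] for row in table]                  # joint proportions
    row_m = [sum(row) for row in p]                              # rater-1 marginals
    col_m = [sum(p[i][j] for i in range(k)) for j in range(k)]   # rater-2 marginals
    w = [[1 - (abs(i - j) / (k - 1)) ** power for j in range(k)] for i in range(k)]
    po = sum(w[i][j] * p[i][j] for i in range(k) for j in range(k))             # weighted observed agreement
    pe = sum(w[i][j] * row_m[i] * col_m[j] for i in range(k) for j in range(k)) # weighted chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical 3x3 table of ratings by two raters on an ordinal scale:
t = [[20, 5, 0],
     [3, 15, 4],
     [1, 2, 10]]
print(round(weighted_kappa(t, power=1), 3))  # linearly weighted kappa -> 0.68
print(round(weighted_kappa(t, power=2), 3))  # quadratically weighted kappa -> 0.75
```

For this near-tridiagonal table the quadratic weights yield a higher kappa than the linear ones, consistent with the ordering results for tridiagonal agreement tables discussed in the cited works below.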
Cites Work
- Measuring Agreement for Multinomial Data
- The Measurement of Observer Agreement for Categorical Data
- Beyond kappa: A review of interrater agreement measures
- Title not available
- An Application of Hierarchical Kappa-type Statistics in the Assessment of Majority Agreement among Multiple Observers
- Triadic distance models: axiomatization and least squares representation
- A comparison of the multidimensional scaling of triadic and dyadic distances
- \(n\)-way metrics
- \(k\)-adic similarity coefficients for binary (presence/absence) data
- On similarity coefficients for \(2\times2\) tables and correction for chance
- On multi-way metricity, minimality and diagonal planes
- A note on the linearly weighted kappa coefficient for ordinal scales
- On the equivalence of Cohen's kappa and the Hubert-Arabie adjusted Rand index
- Inequalities between kappa and kappa-like statistics for \(k\times k\) tables
- Agreement between two independent groups of raters
- Ramifications of a population model for \(\kappa\) as a coefficient of reliability
- Dispersion-weighted kappa: an integrative framework for metric and nominal scale agreement coefficients
- Cohen's linearly weighted kappa is a weighted average of \(2\times 2\) kappas
- A family of multi-rater kappas that can always be increased and decreased by combining categories
- Inequalities between multi-rater kappas
- A formal proof of a paradox associated with Cohen's kappa
- Some paradoxical results for the quadratically weighted kappa
- Title not available
- Statistical description of interrater variability in ordinal ratings
- Cohen's kappa can always be increased and decreased by combining categories
- Weighted kappa is higher than Cohen's kappa for tridiagonal agreement tables
- Cohen's kappa is a weighted average
Cited In (6)
- Analysis of the weighted kappa and its maximum with Markov moves
- Corrected Zegers-ten Berge coefficients are special cases of Cohen's weighted kappa
- The dependence of chance-corrected weighted agreement coefficients on the power parameter of the weighting scheme: analysis and measurement
- A comparison of reliability coefficients for ordinal rating scales
- On the equivalence of multirater kappas based on 2-agreement and 3-agreement with binary scores
- Cohen's weighted kappa with additive weights