Agreement between two independent groups of raters
From MaRDI portal
Publication:1036144
DOI: 10.1007/s11336-009-9116-1
zbMath: 1272.62135
Wikidata: Q63435722 Scholia: Q63435722
MaRDI QID: Q1036144
Sophie Vanbelle, Adelin Albert
Publication date: 5 November 2009
Published in: Psychometrika
Full work available at URL: http://orbi.ulg.ac.be/handle/2268/40108
62P15: Applications of statistics to psychology
Related Items
- Weighted kappa as a function of unweighted kappas
- Testing for concordance between several criteria
- Methods of assessing categorical agreement between correlated screening tests in clinical studies
- Corrected Zegers-ten Berge coefficients are special cases of Cohen's weighted kappa
- Some paradoxical results for the quadratically weighted kappa
- Bayesian testing of agreement criteria under order constraints
- Cohen's kappa is a weighted average
- A Kraemer-type rescaling that transforms the odds ratio into the weighted kappa coefficient
- Agreement between two independent groups of raters
- Cohen's linearly weighted kappa is a weighted average
- On the equivalence of multirater kappas based on 2-agreement and 3-agreement with binary scores
- Cohen's linearly weighted kappa is a weighted average of \(2\times 2\) kappas
- A family of multi-rater kappas that can always be increased and decreased by combining categories
- The effect of combining categories on Bennett, Alpert and Goldstein's \(S\)
- Equivalences of weighted kappas for multiple raters
- Conditional inequalities between Cohen's kappa and weighted kappas
Cites Work
- Measuring pairwise interobserver agreement when all subjects are judged by the same observers
- Agreement between two independent groups of raters
- Intergroup diversity and concordance for ranking data: An approach via metrics for permutations
- Ramifications of a population model for \(\kappa\) as a coefficient of reliability
- A rank test for two group concordance
- Weighted Least-Squares Approach for Comparing Correlated Kappa
- Testing for agreement between two groups of judges
- Modeling kappa for measuring dependent categorical agreement data
- A Simple Method for Estimating a Regression Model for κ Between a Pair of Raters