Agreement between two independent groups of raters
DOI: 10.1007/s11336-009-9116-1
zbMath: 1272.62135
OpenAlex: W2112844397
Wikidata: Q63435722
Scholia: Q63435722
MaRDI QID: Q1036144
Authors: Adelin Albert, Sophie Vanbelle
Publication date: 5 November 2009
Published in: Psychometrika
Full work available at URL: http://orbi.ulg.ac.be/handle/2268/40108
Related Items (17)
- A family of multi-rater kappas that can always be increased and decreased by combining categories
- The effect of combining categories on Bennett, Alpert and Goldstein's \(S\)
- Equivalences of weighted kappas for multiple raters
- Conditional inequalities between Cohen's kappa and weighted kappas
- Corrected Zegers-ten Berge coefficients are special cases of Cohen's weighted kappa
- Association and causation: attributes and effects of judges in equal employment opportunity commission litigation outcomes
- Some paradoxical results for the quadratically weighted kappa
- On the equivalence of multirater kappas based on 2-agreement and 3-agreement with binary scores
- Bayesian testing of agreement criteria under order constraints
- Cohen's linearly weighted kappa is a weighted average
- Cohen's kappa is a weighted average
- A Kraemer-type rescaling that transforms the odds ratio into the weighted kappa coefficient
- Cohen's linearly weighted kappa is a weighted average of \(2\times 2\) kappas
- Testing for concordance between several criteria
- Weighted kappa as a function of unweighted kappas
- Agreement between two independent groups of raters
- Methods of assessing categorical agreement between correlated screening tests in clinical studies
Cites Work
- Measuring pairwise interobserver agreement when all subjects are judged by the same observers
- Agreement between two independent groups of raters
- Intergroup diversity and concordance for ranking data: An approach via metrics for permutations
- Ramifications of a population model for \(\kappa\) as a coefficient of reliability
- A rank test for two group concordance
- Weighted Least-Squares Approach for Comparing Correlated Kappa
- Testing for agreement between two groups of judges
- Modeling kappa for measuring dependent categorical agreement data
- A Simple Method for Estimating a Regression Model for κ Between a Pair of Raters