Agreement

Agreement for dichotomous outcomes

This document describes the use of the Agree package for the data example used in the paper on specific agreement for dichotomous outcomes in the situation of more than two raters.
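To illustrate the statistics involved (this is an illustrative Python sketch, not the Agree package's own R API), specific agreement for dichotomous outcomes with more than two raters can be computed by pooling the 2x2 tables of all rater pairs: overall agreement is the proportion of concordant pairs, and positive/negative specific agreement weight the concordant cells against the discordant ones.

```python
from itertools import combinations

def dichotomous_agreement(ratings):
    """Pool the 2x2 tables of all unordered rater pairs per subject and
    return (overall, positive, negative) agreement.

    ratings: list of per-subject rating lists, coded 0/1."""
    n11 = n00 = ndis = 0
    for subject in ratings:
        for a, b in combinations(subject, 2):
            if a == b == 1:
                n11 += 1      # both raters in the pair score positive
            elif a == b == 0:
                n00 += 1      # both raters in the pair score negative
            else:
                ndis += 1     # the pair disagrees
    total = n11 + n00 + ndis
    overall = (n11 + n00) / total
    positive = 2 * n11 / (2 * n11 + ndis)   # specific agreement, positive scores
    negative = 2 * n00 / (2 * n00 + ndis)   # specific agreement, negative scores
    return overall, positive, negative

# Hypothetical data: three raters score four subjects on a dichotomous outcome.
ratings = [[1, 1, 1], [1, 1, 0], [0, 0, 0], [1, 1, 0]]
overall, positive, negative = dichotomous_agreement(ratings)
```

For two raters this reduces to the familiar formulas: overall agreement (a + d) / n and positive agreement 2a / (2a + b + c) from a single 2x2 table.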

Agreement for polytomous outcomes

This document describes the use of the Agree package for two data examples that are used in the paper on specific agreement on polytomous outcomes in the situation of more than two raters (de Vet, Mullender, and Eekhout 2018).
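For polytomous outcomes, specific agreement is defined per category. A minimal Python sketch (again illustrative, not the Agree package's R API): pool the square agreement tables of all ordered rater pairs, then for a given category take twice the diagonal cell over the sum of its row and column totals.

```python
from collections import Counter
from itertools import permutations

def specific_agreement(ratings, category):
    """Specific agreement for one category with more than two raters:
    pool the agreement tables of all ordered rater pairs per subject,
    then compute 2 * diagonal / (row total + column total)."""
    table = Counter()
    for subject in ratings:
        for a, b in permutations(subject, 2):   # ordered pairs -> symmetric table
            table[(a, b)] += 1
    diag = table[(category, category)]
    row = sum(v for (a, _b), v in table.items() if a == category)
    col = sum(v for (_a, b), v in table.items() if b == category)
    return 2 * diag / (row + col)

# Hypothetical data: three raters assign a category (here 1 or 2) per subject.
ratings = [[1, 1, 2], [2, 2, 2]]
sa1 = specific_agreement(ratings, 1)
sa2 = specific_agreement(ratings, 2)
```

For ordinal categories this exact-agreement statistic can be relaxed into a weighted version that also credits ratings one category apart, as described in the paper.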

Rater agreement & reliability

In studies where more than one rater gives a judgement on a certain characteristic, the agreement between those judgements is of interest. Historically, a kappa statistic has mostly been used to assess agreement. However, kappa is a reliability measure rather than an agreement measure; the percentage of absolute agreement is more informative. The reliability of ratings can also be obtained via different methods: the choice between the ICC oneway, ICC consistency, and ICC agreement depends on the study design and goals.
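The distinction between reliability and agreement can be made concrete with Cohen's kappa for two raters (a generic sketch with hypothetical data, independent of the Agree package): when the prevalence of one category is high, two raters can agree on most subjects while kappa is near zero or even negative, because kappa corrects the observed agreement for chance-expected agreement.

```python
def cohen_kappa(r1, r2):
    """Observed agreement and Cohen's kappa for two raters (0/1 codes)."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n   # observed (absolute) agreement
    p1, p2 = sum(r1) / n, sum(r2) / n              # positive prevalence per rater
    pe = p1 * p2 + (1 - p1) * (1 - p2)             # chance-expected agreement
    return po, (po - pe) / (1 - pe)

# Hypothetical skewed data: both raters score 9 of 10 subjects positive,
# agreeing on 8 of them.
rater1 = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]
rater2 = [1, 1, 1, 1, 1, 1, 1, 1, 0, 1]
po, kappa = cohen_kappa(rater1, rater2)
```

Here the observed agreement is 0.80, yet kappa is negative (about -0.11), which is why the percentage of absolute agreement is the more informative quantity when agreement itself is the question.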