Rater agreement & reliability
In studies where more than one rater judges a certain characteristic, the agreement between their judgements is of interest. Historically, a kappa statistic has most often been used to assess agreement. However, kappa is a reliability measure rather than an agreement measure; the percentage of absolute agreement is more informative. Besides the overall agreement, the specific agreement per score category can also be obtained.
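As a minimal sketch of these two quantities (in base R, with made-up ratings, not using the Agree package API), overall agreement is the proportion of subjects on which two raters give the same score, and specific agreement for the positive category follows the usual 2a / (2a + b + c) formula from the 2x2 table:

```r
# Hypothetical dichotomous scores from two raters on the same ten subjects
rater1 <- c(1, 1, 0, 1, 0, 1, 0, 0, 1, 1)
rater2 <- c(1, 0, 0, 1, 0, 1, 1, 0, 1, 1)

# Percentage of absolute agreement: proportion of identical judgements
overall_agreement <- mean(rater1 == rater2)

# Specific agreement for the positive category: 2a / (2a + b + c),
# where a = both raters score 1, b and c = the discordant cells
tab <- table(rater1, rater2)
a <- tab["1", "1"]
b <- tab["1", "0"]
d <- tab["0", "1"]
positive_agreement <- 2 * a / (2 * a + b + d)

overall_agreement
positive_agreement
```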
The Agree package, available via GitHub, implements functions to obtain the overall and specific agreement between multiple raters. This works for dichotomous variables as well as for (ordinal) polytomous items. The sections “Dichotomous agreement” and “Polytomous agreement” describe and demonstrate the functions in the Agree package.
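Since the package is distributed via GitHub rather than CRAN, it can be installed with the remotes package; the repository path below is an assumption and should be checked against the package's actual GitHub location:

```r
# install.packages("remotes")  # if not already installed
# Repository path assumed; verify before installing
remotes::install_github("iriseekhout/Agree")
library(Agree)
```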