Rater agreement & reliability


In studies where more than one rater judges a certain characteristic, the agreement between those judgements is of interest. Historically, a kappa statistic has most often been used to assess agreement. However, the kappa statistic is a reliability measure rather than an agreement measure. It is more informative to use the percentage of absolute agreement instead. Besides the overall agreement, the specific agreement can also be obtained.
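To make the distinction concrete, the sketch below computes the overall (absolute) agreement and the specific (positive and negative) agreement for two raters on a dichotomous item in base R. It does not use the Agree package itself, and the example data are made up for illustration only.

```r
# Made-up dichotomous ratings from two raters on 10 subjects
rater1 <- c(1, 1, 0, 1, 0, 1, 1, 0, 1, 1)
rater2 <- c(1, 0, 0, 1, 0, 1, 1, 1, 1, 1)

tab <- table(rater1, rater2)          # 2 x 2 cross-table of the two raters

# Overall (absolute) agreement: proportion of subjects rated identically
overall_agreement <- sum(diag(tab)) / sum(tab)

# Specific agreement: agreement conditional on the category of interest
a <- tab["1", "1"]; d <- tab["0", "0"]
b <- tab["1", "0"]; c <- tab["0", "1"]
positive_agreement <- 2 * a / (2 * a + b + c)
negative_agreement <- 2 * d / (2 * d + b + c)

overall_agreement    # 0.8 for these made-up data
positive_agreement   # about 0.86
negative_agreement   # about 0.67
```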

The Agree package, available via GitHub, implements functions to obtain the agreement and specific agreement between multiple raters. This can be done for dichotomous variables, but also for (ordinal) polytomous items. The sections “Dichotomous agreement” and “Polytomous agreement” demonstrate the functions in the Agree package for these two cases.
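Since the package is distributed via GitHub rather than CRAN, it can be installed with the remotes package. The repository path "iriseekhout/Agree" below is an assumption based on the author's GitHub account and may differ.

```r
# install.packages("remotes")   # if remotes is not yet installed
remotes::install_github("iriseekhout/Agree")  # assumed repository path
library(Agree)
```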

Iris Eekhout, PhD
Statistician

Iris works as a methodologist and statistical analyst on a variety of projects related to child health, e.g. measuring child development (D-score) and adaptive screening for psycho-social problems (psycat).
