Computes the intraclass correlation coefficient (ICC) for agreement for rater reliability, using the variance estimates from a linear mixed model. The function returns the ICC, the standard error of measurement (SEM), and a confidence interval for the ICC.

icc_agreement(
  data,
  cols = colnames(data),
  alpha = 0.05,
  CI_estimator = "exact"
)

Arguments

data

data.frame with a column for each observer/rater and a row per rated subject.

cols

character vector with the column names to be used as observers. Default is `cols = colnames(data)`.

alpha

significance level for the confidence interval, so that a (1 - alpha) confidence interval is computed. Default is `alpha = 0.05`.

CI_estimator

character "exact" or "approx" to switch between using an exact F-test or an approximated estimate. The latter accounts for the three independent variance components. Default is `CI_estimator = "exact"`.

Value

A list containing the ICC, the SEM, and the confidence interval for the ICC.

Details

The ICC for agreement is the variance between subjects divided by the sum of the subject variance, the rater variance, and the residual variance. The ICC for agreement generalizes to other raters within the population (Shrout & Fleiss, 1979). The `varcomp()` function is used to compute the variances. The variance components are estimated from a `lmer` model with random effects for the subjects as well as for the raters. The SEM is the square root of the sum of the rater variance and the residual variance. With `CI_estimator = "approx"`, the confidence interval is approximated to account for the three independent variance components, as defined by Satterthwaite (1946) and Fleiss and Shrout (1978).
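
A sketch of how the point estimates described above could be computed directly with `lme4` (this mirrors the description, not necessarily the package's internal code; the `ratings` data frame is hypothetical):

library(lme4)

set.seed(1)
ratings <- data.frame(rater1 = rnorm(10, mean = 5),
                      rater2 = rnorm(10, mean = 5),
                      rater3 = rnorm(10, mean = 5))

# reshape to long format: one row per subject-rater combination
long <- data.frame(subject = factor(rep(seq_len(nrow(ratings)), times = ncol(ratings))),
                   rater   = factor(rep(names(ratings), each = nrow(ratings))),
                   score   = unlist(ratings, use.names = FALSE))

# linear mixed model with random effects for subjects and raters
fit <- lmer(score ~ 1 + (1 | subject) + (1 | rater), data = long)

vc <- as.data.frame(VarCorr(fit))
var_subject  <- vc$vcov[vc$grp == "subject"]
var_rater    <- vc$vcov[vc$grp == "rater"]
var_residual <- vc$vcov[vc$grp == "Residual"]

# ICC for agreement: subject variance over the total of the three components
icc_a <- var_subject / (var_subject + var_rater + var_residual)

# SEM: square root of rater variance plus residual variance
sem <- sqrt(var_rater + var_residual)

The exact F-test interval and the Satterthwaite/Fleiss-Shrout approximation for the confidence limits are not reproduced in this sketch.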

References

Fleiss, J. L., & Shrout, P. E. Approximate interval estimation for a certain intraclass correlation coefficient. Psychometrika, 1978, 43, 259-262. Satterthwaite, F. E. An approximate distribution of estimates of variance components. Biometrics, 1946, 2, 110-114. Shrout, P.E. & Fleiss, J.L. (1979) Intraclass Correlations: Uses in Assessing Rater Reliability. Psychological Bulletin, 87(2), 420-428.

Author

Iris Eekhout