Missing data occur in many empirical studies, and handling them correctly is vital for valid study results. The best way to deal with missing data depends on why the data are missing and on the analysis that is planned. In this project, a guide was developed to find the best way to deal with missing data in multi-item questionnaires. The website www.missingdata.nl also provides extensive information about missing data and related methodology.
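As a minimal illustration of one common strategy for multi-item questionnaires, the sketch below implements person-mean imputation: a missing item is replaced by the mean of the respondent's own observed items on the same scale. The function name, the example data, and the 50%-observed threshold are illustrative choices, not a recommendation from the guide itself; which method is appropriate depends on the missingness mechanism and the planned analysis.

```python
# Sketch: person-mean imputation for a multi-item questionnaire scale.
# Each row holds one respondent's item scores; None marks a missing item.

def person_mean_impute(responses, min_observed=0.5):
    """Replace missing items with the respondent's own mean item score.

    A row is left unchanged (still containing None) when fewer than
    `min_observed` of its items were answered, since imputing from very
    few observed items is unreliable.
    """
    imputed = []
    for row in responses:
        observed = [x for x in row if x is not None]
        if len(observed) / len(row) < min_observed:
            imputed.append(list(row))  # too sparse: do not impute
            continue
        mean = sum(observed) / len(observed)
        imputed.append([mean if x is None else x for x in row])
    return imputed

data = [
    [4, 5, None, 4],        # one missing item: imputed with (4+5+4)/3
    [None, None, None, 2],  # three of four items missing: left as-is
]
print(person_mean_impute(data))
```

Person-mean imputation is only defensible when the items measure a single construct and few items are missing; otherwise, methods such as multiple imputation are generally preferred.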
Children learn to walk, speak, and think at an astonishing pace. The D-score captures this development process in a single summary number, enabling comparisons of child development across populations, groups, and individuals.
In studies where more than one rater judges a certain characteristic, the agreement between those judgements is of interest. Historically, a kappa statistic has mostly been used to assess agreement. However, kappa is a reliability measure rather than an agreement measure; the percentage of absolute agreement is more informative. The reliability of ratings can also be obtained via different methods: the choice between the one-way ICC, the ICC for consistency, and the ICC for agreement depends on the study design and goals.
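The distinction between agreement and kappa can be made concrete with a small worked example. The sketch below (plain Python, with illustrative data I made up) computes the percentage of absolute agreement and Cohen's kappa for two raters; with a very skewed category distribution, agreement can be high while kappa is low or even negative, because expected chance agreement is also high.

```python
# Percentage of absolute agreement vs. Cohen's kappa for two raters.

def percent_agreement(r1, r2):
    """Proportion of subjects on which the two raters give the same judgement."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(r1)
    p_o = percent_agreement(r1, r2)
    categories = set(r1) | set(r2)
    # Expected chance agreement: product of the raters' marginal proportions.
    p_e = sum((r1.count(c) / n) * (r2.count(c) / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical data: both raters say "present" for 9 of 10 subjects,
# but their single "absent" judgements fall on different subjects.
rater1 = ["present"] * 9 + ["absent"]
rater2 = ["present"] * 8 + ["absent", "present"]

print(percent_agreement(rater1, rater2))  # 0.8: 80% absolute agreement
print(cohens_kappa(rater1, rater2))       # negative, despite high agreement
```

Here the raters agree on 80% of subjects, yet kappa is below zero because chance agreement under the skewed marginals is 0.82. This illustrates why reporting the percentage of absolute agreement alongside (or instead of) kappa is more informative.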