points of struggle in all five reports, as indicated by the respondents' interpretations. Not all
respondents possess the necessary basic knowledge to interpret the reports correctly. For
example, the meaning of the level indicators A–E and I–V was not known by all respondents.
In addition, not all respondents knew the position of the national average within the different
levels. The results suggest that approximately one-quarter of the respondents knew what the
score interval means. Furthermore, it appeared to be unclear to respondents why norms for
groups deviate from the norms for individual pupils.
With regard to the group reports, respondents struggled mostly with interpreting
ability growth as opposed to ability and with signalling negative growth in ability; ability
growth was often misinterpreted as ability.
With respect to the reports at the pupil level, respondents mostly struggled with
interpreting ability growth as opposed to ability, understanding when a level correction has
taken place, and judging growth using ability. When judging growth, strikingly few
respondents used the score interval.
Next, it was explored whether there were differences between the various user groups
with respect to the particular reports. Figure 5.1 shows the average proportion correct (P′-
value) for each item belonging to a certain report, plotted for each user group.
The x-axis shows the item numbers as they appeared in the questionnaire. For clear
communication of the results, the items have been ordered by the report to which they
belong. The pupil report and the alternative pupil report comprise the pupil level; the group
report, ability growth, and trend analysis comprise the group level. Items 2, 3, 5, 6, and 14
measure knowledge; the other items measure understanding