Dr. Eman Zaghloul Qasem

Assistant Professor of Educational Technology, Faculty of Education, Zulfi

book B65

report communicates (Ryan, 2006). Hattie (2009) argues that users have increasingly been held responsible for a correct interpretation of test results. He advocates that test developers pay more attention to the design of their reports. According to Hattie, this is necessary to ensure that users interpret the test results as the test developer intended and then draw adequate inferences and undertake responsible actions.
5.3 Method
5.3.1 Exploration and Scope Refinement
In order to explore the problem, a group of experts was consulted. These experts
comprised educational advisers, trainers, and researchers who often come into contact with
users of the Computer Program LOVS. The experts were asked which (aspects of the) reports
caused users to struggle. The experts were approached through e-mail, and their responses
were discussed in face-to-face meetings and/or in telephone conversations. Furthermore, a
researcher attended two training sessions with educational professionals in order to gain
insight into the nature of the problem. From this exploration, five reports generated by the Computer Program LOVS were selected for the study: the pupil report, the group overview (one test-taking moment), the ability growth report, the trend analysis, and the alternative pupil report. These five reports were chosen based on the frequency with which they are used within schools and the degree to which, in the experts' experience, they are interpreted incorrectly.
In this study, data about user interpretations were collected using multiple methods.
Focus groups were formed at two different schools. These groups consisted of teachers,
internal support teachers, school principals, and other school personnel. Furthermore, the
interpretation ability of a group of users was measured using a questionnaire. A multi-method
design was chosen for multiple reasons. First, the data from the focus group meetings were used to validate the plausibility of the answer options in the questionnaire. Thus, the qualitative data helped to develop the quantitative instrument. Furthermore, the qualitative data from the focus group meetings could provide in-depth insight into the questionnaire results: why certain aspects of the reports may be interpreted incorrectly and what solutions could address these misinterpretations.
After the second of two rounds of consultation with the experts, the skills underlying the interpretation of the score reports were identified and mapped into a test specification grid. With regard to knowledge, the following aspects were distinguished:
- knowing the meaning of the level indicators (A–E and I–V);
- knowing the position of the national average within the different levels;
- knowing the meaning of the score interval around the ability; and
- knowing that the norms for groups differ from those for individual pupils with regard to the level indicators.
With respect to interpretation, the following aspects were distinguished:
- judging growth based on ability and signalling negative growth;
- understanding to which level a trend refers;