
The results of the questionnaire suggest that, with regard to the reports at the group level,
respondents mostly struggled with interpreting growth in ability (as opposed to ability itself)
and with signalling negative ability growth. Growth in ability was often misread as the
ability level. With respect to the reports at the pupil level, respondents mostly struggled with
the interpretation of growth in ability as opposed to ability, and with understanding when a
level correction had taken place. When interpreting growth in ability, strikingly few
people used the score interval. The results of the focus group meetings are fairly consistent
with the results found in the questionnaire with respect to the stumbling blocks in the
interpretation of the reports. The results suggest that a number of aspects within the reports
caused confusion or faulty interpretations. For example, the use of symbols and colours was
not always clear and unambiguous. It also appeared that the labelling of the axes in the
graphs was not always complete. The concept of a score interval appeared to be difficult for
focus group participants to understand. Not surprisingly, the score interval was not used in
practice by focus group participants. Previous research (Hambleton & Slater, 1997; Zenisky
& Hambleton, 2012) on the interpretation of score reports already indicated that statistical
concepts related to confidence levels are often ignored by users of the reports because users
do not find them meaningful. There appears to be a conflict between the standards for score
reports (AERA et al., 1999), which prescribe that confidence levels should be reported, and
the data literacy of those who use these reports. One could question the usefulness of
reporting confidence levels when they are neither understood nor used according to the test
developer's intention.
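To make concrete what the score interval is meant to convey, the following is a minimal
sketch assuming the conventional formulation of a confidence band around an estimated
ability score; the symbols \hat{\theta} and SE(\hat{\theta}), the 90% level, and the numbers
are illustrative assumptions rather than details taken from the LOVS reports:
\[
\hat{\theta} \pm z_{1-\alpha/2}\, SE(\hat{\theta}),
\qquad \text{e.g.}\quad 75 \pm 1.645 \times 4 = [68.4,\ 81.6]\ \text{at the 90\% level.}
\]
Read this way, a rise in the point estimate that stays within the interval of the previous
measurement should not be reported as genuine growth, which is precisely the kind of check
respondents tended to skip.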
In this study, the possible influences of various variables were explored. Whether or
not a respondent had received training in the use of the Computer Program LOVS appeared
not to be related to their interpretation ability. However, we did find a substantial and
significant difference between the three groups in the proportion that had received such
training. Strikingly, only 5% of the teachers had received
training. This is alarming given that the entire school team is expected to evaluate the
quality of education on the basis of test results (Ministry of Education, Culture, and Science,
2010), and given the limited attention currently paid to assessment literacy in pre-service
teacher programmes. Nor was a relationship found between the number of years of experience
using the Computer Program LOVS and interpretation ability. However, in order to make
substantiated claims about the effects of training and experience, additional research is needed.
In this study, for example, neither the type of training respondents had followed nor the
duration or intensity of that training was measured. However, various researchers have emphasised
the need for good support with regard to the use of data feedback in schools (Schildkamp &
Teddlie, 2008; Schildkamp, Van Petegem & Vanhoof, 2007; Verhaeghe et al., 2010; Visscher
& Coe, 2003; Visscher & Luyten, 2009; Zupanc et al., 2009). It would be worthwhile to
study the effects of professional development on the interpretation and use of data feedback.
For example, recent research (Staman, Visscher, & Luyten, 2013) suggests that teachers can
benefit greatly from an intensive schoolwide training programme in DDDM, focusing on,
amongst other things, the interpretation of test results.
