
6.1.4 The Need for Professional Development in DDDM
In the last few years, it has become increasingly clear that implementing DDDM is a
challenging undertaking. In response, various professional development initiatives have been
established that aim to make school staff proficient in using data from pupil-monitoring
systems (e.g., Staman, Visscher, & Luyten, 2013). The idea behind such initiatives is that a
certain degree of "assessment literacy" is needed for interpreting test results (Earl & Fullan,
2003; Vanhoof, Verhaeghe, Verhaeghe, Valcke, & Van Petegem, 2011; Verhaeghe, 2011).
"Assessment literacy refers to the capacity of teachers – alone and together – (a) to examine
and accurately understand student work and performance data, and correspondingly, (b) to
develop classroom and school plans to alter conditions necessary to achieve better results"
(Fullan & Watson, 2000, p. 457). From this definition, it can be reasoned that the lack of an
accurate understanding of data about student learning directly affects a person's subsequent
actions. Thus, a correct interpretation of data about student learning is a necessary
precondition for successfully implementing DDDM.
Professional development on the interpretation and use of pupil-monitoring system
data seems necessary in the Dutch context, especially given the lack of formal requirements in
past and current pre-service teacher education programmes (Van der Kleij & Eggen, 2013).
Results from recent research (Staman et al., 2013), for example, show that school staff can
benefit from a long-term, intensive, school-wide training programme. Nevertheless, it appears
that even after such a rigorous programme, users' interpretations are not free of errors. At the
same time, clear score reports can contribute to the accuracy of users' interpretations.
Test developers are therefore, to a certain extent, responsible for providing score reports that
support users in making correct interpretations (Hattie, 2009; Ryan, 2006; Zenisky &
Hambleton, 2012).
6.1.5 Aims of the Present Study
This study aims to set an example for design research (McKenney & Reeves, 2012) in
the area of score reporting. Research has been conducted in the context of the reports
generated by the Computer Program LOVS. This study focused on five reports, two at the
pupil level, and three at the group level. The reports at the pupil level are to be used to
monitor individual progress and to signal abnormalities. However, some abnormalities, for
example stagnation in the learning curve, are not explained by the reports and will have to be
examined in the analysis phase. For this purpose, additional reports that allow for specific
error analyses are available. The three reports at the group level are intended to be used for
internal evaluation purposes at the level of the class and/or the school.
This study investigated how the reports from the Computer Program LOVS can be
redesigned to support users in interpreting pupils' test results. The aims of this study were
twofold, as is typical for design research (McKenney & Reeves, 2012): first, to solve a
problem in practice, namely that users, particularly teachers, seem to experience difficulties in
interpreting the reports generated by the Computer Program LOVS; and second, to contribute
to the theoretical understanding of score reporting.
