
not specific features of the score reports cause educators to struggle, in which case,
appropriate adaptations can be made. Given that the contents of the score reports can be
directly manipulated by the test developers, it seemed appropriate to conduct an empirical
study to investigate whether the score reports from the Computer Program LOVS
could be improved.
The purpose of this study is to (a) investigate the extent to which the reports from the
Computer Program LOVS are correctly interpreted by teachers, internal support teachers, and
school principals and (b) identify stumbling blocks for teachers, internal support teachers, and
principals when interpreting reports from the Computer Program LOVS. Furthermore, the
study aims to explore the possible influence of several variables that the literature suggests
are relevant (e.g., Earl & Fullan, 2003; Meijer et al., 2011; Vanhoof et al., 2009). These
variables are training in the use of the Computer Program LOVS (Ward et al., 2003), the
number of years of experience using the Computer Program LOVS (Meijer et al., 2011), the
degree to which the information from the Computer Program LOVS is perceived as useful
(Vanhoof et al., 2009; Verhaeghe et al., 2011; Zupanc et al., 2009), and users' estimates of
their own ability to use quantitative test data (Earl & Fullan, 2003; Williams & Coles, 2007).
5.2. Theoretical Framework
5.2.1 The Use of Data Feedback
The test results from pupil-monitoring systems provide users with feedback about
pupil performance. This is called data feedback. This feedback is intended to close the gap
between a pupil's current performance and the intended learning outcomes (Hattie &
Timperley, 2007). Various studies suggest that the actual use of feedback about pupil
performance within the school is limited. A possible explanation for the lack of feedback use
can be found in the characteristics of the SPFS (Earl & Fullan, 2003; Schildkamp & Kuiper,
2010; Schildkamp & Visscher, 2009; Verhaeghe, Vanhoof, Valcke, & Van Petegem, 2010;
Visscher & Coe, 2002). More specifically, in the Dutch context, it can be concluded that the
use of data feedback by teachers in primary education is limited (Ledoux et al., 2009; Meijer
et al., 2011), although research has suggested that Dutch schools possess sufficient data
feedback (Ministry of Education, Culture, and Science, 2010). Visscher (2002) has identified
several factors that influence the use of data feedback within schools: the design process and
characteristics of the SPFS, characteristics of the feedback report, and the implementation
process and organisational features of the school. This study focuses on the characteristics of
the feedback report.
With regard to the use of data feedback from pupil-monitoring systems, various types
of uses can be distinguished. A distinction can be made between the instrumental use and the
conceptual use of the test results (Rossi, Freeman, & Lipsey, 1999; Weiss, 1998).
Instrumental use comprises the direct use of findings to take action where needed.
The major form of instrumental use of data feedback from pupil-monitoring systems is
the instructional use. The conceptual use encompasses the impact test results can have on the
way educators think about certain issues. Visscher (2001) distinguishes an additional type of
data use, namely the strategic use of data feedback. This type of use includes all sorts of
unintended uses of data feedback for strategic purposes, such as teaching to the test or letting
