
LOVS is a pupil-monitoring system that uses advanced psychometric techniques, which yield reliable and valid information about pupil ability. However, whenever users draw incorrect inferences, the validity of the test scores is negatively affected. Being able to correctly interpret pupils' test results is a precondition for the optimal use of the
Computer Program LOVS. Besides the above-mentioned lack of knowledge amongst school
staff, it has been suggested that many teachers are uncertain about their own ability to use data
for quality improvement (e.g., Earl & Fullan, 2003; Williams & Coles, 2007). On the one
hand, there is much to be gained through professional development with regard to the interpretation and use of data feedback. For example, a study by Ward, Hattie, and Brown (2003) showed that professional development increased the correct interpretation of pupil-monitoring system reports, increased communication about test results with colleagues, enhanced user confidence, and increased use of the various reports.
On the other hand, clear score reports can support users in making correct interpretations
(Hattie, 2009; Ryan, 2006; Zenisky & Hambleton, 2012). For example, Hattie and Brown
(2008) evaluated whether users of asTTle reports could correctly interpret these reports. The
initial percentage of correct interpretations was only 60%, which was not considered satisfactory. The researchers subsequently adjusted features of the reports, whereupon the percentage of correct interpretations increased to over 90%.
In the literature, remarkably little attention is paid to the way users (mis)interpret the
score reports. For example, The Standards for Educational and Psychological Testing
(American Educational Research Association [AERA], American Psychological Association
[APA], & National Council on Measurement in Education [NCME], 1999) contain only a few
general standards about score reporting. The possible incorrect or incomplete interpretation of
assessment results is an underexposed but important aspect of formative testing (Bennett,
2011). There is scarce research into the characteristics of feedback reports and the
effectiveness of various methods used for communicating feedback to users (Verhaeghe,
2011). This is problematic, since feedback reports often contain complex graphical
representations and statistical concepts, while users often do not possess statistical skills (Earl
& Fullan, 2003; Kerr et al., 2006; Saunders, 2000; Williams & Coles, 2007).
Reports can serve two purposes (Ryan, 2006). First, they can be instructive by
informing the target group about pupils' learning progress and the effectiveness of instruction.
Second, reports can be used to ensure accountability. This study focuses on their instructive
purposes. LOVS primarily aims at informing schools about their own functioning. Recent
research, however, suggests that the instructive use of LOVS reports is limited, and teachers
struggle with interpreting these reports (Meijer et al., 2011). Most notably, various recent
studies suggest that members of the school board (e.g., school principals) have a more
positive attitude towards SPFS than teachers (Vanhoof, Van Petegem, & De Maeyer, 2009;
Verhaeghe, Vanhoof, Valcke, & Van Petegem, 2011; Zupanc et al., 2009). Zenisky and
Hambleton (2012) have recently emphasised that although the body of literature on effective
score reporting is growing, investigations of actual understanding amongst users are needed.
This is also needed as part of the ongoing maintenance of reports that have already been developed or have been in use for some time. Although the body of research on the interpretation of results
from the Computer Program LOVS is growing, user interpretation has not yet been
systematically investigated amongst various user groups. Thus, actually testing users'
interpretations and discussing the aspects of the reports could provide insight into whether or
