Barney, 2006; Ledoux, Blok, Boogaard, & Krüger, 2009; Meijer, Ledoux, & Elshof, 2011;
Saunders, 2000; Van Petegem & Vanhoof, 2004; Williams & Coles, 2007; Zupanc, Urank, &
Bren, 2009).
Because data interpretation is necessary to alter conditions to meet pupils' needs, it
touches upon one of the basic skills that comprise assessment literacy. Hattie and Brown
(2008) noted that when assessment results are displayed graphically, the need for teachers to
have a high degree of assessment literacy is reduced because they can use their intuition to
interpret the assessment results. O'Malley, Lai, McClarty, and Way (2013) suggested that
technology could help in communicating assessment results because users can demand the
information that is relevant to them. Thus, current technology makes it possible to generate
reports that are tailored to the needs of particular users. For example, in the USA, data
dashboards have become popular tools that automatically create score reports. Regarding
interpretation and feedback of test results, the International Test Commission (2006)
formulated guidelines specifically aimed at reporting results gathered using computer-based
tests. These guidelines state that various reports should be available for different stakeholders.
An example of computer-based reporting concerns school performance feedback systems
(SPFS), which are systems developed by professional organisations that aim to provide
schools with insight into the outcomes of the education they have provided (Visscher & Coe,
2002). Pupil-monitoring systems, a kind of SPFS, have been developed primarily to monitor
the individual progress of pupils. These systems allow for the automatic generation of reports
at various levels within the school, covering different time spans and test content. These tools
reduce the demands placed upon users in terms of statistical skills because they do not have to
engage in complex statistical analyses. Nevertheless, there is little knowledge about the
degree to which users are capable of correctly interpreting the reported results of assessments,
which is a crucial precondition for DDDM.
1.4 Outline
This dissertation covers three areas: 1) item-based feedback provided to students
through a computer; 2) feedback provided through a computer to educators based on students'
assessment results; 3) comparison of three approaches to formative assessment: data-based
decision making (DBDM), assessment for learning (AfL), and diagnostic testing (DT).
1.4.1 Item-based Feedback Provided to Students through a Computer
Chapter 2 presents an experiment conducted at a higher education institute in the
Netherlands, focusing on the effects of written feedback in a computer-based assessment of
students' learning outcomes. The literature shows conflicting results regarding the effects of
different ways of providing feedback on students' learning outcomes (e.g., Kluger & DeNisi,
1996; Shute, 2008). However, for written feedback in a CBA, positive effects have generally
been reported for elaborated feedback (EF) aimed at the task and process levels or at the task
and regulation levels. The results with regard to the timing of feedback vary widely (Mory,
2004). Therefore, in the experiment presented in Chapter 2, it was decided to compare the
effects of knowledge of correct response combined with elaborated feedback (KCR + EF) to
knowledge of results (KR) only, as well as the effects of immediate and delayed feedback. It
was expected that students would benefit more from KCR + EF than from KR only with
respect to learning outcomes. Furthermore, it was investigated whether time spent
