4.1 Introduction
The importance of assessment in the learning process is widely acknowledged,
especially with the growing popularity of the assessment for learning approach (Assessment
Reform Group [ARG], 1999; Stobart, 2008). The role of assessment in the learning process is
crucial. "It is only through assessment that we can find out whether a particular sequence of
instructional activities has resulted in the intended learning outcomes" (Wiliam, 2011, p. 3).
Many researchers currently claim that formative assessment can have a positive effect on the
learning outcomes of students. However, these claims are not well grounded, an issue
that has recently been addressed in detail by Bennett (2011), who argued that "the magnitude
of commonly made quantitative claims for effectiveness is suspect, deriving from untraceable,
flawed, dated, or unpublished resources" (p. 5). For example, the source that is most widely
cited with regard to the effects of formative assessment is Black and Wiliam's (1998a, 1998b,
1998c) collection of papers. Effect sizes between 0.4 and 0.7 were often cited from these
studies, which suggests that formative assessment had large positive effects on student
achievement. Bennett argued, however, that the studies included in that meta-analysis are too
diverse to be combined into a meaningful overall result. Consequently, an overall effect size for
formative assessment is not very informative. Moreover, the meta-analysis itself has never
been published and therefore could not be criticized. Bennett (2011) called the effect sizes in
Black's and Wiliam's studies (1998a, 1998b) "a mischaracterization that has essentially
become the educational equivalent of urban legend" (p. 12). Additionally, Bennett argued that
other meta-analyses on formative assessment (e.g., Bloom, 1984; Nyquist, 2003; Rodriguez,
2004) have limitations and do not provide strong evidence. The most recently published
meta-analysis on formative assessment (Kingston & Nash, 2011) has also already been
criticized on methodological grounds (Briggs, Ruiz-Primo, Furtak, Shepard, & Yin, 2012).
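For orientation, the effect sizes discussed in such meta-analyses are typically standardized mean differences. One common formulation (Cohen's d with a pooled standard deviation), given here purely for illustration and not as the estimator used in the studies cited above, is

\[
d = \frac{\bar{X}_{T} - \bar{X}_{C}}{SD_{pooled}},
\qquad
SD_{pooled} = \sqrt{\frac{(n_{T}-1)\,s_{T}^{2} + (n_{C}-1)\,s_{C}^{2}}{n_{T}+n_{C}-2}},
\]

where \(\bar{X}_{T}\) and \(\bar{X}_{C}\) are the mean outcomes of the treatment (e.g., formative assessment) and control groups, \(s_{T}\) and \(s_{C}\) their standard deviations, and \(n_{T}\) and \(n_{C}\) the group sizes.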
For meta-analyses to produce meaningful results, they have to focus on a specific topic
in order to include studies that are sufficiently comparable. A key element of the assessment
for learning approach is the feedback provided to students (ARG, 1999; Stobart, 2008).
Various meta-analyses and systematic review studies have focused on the effects of feedback
on learning outcomes (e.g., Bangert-Drowns, Kulik, Kulik, & Morgan, 1991; Kluger &
DeNisi, 1996; Shute, 2008). The outcomes of these studies have not been univocal and have
sometimes even been contradictory. Many researchers have noted that the literature on the effects
of feedback on learning provides conflicting results (e.g., Kluger & DeNisi, 1996; Shute,
2008). Given the current state of research, there is a need for an updated meta-analysis focusing
on specific aspects of formative assessment. The present meta-analysis concentrated on the
effects of feedback provided to students in a computer-based environment.
4.1.1 Methods for Providing Feedback
In order to compare the effects of various methods of providing feedback, a clear
classification of these methods is needed. In the current study, feedback was classified based
on types (Shute, 2008), levels (Hattie & Timperley, 2007), and timing (Shute, 2008) as
proposed by Van der Kleij, Timmers, and Eggen (2011). The focus of this study was item-based
feedback. This feedback relates specifically to a student's response to an item on a test.
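As a purely illustrative sketch, not code taken from the studies cited, the three classification dimensions could be coded along the lines below. The category labels are common examples drawn from Shute (2008) for types and timing and from Hattie and Timperley (2007) for levels; the class and variable names are hypothetical.

```python
from dataclasses import dataclass

# Illustrative coding scheme for item-based feedback along three dimensions:
# type (Shute, 2008), level (Hattie & Timperley, 2007), and timing (Shute, 2008).
# The labels are example categories; coding schemes in the literature vary.
FEEDBACK_TYPES = {"KR", "KCR", "EF"}   # knowledge of results, knowledge of correct response, elaborated feedback
FEEDBACK_LEVELS = {"task", "process", "self-regulation", "self"}
FEEDBACK_TIMING = {"immediate", "delayed"}

@dataclass
class ItemFeedback:
    """One feedback message attached to a student's response to a single test item."""
    item_id: str
    feedback_type: str   # one of FEEDBACK_TYPES
    level: str           # one of FEEDBACK_LEVELS
    timing: str          # one of FEEDBACK_TIMING

    def __post_init__(self):
        # Reject codes outside the assumed category sets.
        if self.feedback_type not in FEEDBACK_TYPES:
            raise ValueError(f"unknown feedback type: {self.feedback_type}")
        if self.level not in FEEDBACK_LEVELS:
            raise ValueError(f"unknown feedback level: {self.level}")
        if self.timing not in FEEDBACK_TIMING:
            raise ValueError(f"unknown feedback timing: {self.timing}")

# Example: elaborated, task-level feedback delivered immediately after the response.
example = ItemFeedback(item_id="item_07", feedback_type="EF", level="task", timing="immediate")
```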
