
Kingston & Nash, 2011). Therefore, the positive results of studies conducted in the field of
mathematics must be interpreted with caution.
A limitation of this meta-analysis—and of review studies in general—is the
impossibility of retrieving all relevant studies. Moreover, this meta-analysis included only
published work, with the exception of unpublished doctoral dissertations. The authors
deliberately chose to exclude other unpublished sources because there is no way of objectively
retrieving them. In addition, it was unclear whether such sources had been subjected to peer
review, which is a generally accepted criterion for ensuring scientific quality. With that
exception, no strict requirements were established with regard to the quality of the included
studies. Studies judged to be of low quality usually received that judgement because of their
limited sample size, which automatically results in a low weight relative to studies of higher
quality. Furthermore, many studies had to be excluded because they did not provide sufficient
information for computing an effect size.
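To make the weighting point concrete, consider the inverse-variance weighting that is standard in meta-analysis (a generic sketch; the exact estimator used in this study may differ). For a standardized mean difference g_i obtained from a study with group sizes n_T and n_C,

v_i \approx \frac{n_T + n_C}{n_T n_C} + \frac{g_i^2}{2(n_T + n_C)}, \qquad w_i = \frac{1}{v_i + \tau^2},

where v_i is the sampling variance of g_i and \tau^2 the between-studies variance. Because v_i grows as the group sizes shrink, a small-sample study automatically receives a low weight w_i in the pooled estimate \hat{\mu} = \sum_i w_i g_i / \sum_i w_i.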
For some studies, multiple effect sizes were coded because the studies included
multiple experimental groups. Therefore, the effect sizes within the dataset were not
completely independent. Multilevel analysis can address this nested data structure
appropriately, but in our case the number of effect sizes per study (70 effect sizes nested
within 40 studies) was too small to conduct a multilevel analysis.
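For reference, a three-level meta-analytic model (a sketch in our own notation, not a model actually fitted here) separates sampling error from within-study and between-study variation:

d_{jk} = \gamma_0 + u_k + u_{jk} + e_{jk}, \qquad u_k \sim N(0, \sigma^2_{between}), \quad u_{jk} \sim N(0, \sigma^2_{within}), \quad e_{jk} \sim N(0, v_{jk}),

where d_{jk} denotes effect size j within study k. Reliable estimation of the within-study variance \sigma^2_{within} requires a sufficient number of effect sizes per study; with 70 effect sizes spread over 40 studies (on average 1.75 per study), that level of the model is poorly identified.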
Another limitation of this meta-analysis was that it included insufficient data to
meaningfully compare feedback effects across school types. Specifically, the majority of the
studies were conducted at universities, colleges, or other adult education settings. Given the
low number of studies in secondary education (n = 6) and the even lower number of studies in
primary education (n = 2), the degree to which the conclusions of this meta-analysis apply to
young learners is questionable. Nevertheless, the results give reason to believe that feedback
mechanisms function differently across these school types. Moreover,
providing feedback in the form of text may not be appropriate for younger learners since their
reading abilities might not be sufficiently developed to fully understand the feedback and
subsequently use it. However, current technology makes it possible to provide feedback in
many ways (Narciss & Huth, 2006). For example, feedback could be delivered to students
through audio, graphical representations, video, or games. Unfortunately, only a
limited number of studies included in this meta-analysis used multimedia feedback (n = 7,
e.g., Narciss & Huth, 2006; Xu, 2009), which means that no meaningful comparison could be
made in this meta-analysis with respect to feedback mode. The results do suggest, however,
that the area of multimedia feedback is one that needs further exploration.
It is striking that in most of the studies in this meta-analysis, the researchers assumed
that the learners paid attention to the feedback provided. The results of recent research
suggest, however, that in a computer-based environment, some students tend to ignore written
feedback (e.g., Timmers & Veldkamp, 2011; Van der Kleij et al., 2012). Variables like
motivation and learners' perceived need to receive feedback play an important role in how
feedback is received and processed (Stobart, 2008). These variables therefore interact with
other variables that contribute to feedback effectiveness, such as feedback type and timing.
Nevertheless, based on currently available research, it is not possible to examine the interplay
of these variables thoroughly. In the experiments by Timmers and Veldkamp (2011) and Van der
Kleij et al. (2012), the time students chose to display the feedback for each item was logged
as an indication of the attention they paid to it.
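As an illustration of how such display times can be captured in a computer-based assessment environment, a minimal Python sketch follows; all class and method names are hypothetical and do not reflect the instrumentation actually used in those experiments.

import time

class FeedbackLogger:
    """Records how long each item's feedback stays on screen (hypothetical sketch)."""

    def __init__(self):
        self.display_times = {}  # item_id -> total seconds feedback was visible
        self._opened_at = {}     # item_id -> timestamp when feedback was opened

    def feedback_shown(self, item_id):
        # Called by the test interface when the student opens an item's feedback.
        self._opened_at[item_id] = time.monotonic()

    def feedback_hidden(self, item_id):
        # Called when the student closes the feedback; accumulate the duration.
        opened = self._opened_at.pop(item_id, None)
        if opened is not None:
            self.display_times[item_id] = (
                self.display_times.get(item_id, 0.0) + time.monotonic() - opened
            )

A display time at or near zero for an item would then suggest that the feedback was skipped rather than read.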
