4.4 Discussion
The purpose of this meta-analysis was to gain insight into the effects of various
methods for providing feedback to students in a computer-based environment in terms of
students' learning outcomes. The major independent variable in this meta-analysis was
feedback type (Shute, 2008). Furthermore, the effects of various moderator variables that
seemed relevant given the literature on feedback effects, such as timing (Shute, 2008) and
level of learning outcomes (Van der Kleij et al., 2011), were investigated.
The 70 effect sizes in this meta-analysis were derived from 40 studies and expressed
the difference in the effects of one feedback type compared to another feedback type or no
feedback at all in terms of a test score. The effect sizes ranged from -0.78 to 2.29. Because of
the heterogeneous nature of the collection of effect sizes, a mixed model was used in the
analyses. The majority of the effect sizes (k = 53) concerned the effects of EF in contrast to KCR, KR, or no feedback. The results suggested that EF was more effective than KR and KCR. The mean weighted effect size for EF was 0.49, which can be considered a
moderately large effect. The mean weighted effect size for KCR (k = 9) was 0.32, which is
considered small to moderate. The effect size for KR (k = 8) was very small at 0.05.
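To illustrate how a mean weighted effect size of this kind can be computed under a mixed (random-effects) model, the sketch below applies inverse-variance weighting with a DerSimonian-Laird estimate of the between-study variance. The effect sizes and sampling variances in the example are hypothetical and are not taken from the studies included in this meta-analysis.

import numpy as np

def random_effects_mean(es, var):
    # Weighted mean effect size and between-study variance (tau^2)
    # under a random-effects model (DerSimonian-Laird estimator).
    es, var = np.asarray(es, float), np.asarray(var, float)
    w = 1.0 / var                              # inverse-variance (fixed-effect) weights
    fixed_mean = np.sum(w * es) / np.sum(w)
    q = np.sum(w * (es - fixed_mean) ** 2)     # Cochran's Q (heterogeneity)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(es) - 1)) / c)   # between-study variance estimate
    w_star = 1.0 / (var + tau2)                # random-effects weights
    return np.sum(w_star * es) / np.sum(w_star), tau2

# Hypothetical effect sizes and sampling variances for a handful of studies
mean_es, tau2 = random_effects_mean([0.49, 0.32, 0.05, 0.67], [0.04, 0.06, 0.05, 0.03])
print(round(mean_es, 2), round(tau2, 3))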
KR and KCR were expected to have a small to moderate positive effect (between 0.2
and 0.6) on lower-order learning outcomes (Hypothesis 1). Due to the small number of
observations, we could not meaningfully test Hypothesis 1, but the limited information
available did not contradict our expectations. In addition, we expected that KR and KCR
would have virtually no effect (below 0.2) on higher-order learning outcomes (Hypothesis 2).
The effects of KCR were slightly larger than expected: 0.38. However, we had insufficient
power to reject Hypothesis 2. EF was expected to have a moderate to large positive effect (at
least 0.4) on both lower-order learning outcomes (Hypothesis 3) and higher-order learning
outcomes (Hypothesis 4). Hypothesis 3 and Hypothesis 4 were not rejected, and the effects of
EF on higher-order learning outcomes (ES′ = 0.67) appeared to be larger than the effects on lower-order learning outcomes (ES′ = 0.37).
The effects of EF seemed promising, although the nature of EF varies widely. By categorising EF based on the level at which the feedback was aimed, we attempted to gain more insight into which method of providing EF was most effective. However, the majority
of the EF was aimed at the task and process level (k = 41), which makes it difficult to draw
any generalizable conclusions regarding the effects of the different feedback levels. The mean
weighted effect size of EF at the task and process level was 0.50. In this meta-analysis, the
effect sizes from EF at the task level (k = 4) were lowest (ES′ = -0.06). The effects of EF at the task and regulation level (k = 4, ES′ = 1.05) and at the task, process, and regulation level (k = 1, ES′ = 1.29) were highest. These effects can be regarded as very large, which suggests that
more research is warranted regarding the effects of feedback at the task and/or process level in
combination with the regulation level. The results of this meta-analysis are in line with the
results of the systematic review by Van der Kleij et al. (2011), which suggested that the
effects of EF at the regulation level are promising but have not been researched to a great
extent. Consistent with the literature on feedback effects (e.g., Hattie & Timperley, 2007),
adding feedback that is not task related but is aimed instead at the characteristics of the
learner seemed to impede the positive effects of EF (k = 2, ES′ = 0.25). It must be mentioned,
however, that the number of studies examining feedback that is aimed at the self-level in a
computer-based environment is fortunately low.
Furthermore, it was hypothesized that there would be an interaction effect between
feedback timing and the level of learning outcomes (Hypothesis 5). Hypothesis 5 was rejected
because there appeared to be no significant interaction effect between the level of learning
outcomes and feedback timing. The directionality of the effects was, however, consistent with Hypothesis 5: immediate feedback was expected to be more effective for lower-order learning outcomes, and delayed feedback more effective for higher-order learning outcomes (Shute, 2008). Possibly due to a lack of power, statistical significance was not reached. More research is needed to shed light on the
possible interaction between feedback timing and the level of learning outcomes.
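As a rough illustration of how such an interaction could be examined, the sketch below fits an inverse-variance weighted regression in which feedback timing, the level of learning outcomes, and their product term are entered as predictors. The coding scheme, effect sizes, and variances are hypothetical and are not the data analysed in this chapter.

import numpy as np

def weighted_meta_regression(es, var, X):
    # Inverse-variance weighted least squares; returns the coefficient vector.
    w = 1.0 / np.asarray(var, float)
    W = np.diag(w)
    X = np.asarray(X, float)
    y = np.asarray(es, float)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Hypothetical coding: intercept, delayed feedback (1/0), higher-order outcome (1/0),
# and their interaction; the values below are illustrative only.
X = np.array([[1, t, h, t * h] for t, h in
              [(0, 0), (0, 1), (1, 0), (1, 1), (0, 0), (1, 1)]])
es = [0.45, 0.30, 0.20, 0.55, 0.50, 0.60]
var = [0.04, 0.05, 0.06, 0.04, 0.05, 0.03]
coefs = weighted_meta_regression(es, var, X)
print(coefs[3])  # the timing-by-outcome-level interaction coefficient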
In addition, to evaluate the relationships between study attributes and effect sizes
simultaneously, a weighted regression analysis was conducted. This analysis indicated that delayed feedback and the primary and high school educational levels negatively affected the ES′ estimates. Furthermore, EF and the subject areas social sciences, science, and especially mathematics positively affected the ES′ estimates. Moreover, the effect of mathematics was strikingly high. However, this effect was based on only eight studies. Of these studies, only two did not include EF, which makes it likely that the high effects are mistakenly attributed to the
subject mathematics. Furthermore, it must be mentioned that the literature does not show any
consistent positive effects of feedback in mathematics (e.g., Bangert-Drowns et al., 1991;
