
6.6. Evaluation and Reflection
This section describes the evaluation and reflection phase. The researchers want to
emphasise, however, that the design and evaluation phases occurred in iterative cycles and
did not follow a linear process.
The preliminary designs of the reports were evaluated in a questionnaire, which served
as a post-test. It was expected that the redesigned reports would be easier to interpret than the
original ones; therefore, the respondents were asked to indicate their opinion on this issue.
Furthermore, the redesigned reports were evaluated in consultation with the two focus groups.
These evaluations not only served to identify how the designs could be further improved, but
also served to indicate how effective the intervention was in terms of interpretation accuracy.
Subsequently, the preliminary designs were adapted as needed, whereupon some of the easily
adaptable revisions were implemented. Finally, key informants were consulted to gather
detailed feedback on the proposed and implemented design solutions. Eventually, a final
design solution was proposed.
6.6.1 Questionnaire
Instruments and procedure. The online questionnaire used by Van der Kleij and
Eggen (2013) was adapted for the evaluation of users' interpretation of the redesigned reports.
The items for the current questionnaire were chosen based on both content and psychometric
grounds. For example, items that had very high P'-values (average proportion correct) were
excluded because they were not informative. Furthermore, images of the reports were
replaced by images of their redesigned versions (see Figure 6.8 for an example). The contents
of the items remained unchanged. However, it was necessary to change the scoring for one
item, given the adapted image of the report.
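The item-selection step above can be sketched in code. This is a minimal illustration, not the study's actual procedure: the response data and the cut-off of 0.95 for a "very high" P'-value are assumptions made for the example.

```python
# Illustrative sketch: compute item P-values (average proportion correct)
# and exclude near-ceiling items, which are uninformative.
# Data and the 0.95 threshold are hypothetical, not from the study.

responses = {
    "item_01": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],  # everyone correct -> P = 1.0
    "item_02": [1, 0, 1, 1, 0, 1, 0, 1, 1, 0],  # P = 0.6
    "item_03": [0, 1, 0, 0, 1, 1, 0, 1, 0, 1],  # P = 0.5
}

def p_value(scores):
    """Average proportion correct for a dichotomously scored item."""
    return sum(scores) / len(scores)

THRESHOLD = 0.95  # assumed cut-off for "very high" P-values

# Keep only items that remain informative.
retained = {item: p_value(s) for item, s in responses.items()
            if p_value(s) < THRESHOLD}
```

Under these assumptions, `item_01` would be dropped from the revised questionnaire while the other two items are retained.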
Two versions of a 30-item questionnaire were produced, each containing different
anchor items that referred to an image of the original report. These anchor items were needed
to compare the results between the original and redesigned reports. For measuring user ability, 13 items were
used, which had either a multiple-choice or a multiple-response format. These items were
scored as 34 separate dichotomous items. A further 10 items concerned the respondents'
background characteristics, such as their function within the school, their years of experience
with the Computer Program LOVS, and whether or not they had received training in the use
of the Computer Program. Furthermore, the questionnaires contained six items asking the
respondents to indicate whether they thought that the redesigned reports were easier to
interpret (with the original and redesigned versions displayed next to each other). For these
items, a five-point Likert scale was used, ranging from totally disagree to totally agree. In
addition, one open item allowed the respondents to leave comments.
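Scoring a multiple-response item as several dichotomous items, as described above, can be sketched as follows. This is one common way such scoring is done; the option labels and answer key are hypothetical, and the source does not specify the exact scoring rule.

```python
# Hypothetical sketch: a multiple-response item is decomposed into one
# dichotomous (0/1) score per answer option. An option scores 1 when the
# respondent's choice (selected or not) matches the key for that option.
# Option labels and the key are illustrative assumptions.

key = {"A": True, "B": False, "C": True, "D": False}  # correct selections

def score_multiple_response(selected, key):
    """Return a 0/1 score for each option of a multiple-response item."""
    return {opt: int((opt in selected) == correct)
            for opt, correct in key.items()}

# A respondent who ticks A and D: A is correctly selected (1), B is
# correctly left unselected (1), C is missed (0), D is wrongly ticked (0).
scores = score_multiple_response({"A", "D"}, key)
```

Decomposing items this way is what allows 13 questionnaire items to yield more than 13 dichotomous scores (34 in this study).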
The questionnaires could be filled out during a two-week period. Respondents could
leave their e-mail address if they wished to receive, after the questionnaire's closing date,
detailed feedback on how to interpret the reports correctly.
