Measurement instruments and procedure. Three groups of experienced users, each with a
minimum of three years of experience with the Computer Program LOVS, were consulted.
Moreover, the changes were evaluated in consultation with internal support teachers who were
non-users of the Computer Program LOVS. These non-users did use the LOVS tests, but used
a different computer program for transforming the test scores into score reports. Thus, they
were not familiar with the reports of the Computer Program LOVS, but possessed the
prerequisite knowledge and skills to interpret these reports. These non-users were consulted to
provide an indication regarding the expected effects of the redesigned reports amongst new or
inexperienced users.
During face-to-face group interview sessions that lasted between 30 and 45 minutes,
in-depth quantitative and qualitative data were gathered. The researcher showed the original
and revised reports (both the prototype and the already implemented report) to the
participants, along with details of the changes made to the reports. The participants were
encouraged to discuss the specific aspects of the redesigned reports with one another and with
the researcher. The respondents also individually filled out a paper questionnaire, which outlined
every change made for each report. For each design solution, they were asked to indicate
whether they thought that the particular change made it easier to interpret the report, and to
explain their opinion.
Respondents. The experienced users in the consultation stage with the key informants
were teachers (n = 2), internal support teachers (n = 8), and principals (n = 5). Three groups of
experienced users were consulted (n = 5 for each session). The first group of respondents
consisted of internal support teachers. In this session, the non-user internal support teachers
also participated (n = 15). The second group of respondents consisted of teachers (n = 2), an
internal support teacher (n = 1), and principals (n = 2) of one school board. The third group of
respondents was composed of internal support teachers (n = 2) and principals/policymakers (n
= 3).
Data analysis. The data analysis primarily focused on the data from the group of
experienced users. The data from the non-users were used to cross-validate whether the
changes would help novice users of the Computer Program LOVS.
For each change, the respondents were asked to answer the question, "Do you think
this is an improvement?" These quantitative data were coded (no = 0, yes = 1). The responses
to the individual questionnaires were typed and added to the quantitative data set.
Additionally, the transcriptions of the audio recordings were used.
First, the quantitative data were analysed. A cut-off score of .80 was used, which
means that a greater than .80 agreement rate amongst users would lead to a positive advice for
the proposed change. The qualitative data were used to determine whether any minor
refinements to the proposed change were needed. For the aspects in which the agreement rate
was below .80, it was examined whether and how the change could best be implemented,
using the qualitative data from both users and non-users. Furthermore, using the quantitative
data, it was evaluated whether there were differences between the opinions of users and
non-users for each aspect.
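To make the decision rule concrete, the following minimal sketch (in Python) shows how the binary ratings (no = 0, yes = 1) could be aggregated into an agreement rate per design change and compared against the .80 cut-off. This is illustrative only, not the study's actual analysis code; the change labels and example ratings are hypothetical.

    # Illustrative sketch of the decision rule described above; the data
    # and names are hypothetical, not taken from the study.
    AGREEMENT_CUTOFF = 0.80

    # Hypothetical binary ratings per design change (no = 0, yes = 1).
    ratings = {
        "pupil report: score intervals": [1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1],
        "growth report: axis labelling": [1, 1, 1, 1, 1],
    }

    for change, responses in ratings.items():
        agreement = sum(responses) / len(responses)
        if agreement > AGREEMENT_CUTOFF:
            advice = "positive advice (check qualitative data for minor refinements)"
        else:
            advice = "re-examine how to implement, using qualitative data"
        print(f"{change}: agreement rate = {agreement:.2f} -> {advice}")

Under this rule, a change approved by all five respondents in a session (agreement rate 1.0) would receive positive advice, whereas one approved by, say, 11 of 14 respondents (rate .79) would be flagged for re-examination against the qualitative data.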
Results. Within the group of users, the mean agreement rate across all reports ranged
from .54 to 1 (M = .89, SD = .32). In the group of non-users, the mean agreement rate ranged
from .67 to 1 (M = .91, SD = .28).
In the pupil report, the mean agreement rate was below .80 for only one aspect, i.e.,
.79 for the changes made to the score intervals. Some users found the display of the score
intervals in the graph bothersome or confusing, especially for parents. Other users expressed
that while they considered this feature useful, an option to hide the score intervals should also
be available. Some of the non-users admitted they did not see the added value of the score
intervals.
In the report on ability growth, none of the mean scores was below .80. Nevertheless,
the qualitative data provided useful advice for refining the design solutions. For example, the
labelling of one of the axes appeared confusing, and the users suggested useful alternatives.
The lowest agreement rates were found in the group report, where three of the five aspects
were rated below .80. These changes concerned the addition of "comparison all schools" and
the black frame around the level of the group, which were supposed to illustrate that the
norms for groups differ from those for individual pupils with regard to the level indicators
(see Figure 6.8). However, the users indicated that the proposed design solution did not
support correct interpretations to a sufficient degree, since the display of the group's ability is
a weighted mean, but the accompanying level indicator is not, because it comes from a
different norm table. The users proposed placing the level of the group elsewhere to avoid
confusion, or framing the information regarding it. Furthermore, some of the respondents
expressed dissatisfaction with the proposed changes in the use of colours within the level
indicators A–E and I–V (.54 mean agreement rate). The users' opinions concerning the use of
colours varied widely. Some users preferred a particular colour set because of its
non-normative character (e.g., ranging from blue [highest-scoring pupils] to brown
[lowest-scoring pupils]). However, others preferred the colours green to red because they
provided a natural interpretation and a good signalling function, consistent with Brown's (2001) research
findings.
