
suggestions for improvement. Furthermore, aspects that caused confusion but did not relate
directly to a specific type of knowledge or skill were listed.
5.3.3 Questionnaire
Measurement instruments and procedure. In order to measure the interpretation
ability of the respondents, a questionnaire was constructed in collaboration with the experts.
The test grid was used as the basis for constructing the questionnaire, so as to arrive at a
representative set of items for measuring interpretation ability for the selected reports. The
plausibility of the alternatives in the questionnaire was evaluated by consulting experts and by
analysing the results of the focus group meetings.
The questionnaire used in this study contains 30 items: 29 with a closed-answer format and
one with an open-answer format, in which respondents could leave remarks and suggestions.
The questionnaire contains nine items about the respondents' background
characteristics. The respondents were asked about the following: their gender, the name of
their school, their function within the school, the grade they currently teach, their years of
experience teaching in primary education, their self-assessed ability in using quantitative test
data as a measure of assessment literacy (Vanhoof et al., 2011), their experience using the
Computer Program LOVS, and the degree to which they find the information from the
reports generated by the Computer Program LOVS useful (Vanhoof et al., 2011).
The questionnaire contains twenty items that measure interpretation ability (α = .91).
Of these items, five were intended to measure knowledge and fifteen were intended to
measure understanding and interpretation. All items were related to a visual representation of
a report. In total, seven visual representations with accompanying items were presented. (Of
the pupil report and the group report, two representations each were provided: the first
measured knowledge, the second interpretation.) Respondents were presented with two to
four items per report. Most of the items (n = 12) had a multiple-response format, meaning
that respondents could select more than one answer. The remaining items (n = 8) had a
multiple-choice format, meaning that respondents could select only one answer. The number
of options per item varied from three to six. Participants were granted one point per correct
answer, which is the most reliable method for scoring multiple-response items (Eggen &
Lampe, 2011). The maximum score on the total questionnaire was 34.
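To make the scoring rule and the reported reliability concrete, here is a minimal Python sketch. It assumes that α = .91 denotes Cronbach's alpha and that a multiple-response item grants one point per correctly selected option; the helper names and the demo data are hypothetical, not part of the study.

```python
import numpy as np

def score_multiple_response(selected: set, correct: set) -> int:
    """One point per correctly selected option (assumed reading of the
    scoring rule credited to Eggen & Lampe, 2011)."""
    return len(selected & correct)

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) item-score matrix."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical demo: 100 respondents answering 20 items, random 0/1 scores.
# Independent random items have no internal consistency, so alpha lands near 0;
# the study's items were correlated enough to reach alpha = .91.
rng = np.random.default_rng(42)
demo = rng.integers(0, 2, size=(100, 20))
print(f"alpha = {cronbach_alpha(demo):.2f}")
print(score_multiple_response({"a", "c"}, {"a", "b", "c"}))  # -> 2 points
```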
Given that respondents make decisions based on these reports, it is critically important
that they interpret them correctly. Therefore, in consultation with the experts, a standard was
set: users were expected to answer at least 85% of the items correctly, which corresponds to a
score of 29 on the questionnaire.
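The correspondence between the 85% standard and the cutoff score follows from simple arithmetic on the maximum score; rounding up to the nearest attainable integer score is assumed here:

\[
0.85 \times 34 = 28.9 \;\Rightarrow\; \lceil 28.9 \rceil = 29
\]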
Respondents. For the questionnaire, two samples were drawn from the customer base
of the Computer Program LOVS. The first sample was a random sample consisting of 774
schools. The schools all received a letter requesting them to participate in the study. Schools
could send an e-mail if they wanted to participate with one or more staff members. Data were
gathered from teachers, internal support teachers, remedial teachers, and school principals. In
total, 29 schools signed up for participation in the study (a response rate of 3.7%). Given this
low response rate, the researchers decided to draw a second sample. This sample was not
