
Score reports should be carefully designed in consideration of various relevant aspects,
such as the proposed uses of the test, the level of the reporting unit (individual student, class,
school, state, country, etc.), the desired level of specificity of the report, and the purpose of
the particular report. For example, it would be very useful to provide subscores for various
aspects of the test to an individual student so that the strengths and weaknesses of that student
could be examined. This would improve the formative potential of the report. Nevertheless,
providing such subscores in a report that provides information on the performance of an entire
class would probably overwhelm the user. Hence, it is often appropriate to offer multiple
reports that focus on a particular level and have a specific reporting purpose (e.g., reporting
performance related to a certain standard, or reporting growth). It is also possible that
overviews of correctly and incorrectly answered items could be provided. Supporting users in
interpreting pupils' assessment score reports has recently been addressed as an important
aspect of test validity because it is a precondition for the appropriate use of the test results
(Hattie, 2009; Ryan, 2006; Zenisky & Hambleton, 2012).
Feedback timing. There is a wide range of possibilities with respect to the timing of
report provision or availability. When the test has been administered through a
computer, it is often possible to generate reports immediately at the individual level.
Nevertheless, the time between taking the test and the availability of feedback on the test
results is often longer in large-scale testing programs. The length of the feedback loop should
be appropriate given the intended uses of the test results. Quick feedback is most important
when reporting on the performance of individual students, because their abilities change
continuously. When the test results are used
only for making decisions at a higher aggregation level, it is generally acceptable that the
feedback loop covers a longer time span (Wiliam et al., 2013).
1.3 Computers in Educational Assessment
Interest in the use of computers to support or administer educational assessments
has increased rapidly over the last few decades. Using computers in educational
assessment can have practical advantages, but, more importantly, it offers pedagogical
advantages. In this section, the potential of computers for formative assessment is outlined.
1.3.1 Computer-based Assessment
Computer-based assessment (CBA) or computer-based testing (CBT) is a form of
assessment where students take a test in a computer environment. Since the term CBT has
mainly been used to refer to assessment for summative purposes (e.g., Association of Test
Publishers, 2000; The International Test Commission, 2006), the term CBA will be used in
this dissertation because it has often been associated with computer-based learning tools and
assessments that aim to serve formative purposes.
CBA has practical advantages over paper-and-pencil tests, the most important of
which are higher efficiency, reduced costs, higher test security, and the possibility of applying
automatic scoring procedures (Lopez, 2009; Parshall, Spray, Kalohn, & Davey, 2002). These
advantages are particularly useful in large-scale summative assessment, which makes CBA
attractive to large educational institutions, where tests must be administered to large
groups. Hence, most CBAs have been developed for use in higher education (Peat & Franklin,
