
An important lesson to be learnt is that although the reports from the computer
program LOVS have been in use for a couple of years, many users struggle to interpret
them. The authors follow Zenisky and Hambleton (2012) in advising that test score
reporting should receive considerable attention from test developers even after the initial
development stage. Thus, test developers should monitor whether the test results are being
used as intended.
It seems worthwhile to examine whether redesigned score reports would be interpreted
more accurately. Although the researchers acknowledge that contextual factors (e.g.,
assessment literacy, time, pressure, and support) also affect the extent to which the reports are
interpreted correctly, the test developer remains primarily responsible for ensuring validity by way
of clear score reports (Hattie, 2009; Ryan, 2006; Zenisky & Hambleton, 2012).
Chapter 5
References
American Educational Research Association, American Psychological Association, &
National Council on Measurement in Education (1999). Standards for educational
and psychological testing. Washington, DC: AERA.
Bennett, R. E. (2011). Formative assessment: A critical review. Assessment in Education:
Principles, Policy & Practice, 18, 5‒25. doi:10.1080/0969594X.2010.513678
Bosker, R. J., Branderhorst, E. M., & Visscher, A. J. (2007). Improving the utilization of
management information systems in secondary schools. School Effectiveness and
School Improvement: An International Journal of Research, Policy and Practice,
18, 451‒467. doi:10.1080/09243450701712577
Creswell, J. W., & Plano Clark, V. L. (2007). Designing and conducting mixed methods
research. Thousand Oaks, CA: Sage.
