Dr. Eman Zaghloul Qasem

Assistant Professor of Educational Technology, Faculty of Education, Zulfi

Research 4

E-assessment by design: using
multiple-choice tests to good effect
David Nicol*
University of Strathclyde, UK
Over the last decade, larger student numbers, reduced resources and increasing use of new
technologies have led to the increased use of multiple-choice questions (MCQs) as a method of
assessment in higher education courses. This paper identifies some limitations associated with
MCQs from a pedagogical standpoint. It then provides an assessment framework and a set of
feedback principles that, if implemented, would support the development of learner self-regulation.
The different uses of MCQs are then mapped out in relation to this framework using
case studies of assessment practice drawn from published research. This analysis shows the
different ways in which MCQs can be used to support the development of learner self-regulation.
The framework and principles are offered as a way of helping teachers design the use of MCQs in
their courses and of evaluating their effectiveness in supporting the development of learner
autonomy. A key message from this analysis is that the power of MCQs (to enhance learning) is
not increased merely by better test construction. Power is also achieved by manipulating the
context within which these tests are used.
Introduction
Multiple-choice questions (MCQs) are being increasingly used in higher education
as a means of supplementing or even replacing current assessment practices. The
growth in this method of assessment has been driven by wider changes in the higher
education environment such as the growing numbers of students, reduced resources,
modularisation and the increased availability of computer networks. MCQs are seen
as a way of enhancing opportunities for rapid feedback to students as well as a way of
saving staff time in marking. Computer networks enable more flexibility in the
delivery of MCQs (e.g. with delivery at times and places more in tune with student
needs) and, with appropriate software, they automate and speed up marking and the
collation of test results. Compared to paper-based MCQs, the use of online
computer-assisted assessment (CAA) can significantly reduce the burden associated
with testing large student cohorts (Bull & McKenna, 2004).
*Centre for Academic Practice and Learning Enhancement, University of Strathclyde, 50 George
Street, Glasgow G1 1XP, UK. Email: [email protected]
Journal of Further and Higher Education, Vol. 31, No. 1, February 2007, pp. 53–64
ISSN 0309-877X (print)/ISSN 1469-9486 (online)/07/010053-12
© 2007 UCU
DOI: 10.1080/03098770601167922
Although multiple-choice testing is widely used in higher education, there are
recognised limitations with this method. Firstly, many researchers discourage the use
of MCQs, arguing that they promote memorisation and factual recall and do not
encourage (or test for) high-level cognitive processes (Airasian, 1994; Scouller,
1998). Some researchers, however, maintain that this depends on how the tests are
constructed and that they can be used to evaluate learning at higher cognitive levels
(Cox, 1976; Johnstone & Ambusaidi, 2000). Secondly, the feedback provided
through MCQs is usually quite limited as it is predetermined during test
construction. Hence there is little scope for personalisation of feedback based on
different student needs. Thirdly, the use of MCQs is usually driven by the need for
teacher efficiencies and the provision of rapid feedback rather than by robust
pedagogical principles aimed at encouraging effective learning. MCQs require the
selection of a correct answer from a set of alternatives, i.e. the recognition of the
answer rather than the construction of a response. In addition, students have no role
in setting the goals and standards for MCQ tests, nor are they usually in a position to
clarify the test question or its purposes while taking the test (i.e. clarify goals and
standards). It is difficult therefore to envisage how this method of testing addresses
current concerns in the assessment research that students should be given a more
active and participative role in assessment processes (Boud, 2000; Yorke, 2003) or
that assessment should develop in students the skills needed to self-regulate their
own learning (Nicol & Macfarlane-Dick, 2006; Nicol & Milligan, 2006).
This article addresses the above issues. It first provides a framework comprising a set
of principles for thinking about formative assessment and feedback that is grounded in
current research. It then maps the use of MCQs in different assessment contexts into
this framework and illustrates its value using case examples of practice drawn from the
literature. This analysis helps enrich our understanding of the ways in which MCQs
can be used to support the development of learner self-regulation. It is argued that a
pedagogical or assessment framework is necessary if teachers are to design effective
uses for MCQs in their courses or if they wish to evaluate their effectiveness. An
assessment framework not only helps teachers analyse the effective uses of MCQs but
it also helps them move beyond the narrow conception that MCQs are either good or
bad. The case studies illustrate that what is important is not just the content and
format of MCQ tests but the wider context within which they are used.
Assessment for learning: framework and principles
In 2006, Nicol and Macfarlane-Dick analysed a large body of research in the area of
formative assessment and feedback in order to identify how these processes could
help enhance the development of self-direction and a reflective approach in learners.
From this analysis they were able to identify seven principles of good feedback
practice that, if implemented, would support the development of learner self-regulation.
Each principle is defined in detail in Nicol and Macfarlane-Dick (2006)
alongside the supporting research and recommendations for practice. Figure 1
briefly presents the seven feedback principles:

[Figure 1. Seven principles of good feedback practice]
The work of Nicol and Macfarlane-Dick is consistent with that of other researchers
who have emphasised the need to develop autonomy in learning (Boud, 2000) and
to involve students as active participants in assessment processes (Brew, 1999). The
seven feedback principles are not new: their value is that each principle is supported
by a substantial body of research, that they are all defined in relation to their
contribution to the development of learner self-regulation, and that taken together
they provide a clear lens through which to design and evaluate practice. It should be
noted here that feedback is defined broadly and encompasses informal and formal
processes including the learner generating their own feedback (e.g. through self-assessment)
and peer processes.
There is little space here to discuss each principle in detail but a few key findings are
important. Firstly, Principle 1 underpins all the others. In order to self-regulate their
own learning, students must have a reasonable understanding of what is required in
assessment tasks (i.e. their understanding must overlap with their teacher’s).
Yet there is considerable research linking poor performance by students to a failure to
grasp assessment requirements (Higgins et al., 2001; Rust et al., 2003). Secondly, the
principles emphasise the power of dialogue in learning; self-regulation is facilitated
when learning involves the active construction of knowledge through group
interaction, peer feedback and discussion (Brew, 1999; Boud, 2000). Thirdly, self-regulation
requires motivation and a belief that effort will produce results. Research
shows that motivation is neither fixed nor completely determined by the environment
and that students construct their motivation based on their appraisal of the learning
and assessment context (Paris & Turner, 1994). However, teachers can influence this
appraisal through targeted interventions such as providing many low-stakes feedback
opportunities, fostering learning communities, focusing students on learning
goals rather than marks and linking formative tasks to summative assessments
(Nicol & Macfarlane-Dick, 2006). MCQs are not normally associated with research
findings of this kind nor with the seven feedback principles. However, the following
analysis attempts to show the value of making such an association.
Overview of application of seven principles in relation to MCQs
Figure 2 summarises the ways in which multiple-choice tests can be used to support
learner self-regulation based on the seven feedback principles. The case studies
which follow provide worked examples of application drawn from the research
literature. In the case studies, the feedback principles are identified within actual
learning designs. The first two case examples highlight the operation of one or two
principles. However, considerable power is gained when a number of feedback
principles are combined within the same learning design. Case studies 3 and 4 show
ways in which this combination can be achieved.

[Figure 2. Mapping the use of MCQs to the seven principles of good feedback practice]
Case study 1: fundamentals of human physiology
A typical use of MCQs is in first-year courses with large numbers of students. Bull
and Danson (2004) describe a ‘fundamentals of human physiology’ module
intended to prepare students for their second year of study. Before the introduction
of MCQs there were three coursework assignments. Although many students passed
these three assignments, many still failed the examination. Part of the problem was
that the feedback was coming too late, halfway through the module. MCQs were
introduced as a replacement for one of the assignments and comprised a series of five
computer-delivered multiple-choice tests staged through the duration of the module.
Each test was related to the teaching material for the previous two weeks of lecturing
and the students received feedback on their answers after each test. More
importantly, the lecturer examined which questions students had performed poorly
on and used this information to provide extra feedback support in those specific
areas at a subsequent seminar. Bull and Danson (2004) report that ‘through the
feedback students gained a clear idea of how they were progressing with the course
and were motivated to follow up some of the feedback suggestions regarding further
reading and research’ (p. 10).
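The ‘just in time’ element here (identifying the questions a cohort answered poorly and feeding that into the next seminar) reduces to a simple aggregation over the test results. The sketch below is illustrative only and is not drawn from Bull and Danson’s study: it assumes results are available as (student, question, correct) records, a hypothetical format, and flags questions whose facility (proportion answered correctly) falls below a chosen threshold.

```python
from collections import defaultdict

def weak_questions(responses, threshold=0.5):
    """Return question ids whose proportion of correct answers falls below
    `threshold` -- i.e. the topics to revisit in the follow-up seminar.

    `responses` is an iterable of (student_id, question_id, is_correct)
    tuples; this record format is an assumption, not Bull and Danson's.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for _student, question, is_correct in responses:
        total[question] += 1
        correct[question] += int(is_correct)
    return sorted(q for q in total if correct[q] / total[q] < threshold)

# Example: 'Q3' was answered correctly by only one of three students.
results = [("s1", "Q3", False), ("s2", "Q3", False), ("s3", "Q3", True),
           ("s1", "Q1", True), ("s2", "Q1", True), ("s3", "Q1", True)]
print(weak_questions(results))  # ['Q3']
```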
Commentary on Case 1
In this case example, the teacher uses MCQs to achieve a variety of different
objectives. Traditional uses of MCQs implemented here are to enable students to
self-test their understanding (Principle 2, self-assessment) and to provide immediate
feedback on their answers (Principle 3, feedback). The staging of the tests also keeps
students engaged in productive activities during the timeline of the module
(Principle 5, motivation). However, this example also shows the way that the power
of multiple-choice tests for learning can be enhanced.
Extra power in this example is achieved by integrating MCQs with other learning
activities, thereby activating additional feedback principles. The lecturer uses the
results of the students’ performance on the tests to frame the seminar discussion
(Principle 7, feedback shapes the teaching) and to provide extra dialogical feedback in
the seminars (Principle 4, dialogue). This is an example of what Novak et al. (1999)
call ‘just in time teaching’.
Case study 2: medicine
In this example, Gardner-Medwin (2006) uses online MCQs during the first two
years of a medical degree at University College London. However, he has introduced
a critical modification called ‘confidence-based marking’ (CBM). In CBM students
not only select the answer but they also rate their confidence on a three-point scale
(C = 1, 2 or 3). Both these components determine the mark, as shown in Table 1.
When the answer is correct the mark depends on the confidence level (M = 1, 2 or 3).
If the answer is wrong, then the higher the confidence level the higher the penalty (−2
at C = 2 and −6 at C = 3). This procedure encourages students to think deeply about
their own knowledge and about whether they have a reliable reason for choosing the
answer. In effect, students must be able to justify their answer (internally) before it is
sensible to risk a penalty for high confidence.

Table 1. Scoring regime for certainty-based marking

Degree of certainty    Low    Medium    High    No reply
Mark if correct         1      2         3       0
Penalty if wrong        0     −2        −6       0
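The incentive structure of this scheme can be made explicit. If a student judges their probability of being correct to be p, the expected marks under Table 1 are p at C = 1, 4p − 2 at C = 2 and 9p − 6 at C = 3, so high confidence only pays off when p > 0.8 and medium confidence when p > 2/3. Honest confidence reporting is therefore the mark-maximising strategy, which is what prompts the internal justification described above. The following is a minimal sketch of the scoring rule and this break-even analysis; the function names are illustrative and not taken from Gardner-Medwin’s implementation.

```python
# Marks from Table 1: (mark if correct, penalty if wrong), keyed by confidence C.
CBM_SCORES = {1: (1, 0), 2: (2, -2), 3: (3, -6)}

def cbm_mark(confidence, correct):
    """Mark awarded for one answer under confidence-based marking (Table 1)."""
    reward, penalty = CBM_SCORES[confidence]
    return reward if correct else penalty

def expected_mark(confidence, p):
    """Expected mark when the student's true probability of being correct is p."""
    reward, penalty = CBM_SCORES[confidence]
    return p * reward + (1 - p) * penalty

def best_confidence(p):
    """Confidence level that maximises the expected mark.
    Break-even points: p = 2/3 (C=1 vs C=2) and p = 0.8 (C=2 vs C=3)."""
    return max(CBM_SCORES, key=lambda c: expected_mark(c, p))

print(best_confidence(0.6), best_confidence(0.75), best_confidence(0.9))  # 1 2 3
```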
Commentary on Case 2
Gardner-Medwin (2006) relates CBM to the second and the fifth principles of good
feedback practice. Firstly, by having to rate their confidence students are forced to
reflect on the soundness of their answer and assess their own reasoning (Principle 2,
reflection/self-assessment). Secondly, regular use of this procedure both formatively
and in the final examinations increases students’ confidence in their knowledge
(Principle 5, motivation) and encourages regular practice of these tests online.
Importantly, CBM does not require that the teacher actually collect or analyse the
reasons underlying students’ answers. It is therefore surprising that it is not more
widely used.
Case study 3: interactive mechanics
The next case example is still about the use of MCQs but this time their application
is supported through two technologies—electronic voting systems (EVS) and the
assessment tools in a virtual learning environment (WebCT).
Eight years ago, at the University of Strathclyde, staff in the Department of
Mechanical Engineering made a radical change in their teaching methods for first-year
students (see Boyle & Nicol, 2003; Nicol & Boyle, 2003). The standard lecture/
tutorial/laboratory format was replaced by a series of two-hour active learning
sessions involving short mini-presentations, videos, demonstrations and problem-solving,
all held together by MCQ tests linked to peer instruction. Peer instruction is
a form of ‘teaching by questioning’ pioneered by Mazur at Harvard (1997) using
electronic voting technologies.
A typical peer instruction class in interactive mechanics begins with the teacher
giving a short explanation of a concept or providing a video demonstration of the
concept (e.g. force in mechanics). This is followed by a multiple-choice question
test. Students respond to the MCQs using handsets (similar to a TV remote) that
send signals (radio frequency or infrared) to receivers linked to a computer. Software
collates responses and presents a bar chart to the class showing the distribution
across the alternatives. In peer instruction, if a large percentage of the class have
incorrect responses, the teacher instructs the class to: ‘convince your neighbours that
you have the right answer’. This request results in students engaging in peer
discussion about the thinking and reasoning behind their answers. After the
discussion the teacher normally retests the students’ understanding of the same
concept. Another strategy is for the teacher to facilitate ‘class-wide discussion’ on the
topic by asking students to explain the thinking behind their answers. The EVS
sequence usually ends with the teacher clarifying the correct answer. There are many
other ways of using EVS to facilitate interaction and collaboration and this
technology has been used across a range of disciplines.
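The classroom loop just described (collate handset responses, display the distribution, then either trigger peer discussion or move on) amounts to a simple tally plus a decision rule. The sketch below is a hypothetical illustration rather than the Strathclyde software: it counts one vote per handset and applies an assumed rule that peer discussion is triggered when the share of correct responses falls below a threshold.

```python
from collections import Counter

def collate_votes(votes):
    """Tally EVS responses; `votes` maps handset id -> chosen option ('A'..'E')."""
    return Counter(votes.values())

def next_step(votes, correct, threshold=0.7):
    """Assumed decision rule: if fewer than `threshold` of the class chose the
    correct option, ask students to convince their neighbours, then retest."""
    tally = collate_votes(votes)
    share_correct = tally[correct] / max(len(votes), 1)
    if share_correct < threshold:
        return "peer discussion, then retest the same concept"
    return "clarify the answer and move to the next concept"

votes = {"h01": "B", "h02": "C", "h03": "B", "h04": "A", "h05": "B"}
print(collate_votes(votes))           # Counter({'B': 3, 'C': 1, 'A': 1})
print(next_step(votes, correct="C"))  # peer discussion, then retest the same concept
```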
More recent developments involve the integration of online MCQs with the
classroom use of EVS. Students are presented with online MCQs before the EVS
session. The teacher uses the results of these online tests to ascertain areas of
misunderstanding and to determine the focus for the EVS sessions. As with Case 1,
‘just-in-time teaching’ (Novak et al., 1999) helps target teaching to students’ needs.
A second innovation is the use of confidence-based marking (CBM) during EVS
sessions. This uses MCQs but students must rate their confidence (certainty) in
their answer. This is being piloted as formative assessment using the marking rules in
Table 1 with the intention of using this as a final assessment method at a later time.
CBM requires that students engage in some meta-cognitive thinking, i.e. it requires
them to step back and reflect on whether there is good justification for their answer
(Gardner-Medwin, 2006).
Commentary on Case study 3
The use of multiple-choice tests in and out of class in interactive mechanics is a
powerful example of an integrated implementation of the seven principles of good
feedback.
(1) Learning goals are clarified through iterative cycles of tutor presentation and
the testing and retesting of concepts using MCQs in class (Principle 1).
(2) Opportunities for self-assessment and reflection are available when the teacher
provides the concept answer at the end of the EVS test sequence. Students also
reflect on their answer during confidence-based marking. Reflection is also
possible after the bar chart presentation of class response (Principle 2).
(3) Teachers normally provide feedback during class in response to students’
questions and at the end of each concept test-discussion sequence to clear up
any misunderstandings (Principle 3).
(4) Peer dialogue is integral to both peer instruction and class-wide discussion.
Specific tutor–student dialogue occurs during class-wide discussion (Principle
4).
(5) The EVS class focuses on learning goals rather than performance goals (i.e.
grading) and there is a step-by-step progression in the difficulty of the concept
questions. Both processes are known to enhance motivation (Principle 5).
(6) The continuous cycle of tests, retests and feedback ensures that students have
opportunities to ‘experience’ a closing of the gap between desired and actual
performance (Principle 6).
(7) A great deal of information is available to the teacher about areas of student
difficulty that is used to shape in-class teaching. The bar chart gives the teacher
instant feedback on difficulties and asking students to explain answers
during class-wide discussion also uncovers conceptual misconceptions. The
information provided through the web-based MCQs also informs in-class
teaching (Principle 7).
Extensive evaluations have been carried out in engineering mechanics showing
significant learning gains (Boyle & Nicol, 2003; Nicol & Boyle, 2003). Overall the
changes have been a huge success both in terms of student end-of-year performance
in exams and in terms of retention. There has been a reduction from 20% non-completion
to 3%, the largest gain in any course within the university. Also, since the
introduction of concept tests with electronic voting, attendance at class remains high
throughout the year (unlike similar lecture-based classes). Further evaluations of
confidence-based marking are now being carried out. While there is a great deal of
research on the benefits of using EVS to support learning (see Banks, 2006), this
is the first analysis from a formative feedback perspective. This analysis provides new
insights into how the different component processes (self, peer and tutor feedback)
interact and reinforce each other in a single setting.
Case study 4: organisational behaviour
A key issue in the literature on formative assessment is how to move students from
being dependent on teacher feedback to being able to generate their own feedback
on learning. While the case examples above begin to address this issue by engaging
students in reflective activities and in peer dialogue, there are still some limitations
with these methods. One issue concerns the balance of learner self-regulation and
teacher direction. In the first three case examples, the teacher is still primarily in
control of the students’ learning. It is the teacher who sets the MCQ tests and the
students’ role is merely to respond by selecting an answer: they don’t actively
construct answers. Hence these approaches do not address current concerns that in
order to develop the self-regulatory skills required for lifelong learning, students
must actively participate in the construction of assessment criteria. Indeed in
professional practice, experts both create the criteria that apply to their work and
assess their performance against these criteria (Rust et al., 2005). Higher education
should help develop this capability.
One way of addressing the above issue is to have students construct MCQs rather
than respond to those created by others. This was the approach taken by Fellenz
(2004). He actively engaged students in generating assessment criteria and example
questions within a course on organisational behaviour. Fellenz already had
experience of using MCQs to assess content learning but he was seeking ways of
using MCQs to support higher level and meta-learning. He also argued that
traditional MCQs gave primacy to the instructor perspective and did not reflect
partnership-based and learner-centred education philosophies.
Fellenz (2004) developed what he called the ‘multiple-choice item development
assignment’ (MCIDA). Students were briefed on MCQ construction and in tutorials
they had opportunities to discuss, question and critique MCQs and to learn how to
classify them in relation to Bloom’s (1956) taxonomy of educational objectives.
(MCQ developers often use Bloom’s taxonomy to categorise MCQ items as testing
for knowledge, comprehension, application, analysis, synthesis and evaluation. These
six categories form a hierarchy with knowledge being the lowest level and evaluation
the highest.) After this induction, students were required to create three sets of
MCQs in pairs over the timeline of the course and in relation to the course content.
Specifically, they had to produce the question stem for each multiple-choice
question and one correct and three incorrect answers including the written feedback
comments for all four possible responses. After submission students received peer
feedback on their MCQ items from other students on the quality of design and the
accuracy of, and the justifications for, the feedback answers. Twenty percent of the
course grade was determined by the MCIDA. Over half of the submitted MCQ
items were later used in the end-of-term exam.
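The deliverable in the MCIDA has a clear structure: a stem, one correct and three incorrect options, and a written feedback justification for each of the four responses. Below is a minimal sketch of such an item as a data type, together with the completeness checks a peer reviewer might apply; the field names and the example content are illustrative assumptions, not taken from Fellenz’s assignment brief.

```python
from dataclasses import dataclass

@dataclass
class MCQItem:
    """A student-authored item in the spirit of the MCIDA (hypothetical fields)."""
    stem: str
    correct: str
    distractors: list               # exactly three incorrect options
    feedback: dict                  # option text -> written justification
    bloom_level: str = "knowledge"  # self-classification against Bloom's taxonomy

    def check(self):
        """Return the problems a peer reviewer would flag."""
        problems = []
        if len(self.distractors) != 3:
            problems.append("item needs exactly three incorrect options")
        options = [self.correct] + self.distractors
        missing = [o for o in options if not self.feedback.get(o, "").strip()]
        if missing:
            problems.append("missing feedback justification for: %s" % missing)
        return problems

item = MCQItem(
    stem="According to Herzberg, which of the following acts as a motivator?",
    correct="Recognition for achievement",
    distractors=["Salary level", "Company policy", "Working conditions"],
    feedback={"Recognition for achievement": "A motivator in the two-factor theory."},
)
print(item.check())  # flags the three distractors still lacking justifications
```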
Commentary on Case 4
The following is an analysis of this course in relation to the seven feedback
principles:
(1) Students create the MCQs by themselves, hence they must actively formulate
the question in relation to the subject content and determine the assessment
criteria. (This is a powerful implementation of Principle 1.)
(2) Students construct answers for correct and incorrect responses in relation to
the multiple-choice questions. They also evaluate their MCQs against the
Bloom taxonomy (Principle 2).
(3) The tutor monitors the construction process and provides general feedback
(Principle 3).
(4) Peer dialogue and feedback are provided during MCQ creation in pairs and
through tutorial meetings where items are discussed (Principle 4).
(5) The MCQs are used in the final examination and the MCQ construction
process encourages peer sharing and engagement. Both processes enhance
motivation and self-belief (Principle 5).
(6) The development of the items is cyclical with early feedback being used to
improve performance on the later items (Principle 6).
(7) Teaching could be shaped by the developing MCQ outputs, although Fellenz
does not mention this in his paper (Principle 7).
Fellenz (2004) has evaluated his use of the MCIDA through class discussion and
through end-of-course questionnaires. Students report that the MCIDA helps
develop a deep understanding of the course material and encourages collaborative
learning. Fellenz found that the quality of the submitted MCQs improved over time
and that asking the students for justifications for why answer options are correct or
incorrect resulted in a very powerful learning experience. Students had to evaluate
the course content, construct questions and provide compelling arguments in the
feedback justifications. This required that they made ‘explicit their understanding of
the complexities of the subject matter’ (p. 711). Fellenz also reported that his
procedure ‘increases student ownership of the assessment procedures used and
motivates students to participate’ (p. 706). Fellenz did not use technology to support
his MCIDA process but it is easy to envisage how an online assessment tool might be
used to support the sharing of MCQs and the peer feedback processes he describes.
Indeed, a recent example of students constructing and sharing MCQ tests using the
Blackboard virtual environment has just been published by Arthur (2006).
Discussion
Fellenz paid significant attention to ensuring that the MCQs produced by his
students were of a very high quality. This required considerable work from the
teacher in preparing students to create these tests and in assessing them against a
range of criteria. However, Fellenz did not report from which year of study his
students were drawn. In our own work with a first-year cohort we have taken the
view that students don’t need to produce extremely high-quality tests as this is
something that even teachers find difficult. Our focus is not the output but the
learning process. Hence, what is important is that the students engage in test
construction and make a reasonable attempt. If the teacher has the skill, she/he can
select from those produced by students, and/or build on them for the final
examination. This would still provide some opportunities to create a databank of
reusable MCQ resources that could be used with other student cohorts, which is one
of the advantages of the MCIDA procedure.
A key point of note from the case studies described above is that it is the learning
and assessment design that is the driver for change rather than the technology. In
Case study 3, classroom interaction of the kind described would not have been
possible without EVS, yet it is the increased opportunities for self, peer and tutor
feedback that actually produce the learning gains. Similarly, Case study 4 began
with a powerful assessment design based on learner self-regulation. In our own work
the application of technology has been used to enhance self-regulation through
increased opportunities for resource sharing but within a similar assessment design
to that of Case study 4 (www.reap.ac.uk). Students share MCQ tests during their
construction, comment on them and give each other online feedback. In addition,
the availability of these tests online makes it easier for students to access them when
they are revising for their final examination. In both these case examples, although
the driver is the assessment design, the technology does afford significant
enhancements.
In the assessment literature, considerable attention has been directed at the
limitations of MCQs in testing for higher-order cognitive abilities and at how one
might remedy this situation (Airasian, 1994). However, much less attention has been
given to the wider learning context in which MCQs are used and their underpinning
pedagogy. This article has shown that increased power can be leveraged from MCQs
when they are linked to a clear pedagogical goal (in this case, the development of
learner self-regulation) and implemented in relation to a coherent set of principles
(the seven principles of good feedback practice). While the writer of this article
believes that self-regulation encapsulates current thinking regarding the purpose of
assessment practices in higher education, other pedagogical frameworks might be
applied. For example, other researchers might be interested in how MCQs might be
used to support ‘social learning’ and they might apply a framework based on social
constructivist pedagogy. However, what is meant by effective social learning would
still have to be unpacked and defined, rather as the seven principles have been
defined, if this construct were to guide MCQ use. Finally, while the framework in
this article has been applied specifically to MCQs, the arguments made are
generalisable to other kinds of objective tests, and even to other methods of
assessment (see Nicol, 2006).
References
Airasian, P. W. (1994) Classroom assessment (2nd edn) (New York, McGraw-Hill).
Arthur, N. (2006) Using student-generated assessment items to enhance teamwork, feedback and
the learning process, Synergy: Supporting the Scholarship of Teaching and Learning at the
University of Sydney, 24, 21–23.
Banks, D. A. (2006) Audience response systems in higher education: applications and cases (London,
Information Science Publishing).
Bloom, B. S. (1956) Taxonomy of educational objectives: the classification of educational goals
(London, Longmans).
Boud, D. (2000) Sustainable assessment: rethinking assessment for the learning society, Studies in
Continuing Education, 22(2), 151–167.
Boyle, J. T. & Nicol, D. J. (2003) Using classroom communication systems to support interaction
and discussion in large class settings, Association for Learning Technology Journal, 11(3),
43–57.
Brew, A. (1999) Self and peer assessment in context, in: S. Brown & A. Glasner (Eds) Assessment
matters in higher education: choosing and using diverse assessment (Buckingham, Open
University Press/SRHE), 159–171.
Bull, J. & Danson, M. (2004) Computer assisted assessment (CAA) (York, Learning and Teaching
Support Network).
Bull, J. & McKenna, C. (2004) Blueprint for computer-assisted assessment (London,
RoutledgeFalmer).
Cox, K. R. (1976) How did you guess? Or what do multiple choice questions measure? Medical
Journal of Australia, 1, 884–886.
Fellenz, M. (2004) Using assessment to support higher level learning: the multiple choice item
development assignment, Assessment and Evaluation in Higher Education, 29(6), 703–719.
Gardner-Medwin, A. R. (2006) Confidence-based marking: towards deeper learning and better
exams, in: C. Bryan & K. Clegg (Eds) Innovative assessment in higher education (London,
Taylor & Francis).
Higgins, R., Hartley, P. & Skelton, A. (2001) Getting the message across: the problem of
communicating assessment feedback, Teaching in Higher Education, 6(2), 269–274.
Honey, M. & Marshall, D. (2003) The impact of on-line multi-choice questions on undergraduate
student nurses’ learning, paper presented at ASCILITE 2003: 20th Annual Conference -
Interact, Integrate, Impact, Adelaide, 7–10 December.
Johnstone, A. H. & Ambusaidi, A. (2000) Fixed response: what are we testing? Chemistry
Education: Research and Practice in Europe, 1(3), 323–328.
Mazur, E. (1997) Peer instruction: a user’s manual (Upper Saddle River, NJ, Prentice Hall).
Nicol, D. (2006) Increasing success in first year courses: assessment re-design, self-regulation
and learning technologies, paper presented at the ASCILITE Conference, Sydney, 3–6
December.
Nicol, D. J. & Boyle, J. T. (2003) Peer instruction versus class-wide discussion in large classes: a
comparison of two interaction methods in the wired classroom, Studies in Higher Education,
28(4), 457–473.
Nicol, D. J. & Macfarlane-Dick, D. (2006) Formative assessment and self-regulated learning: a
model and seven principles of good feedback practice, Studies in Higher Education, 31(2),
198–218.
Nicol, D. J. & Milligan, C. (2006) Rethinking technology-supported assessment in terms of the
seven principles of good feedback practice, in: C. Bryan & K. Clegg (Eds) Innovative
assessment in higher education (London, Taylor & Francis).
Novak, G. M., Patterson, E. T., Gavrin, A. D. & Christian, W. (1999) Just-in-time-teaching:
blending active learning with web technology (Upper Saddle River, NJ, Prentice Hall).
Paris, S. G. & Turner, J. C. (1994) Situated motivation, in: P. R. Pintrich, D. R. Brown
& C. E. Weinstein (Eds) Student motivation, cognition and learning (Hillsdale, NJ, Lawrence
Erlbaum).
Rust, C., Price, M. & O’Donovan, B. (2003) Improving students’ learning by developing their
understanding of assessment criteria and processes, Assessment and Evaluation in Higher
Education, 28(2), 147–164.
Scouller, K. (1998) The influence of assessment method on students’ learning approaches:
multiple choice question examination versus assignment essay, Higher Education, 35,
453–472.
Yorke, M. (2003) Formative assessment in higher education: moves towards theory and the
enhancement of pedagogic practice, Higher Education, 45(4), 477–501.
Zakrzewski, S. & Bull, J. (1999) The mass implementation and evaluation of computer-based
assessments, Assessment and Evaluation in Higher Education, 23(2), 141–152.
