
Citation
Dermo, J., Boyne, J. (2014) Assessing understanding of complex learning outcomes and real-world skills using
an authentic software tool: a study from biomedical sciences, Practitioner Research in Higher Education
Journal, 8(1), January, 101-112.
Assessing understanding of complex learning
outcomes and real-world skills using an
authentic software tool: a study from
Biomedical Sciences
Practitioner Research
In Higher Education
Copyright © 2014
University of Cumbria
Vol 8 (1) pages 101-112
John Dermo and James Boyne
University of Bradford
[email protected]
Abstract
We describe a study conducted during 2009-12 into innovative assessment practice, evaluating an
assessed coursework task on a final year Medical Genetics module for Biomedical Science
undergraduates. An authentic e-assessment coursework task was developed, integrating objectively
marked online questions with an online DNA sequence analysis tool (BLAST), routinely used by NHS
and research professionals. The aim was to combine the assessment of understanding of complex
module learning outcomes with real-world authentic skills highly valued in the work place. This
approach challenges the oft-heard accusation that online computer-marked tests can lack validity
and authenticity in higher education. The study demonstrates the content and construct validity of
this form of e-assessment, showing that careful question design, allied with integration with the real
life BLAST tool, enables instructors to assess complex higher order understanding, and requires
students to demonstrate skills relevant for the work place. A study of three years of test results and measures of internal consistency also shows the reliability of this assessment. In addition, the
results of surveys of student opinion and positive feedback from student module feedback
questionnaires suggest that it is effective in terms of face validity.
Keywords
Authentic assessment; technology enhanced assessment; assessing deeper learning.
Background and Rationale
For some time in higher education, students have been calling for innovative assessments which
focus on understanding and application of knowledge instead of memorised techniques (National
Student Forum, 2009; National Union of Students, 2009), and which require learners to engage in
appropriate learning tasks (Boud et al., 2010). This is especially true of final year students, whose
learning outcomes will tend to concentrate on higher order cognitive skills (Bloom, 1956).
However, when there are large numbers of students in a module cohort, it can be a challenge to design assessments which are reliable, valid and practical (Brown and Knight, 1994). In addition, whilst it is well known that multiple choice questions can be delivered easily and flexibly to large numbers of students, and can be marked and graded automatically, objectively and reliably, with instant results and feedback given to students (Bull and McKenna, 2004; Crisp, 2007), there is some debate in the literature as to whether these objectively marked questions are limited to testing lower order skills, and it has been argued that such question types need to be used more imaginatively to engage students in the assessment and learning process (Nicol, 2007; Nicol and Macfarlane-Dick, 2006; Gibbs and Simpson, 2004).
Historically, computer assisted assessment in the biosciences has tended to comprise paper-based
multiple choice questions assessing student knowledge and understanding at a more superficial
level. However, numerous recent innovative projects and high-profile support for e-learning from
JISC (2007; 2010) have demonstrated a wide range of benefits offered by e-assessment, many of
which are directly appropriate to this study and include: greater variety and authenticity in the
design of assessments, capture of wider skills and attributes not easily assessed by other means,
efficient submission, marking, moderation and data storage processes (Bryan and Clegg, 2006;
Jordan, 2013).
In this case study we were interested in developing a problem-based assessment that replicates the
kind of investigatory data analysis a medical geneticist might undertake when diagnosing a patient
and thus represented a ‘hands-on’ type of learning exercise (Sivan et al., 2000). Genetic disorders
are routinely diagnosed by DNA sequence analysis. Raw DNA sequence is essentially a series of
letters (A, T, C and G – the genetic ‘code’) that are generally analysed using on-line software called
Basic Local Alignment Search Tool (BLAST). The BLAST software is operated by the National Center for Biotechnology Information (NCBI), which advances science and health by providing free access to
biomedical and genomic information. The way most people use BLAST is to input a DNA sequence as
a query against all of the public sequence databases, pasting the sequence into the textbox on one
of the BLAST Web pages. This sends the query over the Internet, the search is performed on the
NCBI databases and servers, and the results are posted back to the person's browser in the chosen
display format, usually within sixty seconds of submitting the query (Madden, 2002). Experience
with this tool represents an applied skill that would directly benefit graduates who choose to enter a scientific career involving genetic analysis (a discipline that expands year-on-year across the research, diagnostic and pharmaceutical sectors), and it also provides an excellent interactive tool around which to frame a problem-based assessment that tests the students' knowledge and understanding of the module's learning outcomes.
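For readers unfamiliar with how such a query is issued, the sketch below shows an equivalent nucleotide search submitted programmatically with Biopython's NCBIWWW.qblast. This is purely illustrative: the students in the study used the NCBI web interface, Biopython is an assumed dependency, and the sequence shown is hypothetical.

# Illustrative sketch only: the study's assessment uses the NCBI BLAST web
# interface, but an equivalent nucleotide query can be issued programmatically
# with Biopython (an assumed dependency, not a tool used in the assessment).
from Bio.Blast import NCBIWWW, NCBIXML

# A short, hypothetical raw DNA sequence of the kind pasted into BLAST.
query_sequence = "ATGGTGCACCTGACTCCTGAGGAGAAGTCTGCCGTTACTGCC"

# Submit the query to the NCBI servers: blastn against the nucleotide database.
result_handle = NCBIWWW.qblast("blastn", "nt", query_sequence)

# Parse the XML results and report the best-matching database entries,
# mirroring the interpretation step students carry out on screen.
blast_record = NCBIXML.read(result_handle)
for alignment in blast_record.alignments[:3]:
    best_hsp = alignment.hsps[0]
    print(alignment.title)
    print(f"  E-value: {best_hsp.expect}, identities: {best_hsp.identities}/{best_hsp.align_length}")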
The aim was to create a coursework test for this module which met the following requirements:
first, it must be practical, able to be delivered to large numbers of students, marked automatically
and administered by a small module team; second, this must be a valid test, in terms both of content
validity, with the assessment constructively aligned with the learning outcomes of a year three
module (Biggs, 1999), and in terms of face validity, since it is important for the students to consider
the assessment to be a ‘good’ test (Dermo, 2009). In addition, the test must be secure and reliable.
The major logistical challenge to be addressed during development of the assessment was ensuring that students, who had access to the Internet (BLAST runs in a browser and requires an active Internet connection), did not retrieve material that would jeopardise test security and invalidate the summative assessment.
The research questions which this study aimed to answer were:
• Was it possible to devise a test which could be delivered securely and administered economically to approximately 150 students, and marked automatically?
• Was this test a valid and authentic assessment of the learning objectives for the level 6 (i.e. final year undergraduate) module, challenging the students at an appropriate level?
• Was this test a reliable measure of achievement?
• Was the test viewed by the students as a valid assessment at their level?
Assessment Tools and Processes
The University of Bradford operates a dedicated e-assessment suite, designed to run high-stakes
assessment via a thin-client based server array (Richardson et al., 1998; Dermo, 2011). One of the
advantages of this system is that terminals can be modified so that web browsers lack address
toolbars and open directly at the NCBI homepage, thus confining student access to the BLAST
software during the summative assessment. Moreover, these terminals are linked to a secure server
that enables students to resume assessments on any of the other terminals in the suite should a
terminal fail and also serves to provide high-stakes encryption of assessment data.
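As a rough illustration of this kind of locked-down configuration (and not the University of Bradford's actual thin-client setup), a Chromium-based browser can be launched in kiosk/app mode so that no address bar is available and the session opens directly at the NCBI homepage; the binary name and flags below are assumptions about one possible arrangement.

# Minimal illustration only, not the University of Bradford's actual
# thin-client configuration: a Chromium-based browser launched in kiosk/app
# mode shows no address bar and opens directly at the NCBI homepage.
import subprocess

NCBI_HOME = "https://www.ncbi.nlm.nih.gov/"  # the only site candidates need

subprocess.run([
    "chromium",            # assumed browser binary on the terminal image
    "--kiosk",             # full screen, no browser chrome or address bar
    f"--app={NCBI_HOME}",  # open as a single-site application window
    "--incognito",         # avoid leaving cached data between candidates
])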
Questionmark Perception (QMP) is the University of Bradford's chosen tool for summative high-stakes e-assessment. It is widely used for formative and summative online assessments, supports
varied question types, can deliver generic pre-prepared feedback on a question level or by topic and
is an established tool for assessment and feedback in the biosciences (Olson and McDonald, 2004).
Many of these question types are closed questions, which can be automatically marked by the
system enabling rapid turnaround and same-day release of marks to students.
In order to introduce students to the e-assessment and familiarise them with the BLAST tool,
formative exercises related to the assessment were made available via the University of Bradford’s
virtual learning environment (Blackboard). This enabled students to practise both the style of
questions that would appear in the summative assessment and gain some self-directed experience
in using BLAST throughout the module. Formative micro-assessments such as this have a proven
track record in Bioscience degrees (Thin, 2006). This continuous assessment process was supported
by a dedicated formative workshop in the e-assessment suite where students had the opportunity to
attempt formative questions and interface with the BLAST tool in a manner consistent with the
summative exam, thus gaining some insight into how their learning was developing (Ramsden,
1999). This is good practice as laid down by the University's policy for computer assisted assessment.
The summative e-assessment comprises forty closed questions delivered online: various question
types were used, including drag and drop, fill in the blanks, hotspot, matching, multiple choice,
multiple response, numeric, pull-down list, ranking and select-a-blank. These questions relate to 'genetics case studies' that present the student with a DNA sequence and task them with analysing this information using the BLAST tool and using the data to answer the questions. This format, therefore,
establishes a process whereby the learner is required to apply knowledge and expertise when
interpreting the results of the BLAST search tool combined with knowledge and understanding of the
module learning outcomes. An example of a case study and related question can be seen in Figure 1.
Figure 1. Screen capture of the BLAST software data and associated multiple choice question taken
from a previous iteration of the e-assessment. Note the requirement to be able both to interpret the
BLAST data and to possess module-specific knowledge relating to that data in order to answer the question correctly.
As it was originally envisaged and designed, the online test itself consists of two simultaneous
browser sessions. In one browser window, the students have access to the actual BLAST tool. In the
second browser, the students are delivered an online test in which they are given a DNA sequence,
which they copy and paste into the BLAST tool for analysis. Students are then shown a number of
questions about the genetic data they have just inputted into BLAST. The students have to read and
understand the data sent back to them by BLAST and answer the questions. In this way the students
have to be able to use the BLAST tool appropriately, and understand and interpret the data sent
back to them. This replicates closely how the BLAST tool is used in the work place. Although in the
real world they would be using the data to inform decisions, not to answer MCQs, the students are
applying their knowledge of the subject to be able to answer the questions correctly and this can
certainly be considered to be an authentic assessment task (Boud, 2000; Boud and Falchikov, 2006).
The assessment has been administered over three academic years and item analysis has routinely
been run on item performance. After use, items are kept secure in a virtual item bank and new
questions are added to the bank each year. The test was deliberately designed so that new
questions could be quickly and easily developed, based on the learning outcomes. It is anticipated
that soon a complete and comprehensive bank of secure items will be established for use in future
years. Of course, quality assurance checks are carried out in conjunction with external examiners, as
with any high stakes assessment items, according to the UK Quality Code for Higher Education (QAA,
2011).
Methods
For this case study, test result statistics and student questionnaire data were gathered and analysed
in order to answer the research questions above. As is the case with many educational studies, a
mixed methods approach is appropriate, combining qualitative and quantitative approaches to data
collection and analysis (Pring, 2004).
Quantitative Data Analysis
Three years of coursework scores were collected from 2010 to 2013. A post hoc analysis of these
data was carried out to investigate whether there was an acceptable distribution of scores and
mean scores which would indicate that the assessment has been sufficiently challenging for the
students, in line with normal procedures for quality assurance. Objectively marked questions have
been criticised for not assessing at a high enough level for higher education: if such a test can produce an appropriately challenging mean score and a score distribution that approaches normality, then these criticisms can be
challenged. In addition, the reliability of the assessment was measured with a test of internal
consistency (Cronbach’s Alpha).
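A minimal sketch of how these statistics might be computed from an item-level score matrix is given below; the data layout, variable names and simulated figures are assumptions for illustration and do not reproduce the authors' analysis scripts.

# Sketch of the post hoc analysis described above, assuming the raw data are
# available as a students x items matrix of item scores (illustrative only,
# not the authors' actual analysis code).
import numpy as np

def cohort_summary(total_scores):
    """Mean, standard deviation and standard error of the mean for one cohort."""
    scores = np.asarray(total_scores, dtype=float)
    mean = scores.mean()
    sd = scores.std(ddof=1)            # sample standard deviation
    sem = sd / np.sqrt(len(scores))    # standard error of the mean
    return mean, sd, sem

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_students, n_items) array of item scores."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of students' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Example with simulated data: 130 students answering 40 dichotomously marked
# questions (the numbers here are illustrative, not the study's data).
rng = np.random.default_rng(0)
simulated = rng.integers(0, 2, size=(130, 40))
percentage_totals = simulated.sum(axis=1) / 40 * 100
print(cohort_summary(percentage_totals))
print(cronbach_alpha(simulated))

Applied to each year's real response matrix, a calculation of this kind would yield the sort of mean, spread and alpha coefficient reported in Figures 3 and 5 below.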
Qualitative Data Analysis
A specific student questionnaire was delivered to a student cohort (in year 2 of the study). This
consisted of closed Likert scale opinion questions, using the scale strongly agree / agree / neutral /
disagree / strongly disagree, as well as open-ended questions to elicit comments on positive aspects
as well as constructive criticism. The questions used in the student questionnaire were selected
based on the issues arising from the literature, as outlined above, and can be seen in Figure 2.
below:
1. Learning to use the BLAST software to analyse DNA sequences is well suited to the e-assessment facilities at UoB
2. The extended MCQs tested my understanding (as opposed to simply recollection) of the
module material
3. This e-assessment assesses things that would not have been possible on a paper-based test
4. The use of BLAST and this e-assessment are simply a gimmick and do not benefit my
learning.
5. Do you have any positive comments about this assessment?
6. Have you any suggestions how this assessment can be improved?
Figure 2. Questions used for student qualitative data questionnaire.
In addition, the researchers were able to search through data from the university module evaluation
questionnaires completed by students over the course of the three years. This generic questionnaire
does not specifically address the BLAST assessment, but there is space for additional student
comments, where we might expect to find references to the assessment on the module.
Results and Findings
Quantitative Data Analysis
The first iteration of the summative e-assessment ran in November 2010, and the assessment has now run for three academic cycles. In order to compare student performance in the e-assessment across all three iterations, the distribution of marks for each cohort was calculated as the percentage of students from each year, and mean values plus the standard error of the mean were derived. These data can be seen in Figure 3a., which demonstrates a normal-like distribution of marks. Similarly, the mean
mark obtained in the e-assessment over three academic years was calculated alongside standard
deviation and standard error of the mean (Figure 3b.).
Figure 3. Histogram showing distribution of percentage coursework scores 2010-2013 and a table showing mean percentage scores by year, with standard deviation (SD) and standard error of the mean (SEM).

           2010      2011      2012
Mean      48.78     50.85     52.08
SD        13.02     13.91     14.12
SEM        1.14      1.14      1.06

The distributions of coursework scores for each of the three different cohorts of students in this study also approximate a normal distribution, with mean scores ranging from 48.8% to 52.1% (see Figure 4.).
Figure 4. Histograms showing the performance of each year's student cohort, indicating a normal-like distribution of scores and consistent performance from year to year.
In terms of inter-item reliability, the test performed well from year to year, with a Cronbach's alpha value between 0.7 and 0.8 for each administration of the test (see Figure 5.). This would generally be considered a good internal consistency estimate of the reliability of the test scores (Kline, 1999).
Test reliability: Cronbach's Alpha
2010-11    0.77
2011-12    0.78
2012-13    0.70
Figure 5. Assessment reliability 2010-2013.
Qualitative Data Analysis
A student questionnaire was distributed to the 2011-12 student cohort, with a response rate of over 75 per cent (n = 115/150 = 76.67%). Collated student responses (as outlined in Figure 6.) reveal that a large majority of respondents (83.5%) agreed or strongly agreed that the BLAST test was well suited to the facilities. Almost three-quarters (72.2%) of students stated that they believed that the questions assessed understanding rather than factual recall, and approximately two thirds (64.4%) were of the opinion that e-assessment can enable activities which would not be possible on paper. A similar proportion (70.4%) believed that this assessment was more than a gimmick and did benefit their learning.
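The headline figures quoted above are simple aggregations of the Figure 6 percentages; the short sketch below reproduces them (a worked illustration only, using the published percentages rather than raw response data).

# Reproducing the headline figures quoted in the text from the Figure 6
# percentages (a worked illustration, not the authors' analysis code).
figure_6 = {
    "BLAST well suited to the e-assessment facilities": (29.6, 53.9, 10.4, 2.6, 3.5),
    "Extended MCQs tested understanding":               (25.2, 47.0, 17.4, 5.2, 5.2),
    "Assesses things not possible on paper":            (18.3, 46.1, 19.1, 12.2, 4.3),
    "BLAST e-assessment is simply a gimmick":           (3.5, 7.0, 19.1, 39.1, 31.3),
}
# Columns: strongly agree, agree, neutral, disagree, strongly disagree.
for statement, (sa, a, n, d, sd) in figure_6.items():
    print(f"{statement}: agree {sa + a:.1f}%, disagree {d + sd:.1f}%")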
Figure 6: Percentage responses to Likert scale opinion questions on student questionnaire.

                                                           Strongly                              Strongly
                                                           Agree     Agree    Neutral  Disagree  Disagree
Learning to use the BLAST software to analyse DNA
sequences is well suited to the e-assessment
facilities at the UoB.                                       29.6     53.9     10.4      2.6       3.5
The extended MCQs tested my understanding (as opposed
to simply recollection) of the module material.              25.2     47.0     17.4      5.2       5.2
This e-assessment assesses things that would not be
possible on a paper-based test.                              18.3     46.1     19.1     12.2       4.3
The use of BLAST and this e-assessment are simply a
'gimmick' and do not benefit my learning.                     3.5      7.0     19.1     39.1      31.3

In addition, respondents were able to write comments in response to the open-ended prompt 'Do you have any positive comments?' These free responses were analysed and collated and the most frequent themes were identified.

Students mentioned that the test was a good assessment of understanding and application of knowledge, which concurs with the second question on the questionnaire. It was also stated that the test was well organized, clear and easy to use, and it was recognized that marking and feedback are quicker. Students found the process helpful for their revision and study and valued the BLAST tool as useful to them. They also appreciated seeing a new approach to assessment and some liked the fact that the task was challenging for them. Figure 7. contains some examples of the positive comments raised by students to the open-ended questions.
Figure 7: Some quotes from students, typifying some of the positive themes emerging from open
student responses on the questionnaire.
In addition, respondents were asked to offer constructive criticism. The most frequent suggestions and criticisms were that some students found the test too difficult, or that it covered too much content. There were also some comments about technical issues with the BLAST tool or IT in general, especially concerning slow response times, waiting, and the pressure and stress which this caused.
The researchers also examined the completed general module evaluation feedback questionnaires
for the Medical Genetics module over three years. Only a few such comments were identified, and
these typically gave positive reflections on the experience, reiterating the positive themes emerging
from the questionnaire responses. Figure 8 shows typical comments which were found.
‘The coursework elements were well received. Using the blast software was something novel and
never experienced before in a degree capacity.’
‘The individual assessment ran quite well on the computers and it also allowed for quick feedback
to be gained in terms of results.’
‘20% computer based examination is a lot better than MCQ based exam as it taught me how to use
BLAST well. I like the fact that we are examined in different area this year.’
‘I enjoyed most of the lectures, particularly enjoyed learning about diseases. The first assessment
was difficult but it really pushed me to go through the lecture slides.’
Figure 8. Extracts from module feedback questionnaires.
Discussion
Student performance and qualitative feedback broadly demonstrate that the e-assessment has met
expectations with regard to providing a challenging and innovative assessment and providing
students with discipline-relevant skills.
In terms of the research questions specified earlier, we can conclude that it is possible to devise a
test which can be delivered securely and administered economically to large groups of students.
Using an authentic online research tool alongside online questions, this test provides a reliable, valid
and authentic assessment of the learning objectives for the module, offering a challenging test of students' skills. In addition, qualitative data supports the idea that the test was certainly
viewed by the students as a valid assessment.
However, these data also highlight several areas where improvement and modification can be made,
especially with regard to learner support and risk management.
Arguably, the most innovative aspect of the e-assessment presented in this article is the interface
between data obtained using the BLAST software and online questions, which requires students to
interpret BLAST data and link it to module-specific learning outcomes. A greater emphasis on the
problem-solving aspect of questions and relying less on the learner’s ability to recall key terms and
phrases from lectures is something we wish to implement in future versions of the e-assessment.
One route for this is to train students to use the BLAST software beyond basic DNA sequence
analysis. This would enable far more complex case studies to be written that include problems which
require participants to navigate through the BLAST software in a more involved manner in order to
obtain the data relevant to the question being answered. Such questions are inherently more
interactive in nature and begin to approach ‘real-world’ scenarios, two elements that others have
shown to both motivate students and encourage strong independent learning (Mustoe and Croft,
1999). We are also interested in the possibility of running formative team-based learning (TBL)
sessions around this assessment. Such collaborative learning environments would be well suited to
the formative session that aims to develop student proficiency with the BLAST software. There is
strong evidence that such TBL sessions foster active learning and improve critical thinking amongst students (Allen and Tanner, 2005; Herreid, 2013), two goals that are consistent with the overarching aims of the e-assessment.
The mean mark for the BLAST e-assessment for the three cohorts is 50.57%, which demonstrates
that students find the assessment somewhat challenging. While the Cronbach Alpha coefficients
suggest that internal consistency of the questions is sound, it is important to reflect upon the ratio of
different question types used throughout the assessment and adjust these so that the test remains
challenging, but enables a normal distribution of marks. As can be seen in Figure 3a., the data
gathered on student performance over three years approaches a normal distribution, but there is an
overrepresentation of candidates in the 40-49% bracket of marks. Currently, of the forty questions
that comprise the summative assessment, fifteen of these are multiple response questions (MRQs),
where students need to select more than one option to gain full marks. Such question types are
valuable when attempting to generate more authentic objectively-marked assessments; however,
there is also a risk that the complex nature of MRQs may lead to an inability to discriminate between
weaker and stronger students and thus negatively impact on test quality (McAlpine and Hesketh,
2003). On reflection, we feel that MRQs may be slightly overrepresented in the e-assessment and
intend to reduce the number of these question types from fifteen down to ten in the 2013 iteration
of the test.
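By way of illustration, the sketch below shows all-or-nothing marking of an MRQ, one common scheme consistent with the description above ('students need to select more than one option to gain full marks'); the exact QMP scoring settings used in the study are not specified, so this is an assumption rather than the study's marking scheme.

# Illustrative all-or-nothing marking of a multiple response question (MRQ).
# The exact QMP scoring settings are not specified in the paper, so this is an
# assumed scheme consistent with "more than one option to gain full marks".
def mark_mrq(selected, correct):
    """Return 1 if exactly the correct set of options is chosen, otherwise 0."""
    return 1 if set(selected) == set(correct) else 0

correct_options = {"A", "C", "D"}                    # hypothetical answer key
print(mark_mrq({"A", "C", "D"}, correct_options))    # 1: full marks
print(mark_mrq({"A", "C"}, correct_options))         # 0: one missed option loses all credit

Under such a scheme a candidate who identifies most but not all of the correct options scores the same as one who guesses blindly, which illustrates why a high proportion of MRQs can weaken discrimination between stronger and weaker students.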
As discussed earlier, the University of Bradford’s high-stakes assessment suite utilises a thin-client
based server array that enables restricted access to websites. One unfortunate complication of using the BLAST software is that the NCBI website also contains pages with significant information about the structure, function and medical relevance of genes. This means that allowing students access to the NCBI website to use the BLAST tool also permits them to see material that (if accessed) would invalidate the summative closed-book assessment. Currently, these risks are mitigated by invigilation by four academics, one for every 25 students, but we are keen to explore other options. Crucially, the work-relevant aspects of interpreting data derived from the BLAST software need to remain in situ, as losing this would undermine the remit of the e-assessment and render it unfit for purpose.
A second risk management issue relates to the response time following a request being sent to the
BLAST server. Student feedback provided in the questionnaire raised concerns over this process
being both lengthy and unpredictable. The length of time it takes BLAST to return data can vary from a few seconds to a few minutes, depending on the complexity of the requested search and the time of day. Slower response times reflect peak usage, which tends to coincide with daylight hours in the U.S. Obviously, it is imperative that all students have an equivalent assessment
experience and therefore these concerns need to be addressed.
One possible solution to both of the issues discussed above would be to present students with
‘screen capture’ images taken from the BLAST software during the summative assessment, rather
than requiring the student to access the BLAST data online. Such an approach would do little to
reduce the validity and authenticity of the assessment, as the key skills associated with the BLAST
software are in data analysis, rather than the actual entry of the query sequence, the latter simply requiring students to 'copy and paste' a string of letters into the BLAST search field. This change
would serve to remove any variation in assessment experience caused by varying BLAST search
times and remove the possibility of candidates accessing inappropriate materials via the Internet
during the exam. It would also reduce the amount of software running on the thin-client server from
two (online test and web browser with BLAST) to one, which is likely to improve the stability of the
e-assessment. Were this change to be implemented, we would likely improve on the formative
BLAST session, perhaps by switching to a team-based learning approach, in order to use this time to
develop student proficiency with the ‘live’ BLAST software so that this aspect of the e-assessment is
improved, rather than diminished due to the ‘offline’ nature of the summative exam.
One other area for development for this assessment is to create a mobile learning version of the
practice version of this assessment, using rich formative feedback to support the students during the
learning process. Data gathered on the Medical Genetics module as part of an HEA-funded Individual
Teaching Development project has indicated considerable interest in such a tool among these
students, and work is already underway to develop and implement a novel, interactive mobile
learning resource that students will be able to access at any time and in any setting, in order to engage with the module learning outcomes.
Conclusion
In conclusion, it has been possible to use the BLAST tool and Questionmark Perception online
assessments to devise a test which can be delivered securely and administered economically to large
groups of students, and marked automatically. In addition, there is evidence that this test is a valid,
authentic and reliable assessment of the learning objectives for final year undergraduates which
challenges the students at an appropriate level. The results of surveys of student opinion, and
positive feedback from student module feedback questionnaires also suggest that the test is
effective in terms of face validity.
The conclusions from this study are of potential interest and relevance to lecturers across a range of
disciplines or professional fields: in particular, careful design of online assessment questions, in
conjunction with integration with real life authentic online tools, can enable instructors to assess
complex higher order understanding in a valid, reliable and practical way, and can require students
to demonstrate skills relevant for the work place.
References
Allen, D. and Tanner, K. (2005) Infusing active learning into the large-enrollment biology class: seven
strategies, from the simple to complex, Cell Biology Education, 4(4), 262-268.
Biggs, J.B. (1999) Teaching for quality learning at University – what the student does. Buckingham:
SRHE and Open University Press.
Bloom, B. S. (1956) Taxonomy of Educational Objectives, Handbook I: The Cognitive Domain. New
York: David McKay Co Inc.
Boud, D. (2000) Sustainable assessment: rethinking assessment for the learning society, Studies in
Continuing Education, 22(2), 151-167.
Boud, D. and Associates (2010) Assessment 2020: Seven Propositions for Assessment Reform in
Higher Education. Sydney: Australian Learning and Teaching Council.
Boud, D. and Falchikov, N. (2006) Aligning Assessment with long-term learning, Assessment and
Evaluation in Higher Education, 31(4), 399-413.
Brown, S. and Knight, P. (1994) Assessing Learners in Higher Education. London: Kogan Page.
Bryan, C. and Clegg, K. (eds.) (2006) Innovative Assessment in Higher Education. London: Routledge.
Bull, J. and McKenna, C. (2004) Blueprint for computer-assisted assessment. London:
RoutledgeFalmer.
Crisp, G.T. (2007) The e-Assessment Handbook. London: Continuum.
Dermo, J. (2009) e-Assessment and the student learning experience: A survey of student perceptions
of e-assessment, British Journal of Educational Technology, 40(2), 203-214.
Dermo, J. (2011) Technology Enhanced Assessment for Learning: Case Studies and Best Practice.
Briefing Paper for the Higher Education Academy. Available at: http://www.heacademy.ac.uk/assets/documents/learningandtech/Bradford_Briefing_Report_8_Dec_2010.pdf (Accessed: September 2013).
Gibbs, G. and Simpson, C. (2004) Conditions under which assessment supports students' learning,
Learning and Teaching in Higher Education, 1(1), 3-31.
Herreid, C.F. (2013) ConfChem Conference on Case-Based Studies in Chemical Education: The Future
of Case Study Teaching in Science, Journal of Chemical Education, 90(2), 256-257.
JISC (2007) Effective Practice with e-Assessment. Bristol: JISC.
JISC (2010) Effective Assessment in a Digital Age: A guide to technology enhanced assessment and
feedback. Bristol: JISC.
Jordan, S. (2013) E-assessment: Past present and future, New Directions, Articles ASAP. Available at:
http://journals.heacademy.ac.uk/doi/full/10.11120/ndir.2013.00009 (Accessed: September
2013).
Kline, P. (1999) The handbook of psychological testing. London: Routledge.
McAlpine, M. and Hesketh, I. (2003) Multiple response questions - allowing for chance in authentic
assessments, in Christie, J. (ed.) Proceedings of the 7th International CAA Conference.
Loughborough University, Loughborough.
Madden, T. (2002) NCBI Handbook. Available at: http://www.ncbi.nlm.nih.gov/books/NBK21097/
(Accessed: September 2013).
Mustoe, L.R. and Croft, A.C. (1999) Motivating Engineering Students by Using Modern Case Studies,
European Journal of Engineering Education, 15(6), 469-476.
National Student Forum (2009) National Student Forum Annual Report. Available at: http://www.bis.gov.uk/assets/BISCore/higher-education/docs/N/09-p83-national-studentforum-annual-report-09.pdf (Accessed: September 2013).
National Union of Students. (2009) Assessment Purposes and Practices. NUS briefing paper.
Nicol, D. and Macfarlane-Dick, D. (2006) Formative assessment and self-regulated learning: a model and seven principles of good feedback practice, Studies in Higher Education, 31(2), 199-218.
Nicol, D. (2007) E-assessment by design: using multiple-choice tests to good effect, Journal of Further and Higher Education, 31(1), 53-64.
Olson, B.L. and McDonald, J.L. (2004) Influence of online formative assessment upon student learning in biomedical science courses, Journal of Dental Education, 68, 656-659.
Pring, R. (2004) Philosophy of Educational Research. London: Continuum.
QAA (2011) The UK Quality Code for Higher Education. Available at:
http://www.qaa.ac.uk/AssuringStandardsAndQuality/quality-code/Pages/default.aspx
(Accessed: September 2013).
Ramsden, P. (1999) Learning to Teach in Higher Education. London: Routledge.
Richardson, T., Stafford-Fraser, Q., Wood, K.R. and Hopper, A. (1998) Virtual network computing,
IEEE Internet Computing, 2, 33-38.
Sivan, A., Leung, R.W., Woon, C.C. and Kember, D. (2000) An implementation of active learning and
its effect on the quality of student learning, Innovations in Education and Training
International, 37, 381-389.
Thin, A. (2006) Using Online Microassessments to Drive Student Learning, Bioscience Education, 7.
Available at: http://journals.heacademy.ac.uk/doi/abs/10.3108/beej.2006.07000008
(Accessed: September 2013).
