APPLICABILITY OF E-ASSESSMENT IN IRAN AS AN EFL
CONTEXT: FROM FANTASY TO REALITY
Abouzar Shojaei (Corresponding author)
Department of English Language, Fars Science and Research Branch, Islamic Azad University,
Fars, Iran
Email: aboozar_shojaei1361@yahoo.com
Abbas Motamedi
English Department, Islamic Azad University, Kazerun Branch, Kazerun, Iran
Email: Ab.motamedi@gmail.com
ABSTRACT
Language assessment, in line with theoretical changes in language and psychology, has undergone
major shifts both in theory and practice. Unlike traditional discrete point testing that strived for
maximally objective measurement procedures without reference to particular teaching and
learning situations, recent approaches to language assessment have viewed assessment as an
integral part of the teaching and learning processes. Emergence of formative assessment, dynamic
assessment, and E-assessment could be attributed to such recent developments. Given that E-assessment is the least understood and researched of these, especially in EFL (English as a Foreign Language) contexts, the present article reviews the underlying motives for the emergence and application of E-assessment in order to reach context-specific decisions about the applicability of such an assessment procedure in Iran as an EFL context. The usefulness of E-assessment needs to be determined by weighing its advantages and disadvantages with reference to particular testing situations.
KEYWORDS: E-assessment; EFL assessment
PRELIMINARIES
Haken (2006) explains that assessment is integral to ensuring that an educational institution achieves its learning goals, and is a crucial means of providing the evidence necessary for seeking and maintaining accreditation. Hersh (2004) advocates the position that assessment of student learning should be considered an integral part of the teaching and learning processes, as well as part of the feedback loops that serve to enhance institutional effectiveness. Good assessment serves multiple objectives (Swearington, n.d.) and benefits a number of stakeholders (Love & Cooper, 2004). According to Dietal, Herman, and Knuth (1991), assessment provides an accurate measure of student performance to enable teachers, administrators, and other key decision makers to make effective decisions. Kellough and Kellough (1999) identified seven purposes of assessment:
1-Improving students' learning;
2-Identifying students’ strengths and weaknesses;
3-Reviewing, assessing, and improving the effectiveness of different teaching strategies;
4-Reviewing, assessing, and improving the effectiveness of curricular programs;
5-Improving teaching effectiveness;
6-Providing useful administrative data that will expedite decision making; and
7-Communicating with stakeholders (p.56).
Most individuals in the assessment community believe that the assessment process begins with the
identification of learning goals and measurable objectives (Martell & Calderon, 2005) as well as
the use of specific traits that help define the objectives being measured (Walvoord & Anderson,
1998). These traits are frequently correlated with the developmental concepts articulated in
Bloom’s Taxonomy of Educational Objectives which provides a recognized set of hierarchical
behaviors that can be measured as part of an assessment plan (Harich, Fraser, & Norby, 2005).
Assessment is not new to academia, with the roots of the current movement dating back over two
decades (Martell & Calderon, 2005). But two decades hardly take us back to the origins of
educational assessment. According to Pearson, Vyas, Sensale, and Kim (2001), assessment of
student learning has been gaining and losing popularity for well over 150 years. In K-12
education, assessment first emerged in America in the 1840’s, when an early pioneer of
assessment, Horace Mann, used standardized written examinations to measure learning in
Massachusetts (Pearson et al., 2001). After losing momentum, the scientific movement of the
1920’s propelled the use of large-scale testing as a means of assessing learning (Audette, 2005).
The 1960’s saw further support of standardized testing when the National Assessment of
Educational Progress was formed, which produced the Nation’s Report Card (Linn, 2002). But
perhaps no initiative has had as broad and pervasive an impact as the No Child Left Behind Act of
2001 (NCLB), which formally ushered us into an age of accountability. The NCLB act is a
sweeping piece of legislation that requires regularly administered standardized testing to
document student performance. The NCLB act is based on standards and outcomes, measuring
results, and holding schools accountable for student learning (Audette, 2005). In 2006, Congress was required to reauthorize the Higher Education Act, and it was predicted that NCLB would lead to changes in higher education assessment requirements (Ewell & Steen, 2006). In higher
education, the first attempts to measure educational outcomes emerged around 1900 with the
movement to develop a mechanism for accrediting institutions of higher education (Urciuoli,
2005). In 1910, Morris Cooke published a comparative analysis of seven higher education
institutions including Columbia, Harvard, Princeton, MIT, Toronto, Haverford, and Wisconsin.
The result of the report was the establishment of the student credit hour as the unit by which to
calculate cost and efficiency (Urciuoli, 2005). By 1913, accreditation in higher education had
spread nationwide with the formation of a number of accrediting bodies (Urciuoli, 2005). The
United States is unusual in that it relies on private associations rather than government agencies to
provide accreditation of academic institutions and programs. A number of reports released in the
mid 1980’s charged higher education to focus on student learning (Old Dominion University,
2006). During that time, the first formal assessment group, the American Association for Higher Education (AAHE) Assessment Forum, was formed in 1987. In 1992,
accrediting agencies were required, under a Department of Education mandate, to consider learning outcomes as a condition for accreditation (Ewell & Steen, 2006).
Assessment experts point to pioneers of the assessment movement, Alverno College and Northeast
Missouri State University, which have both been committed for over three decades to outcomes-
based instruction. Kruger and Heisser (1987), who evaluated the Northeast Missouri State University assessment program, found that the variety of assessments and questionnaires employed, as well as the use of a longitudinal database that supports multivariate analysis, makes this institution an exemplar in the effective use of quality assessment to support sound decision making.
The oldest recognized undergraduate assessment program in the United States can be found at the
University of Wisconsin, which has reported on some form of student outcomes assessment
continuously since 1900 (Urciuoli, 2005).
The assessment movement is not limited to the United States. In the United Kingdom, the Higher
Education Funding Council was established following the Further and Higher Education Act of
1992, requiring the assessment of quality of education in funded institutions. In 2004, the Higher
Education Act was passed with the goal of widening access to higher education as well as keeping
UK institutions competitive in the global economy (Higher Education Funding Council for
England, 2005). The formation of the European Union has created a need for the communication of educational quality. According to Urciuoli (2005), educational discourse in Europe and the UK is becoming dominated by the terms standards and accountability, which were born and have been growing within the United States for many years.
APPROACHES TO ASSESSMENT
Petkov and Petkova (2006) recommend course-embedded assessment as having the advantage of
ease of implementation, low cost, timeliness, and student acceptance and note that the type of
performance appraisal supported by rubrics is particularly effective when assessing problem
solving, communication and team working skills. They explain that rubrics should not be
considered checklists but rather criteria and rating scales for evaluation of a product or
performance. According to Aurbach (n.d.), rubrics articulate the standards by which a product,
performance, or outcome demonstration will be evaluated. They help to standardize assessment,
provide useful data, and articulate goals and objectives to learners. Rubrics are also particularly
useful in assessing complex and subjective skills (Dodge & Pickette, 2001).
Petkov and Petkova (2006) who implemented rubrics in introductory IS courses found that the use
of rubrics helped to make assessment more uniform, better communicate expectations and
performance to students, measure student progress over time, and help to lay the foundation for a
long-term assessment program that combines projects and portfolios. They argued that measuring
students’ knowledge, strengths, and weaknesses prior to instruction is done through diagnostic
testing. Diagnostic assessment allows educators to remedy deficiencies as well as make curricular
adjustments.
Haken (2006) similarly explained that it is important to measure knowledge; however, measuring
knowledge is not enough. Hence, the current charge in education is to transform learning and
assessment from the world of memorized facts to a broad, well-rounded model that reflects the
learner-centered outcomes of an academic program (Wright, 2004). As a result, an academic
program should work on building as well as assessing students’ critical-thinking skills (Haken,
2006). According to Walcott (2005), who examined business education, examples of critical thinking can be found in creating marketing plans, interpreting financial statement ratios, recommending organizational restructuring, identifying and analyzing ethical issues, working through case studies, evaluating a company's strengths and weaknesses, and creating portfolios. Portfolios
can be used to assess learning-outcome achievement as well as to diagnose curriculum
deficiencies that require improvement (Popper, 2005). Popper explained that portfolios should
include a variety of samples of student work. According to the American Association for Higher
Education (2001), portfolios have a broad application in a variety of contexts for the collection of
meaningful evidence about learning outcomes. According to Chun (2002), a portfolio should
require students to collect, assemble, and reflect on samples that represent the culmination of their
learning. Cooper (1999) identified six considerations of the portfolio building process:
identification of skill areas, design of measurable outcomes, identification of learning strategies,
identification of performance indicators, collection of evidence, and assessment. Wiggins (1990)
suggests that work being assessed should be authentic or based on the real world. Pellegrino,
Chudowsky, and Glaser (2001) suggest that formative assessments focus less on student responses
and more on performance. As a result, many institutions are anchoring their assessment activities in meaningful scenarios so that students are assessed on their ability to apply learning to realistic situations.
E-ASSESSMENT USE
E-assessment is often seen as a partial solution to the problem of providing assessment for increasing numbers of students and declining staff-to-student ratios (Sim et al., 2004). In addition, students may experience cognitive conflict because they are generally expected to word-process essays and engage in online tasks but to use pens in examination halls (Brown et al., 1997), such that we are training them in one system and testing them in another. Gipps (2003) reasons that if teaching and its associated resources become electronic, then assessment too will need to take that route, to ensure alignment between the modes of teaching and assessment.
Subjective judgment is always involved: when educators create a test, they do so with their internal biases about the type and nature of the material. When the limits of the assessment and the type and nature of the 'correct' answers are preset, the educator introduces their own judgment and bias into the system from the start. However, the extent of bias can be reduced because, in e-assessment, judgments are made only against the original criteria and are not subject to 'human introduced error' (e.g. marking at 2 a.m.), so a second level of error is not introduced. In addition, levels of correctness can be programmed into the system so that partially correct answers are scored in a more consistent manner.
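To illustrate the last point, the sketch below (Python, with an invented marking rule and invented item data, not drawn from the article or from any particular platform) shows how preset criteria can award partial credit identically for every candidate, however and whenever the marking is run.

```python
# A minimal sketch of consistent partial-credit scoring for a multiple-response item.

def score_multiple_response(selected, correct, max_mark):
    """Award an equal share of max_mark per correct option selected,
    deduct the same share per incorrect selection, never going below zero."""
    per_option = max_mark / len(correct)
    earned = sum(per_option for option in selected if option in correct)
    penalty = sum(per_option for option in selected if option not in correct)
    return max(0.0, round(earned - penalty, 2))

# Hypothetical item worth 4 marks whose key is {A, C, D}: every candidate with
# the same selections receives exactly the same mark, whoever runs the marking.
print(score_multiple_response({"A", "C"}, {"A", "C", "D"}, 4))       # 2.67
print(score_multiple_response({"A", "B", "C"}, {"A", "C", "D"}, 4))  # 1.33
```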
Advantages of E-Assessment
These are characteristics of good assessment practice and have links to a strong, well-evaluated pedagogy, as well as providing support for both staff and students; and, of course, online assessment has all the other advantages of distant access and choice of the time and place of assessment (although the latter may be limited for summative assessments that require security). When looking to use e-assessment, one finds that rapid grading is one of its strongest points. Test feedback can be given on a question-by-question basis, and with the use of a 'knowledge tracking system' students can follow their own progress and identify their weaknesses and strengths (a minimal sketch of this idea follows the list below). Some of the advantages of e-assessment that one might want to consider are:
• direct feedback to students,
• allows rehearsal and revision,
• immediate feedback to staff,
• allows evaluation of a course's strengths and weaknesses,
• can be connected to other computer-based or online materials.
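The 'knowledge tracking' idea mentioned above can be pictured as rolling per-question results up into a topic profile that shows a student where they are strong or weak. The sketch below is purely illustrative; the topic labels, data layout, and function name are assumptions rather than features of any specific system.

```python
# Illustrative only: per-question results aggregated into a topic profile.
from collections import defaultdict

def topic_profile(responses):
    """responses: iterable of (topic, is_correct) pairs for one student."""
    totals = defaultdict(lambda: [0, 0])  # topic -> [correct, attempted]
    for topic, is_correct in responses:
        totals[topic][1] += 1
        if is_correct:
            totals[topic][0] += 1
    return {topic: correct / attempted for topic, (correct, attempted) in totals.items()}

# Hypothetical attempt data: strong on verb tenses, weaker on articles.
attempt = [("tenses", True), ("tenses", True), ("articles", False), ("articles", True)]
for topic, rate in topic_profile(attempt).items():
    print(f"{topic}: {rate:.0%} correct")
```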
CONCERNS AND ISSUES ASSOCIATED WITH E-ASSESSMENT
Implementation of E-assessment does not seem realizable without a true and comprehensive understanding of the issues associated with it. In fact, E-assessment may not always be the first priority in particular educational contexts. Its usefulness should be determined with reference to particular situations, so this section is devoted to the issues relevant to electronic assessment.
Time Required
One of the claims most often made for e-assessment is that it saves time. This is perfectly true at the point of delivery: it is possible to process the results of a summative assessment for a class of, say, around 700 students within a couple of hours of the last one logging off, including error checking and results analysis. This has to be balanced against the time, and skill, needed to create the assessment in the first place. This may not be so important for formative assessments, which can be discussed with students later (and where failings may actually be of educational interest), but it is obviously vital that an end-of-course assessment should be reliable. The time and expertise required for this should not be underestimated, nor should the need for 'shredding and vetting' by colleagues. There are times when an open-ended exercise (whether we call it an essay, project or report) may be more suitable for your purposes. There is of course no reason why this cannot be delivered online, with students uploading written materials into virtual learning environments to be assessed offline.
Misleading Clues
'There is a danger that by picking out particular areas (either deliberately or inadvertently), the quizzes could send misleading clues to students about what is and isn't important. This is exacerbated by the students' tendency to be very strategic and exam-focused when considering how best to spend their study time' (Clarke et al., 2004, p. 253).
Equity and Diversity
When computers are involved in the assessment process, there are equity issues for different student groups relating to language status and gender, and issues around computer anxiety and exam equivalence. Brosnan (1999, pp. 48-49) suggests that 'computer anxiety can lead to simplification of conceptual reasoning, heightened polarization and extremity of judgment and pre-emption of attention, working memory and processing resources. Individuals high in computer anxiety will therefore underperform in computer-based versions of assessment'. Brosnan (1999) asserts that even those who are using computers effectively will still exhibit computer anxiety, and he contends that female students exhibit higher levels of anxiety and so poorer levels of performance. Ricketts and Wilks (2002) suggest that student performance in tests should be monitored to ensure fairness and consistency when there are any changes in delivery, whether this is a change to CAA or changes in the way that the CAA is presented.
Issues of Equivalence
The issues of equivalence between different forms of assessment are highlighted by Clariana and Wallace (2002), who assert that you cannot necessarily expect equivalent measures of student learning from computer-based and paper-based tests, even if you use the same questions. They assume that the 'test mode effect' will diminish once students become as familiar with the computer as they are with paper as a medium for assessment, and that computer familiarity might be an issue for some groups of students. Macdonald and Twining (2002) concur, expressing the belief that inconsistent findings relating to student scores in computer-based and paper-based tests often result from different levels of exposure to changing technologies. It is probably fair to observe that, in general, students perform differently under different conditions of assessment, and that innovations in CAA simply introduce a new range of variants on this theme.
It Attracts Greater Scrutiny
While problems with objective testing can occur whether the tests are offered on paper or online,
it is the online testing that tends to attract greater scrutiny. Don Mackenzie, in Brown et al. (1997, p. 217), contends that CAA has produced quality and efficiency gains in assessment, but that for many there have been marginally lower pass rates than for essay-type assessments. He suggests that the reason is a larger spread of marks (typically a standard deviation of 15 per cent with a mean of 50 per cent).
Design of Questions
Problems in the use of computers for multiple choice questions could derive from the design of
the questions and the skills of the designer (Mackenzie, 2003), rather than from the software or
the use of the computer per se, or it could be that some tutors may be reluctant to relinquish
traditional modes of assessment (Mackenzie, 2003).
Disparity
Research by Clariana and Wallace (2002) has shown that the use of CAA has a positive impact on the test scores of high-attaining pupils compared with paper-based tests; they attribute this to higher-attaining students adapting more quickly to new assessment approaches.
Noyes et al. (2004) suggest that lower-performing individuals will be disadvantaged when CAA is used, because a greater workload and additional effort are required to complete a computer-based test.
Change in Working Practices
The savings in time that might be produced by automated marking in CAA are instead shifted to the design and construction of the assessment activity (including the level and amount of feedback to be given). Brown et al. (1997) see this as a profound change in working practices for academics. There is also the issue of defining the requisite technical skills for students undertaking CAA, who should be involved in that training, and when it should take place, especially in the context of overloaded curricula (Weller, 2002). Macdonald and Twining (2002) found that their students only became competent in the use of a particular piece of software while they were completing an assignment that required its use.
Plagiarism
Plagiarism is a concern for many thinking of using CAA (Weller, 2002), but Rovai (2000) and Carroll (2002) suggest that assessment design is the key to deterring plagiarism. O'Hare and Mackenzie (2004) assert that a greater level of imagination and rigor is required for the design of online assessment than for more traditional forms of assessment. Weller et al. (2002) suggest that the use of portfolios can help to counter plagiarism, as these place less reliance on single assessment items. The JISC-funded Plagiarism Advisory Service gives advice and guidance on plagiarism prevention.
Off-Campus Assessment
Computer software for CAA allows questions to be presented to students in different orders, with distracters in different orders, and, if sufficient questions of sufficient integrity have been compiled, students can sit different tests. All of this allows students to sit at adjoining desks in computer laboratories that will at other times be used for learning activities. This is fairly straightforward for on-campus students, but could be more problematic for students taking courses at a distance. However, Rovai (2000) suggests that this difficulty can be overcome by using 'proctored testing', where academics arrange for students to sit online assessments under test conditions in alternative venues.
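As a rough sketch of how such software can generate distinct but comparable papers, the example below samples items from a question bank and shuffles the options for each candidate. The question bank, the seeding by candidate ID, and the function names are illustrative assumptions, not features of any particular CAA package.

```python
import random

# Hypothetical question bank; a real system would also track each item's answer key.
BANK = [
    {"stem": "Choose the correct past form of 'go'.", "options": ["went", "goed", "gone", "goes"]},
    {"stem": "Pick the word that fits: '__ hour ago'.", "options": ["an", "a", "the", "this"]},
    {"stem": "Select the closest synonym of 'rapid'.", "options": ["quick", "slow", "late", "calm"]},
]

def build_paper(candidate_id, n_questions=2):
    """Return a candidate-specific paper: sampled questions, shuffled option order."""
    rng = random.Random(candidate_id)  # seeded so the same paper can be regenerated for review
    paper = []
    for item in rng.sample(BANK, n_questions):
        options = item["options"][:]
        rng.shuffle(options)
        paper.append({"stem": item["stem"], "options": options})
    return paper

# Candidates at adjoining desks receive different question and option orderings.
print(build_paper("student-001"))
print(build_paper("student-002"))
```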
Reasons for Using E-Assessment
Bull and McKenna (2004, p. 3) suggest a number of reasons why academics may wish to use CAA:
1. To increase the frequency of assessment, thereby motivating students to learn and encouraging them to practice skills.
2. To broaden the range of knowledge assessed.
3. To increase feedback to students and lecturers.
4. To extend the range of assessment methods.
5. To increase objectivity and consistency.
6. To decrease marking loads.
7. To aid administrative efficiency.
Nicol and Macfarlane-Dick (2005; 2004) identified from the research literature seven principles of good feedback practice that could support learner self-regulation, that is, active control by students of some aspects of their own learning. Nicol and Milligan (2006) have taken this further to show how e-assessment can support these seven principles by providing timely feedback; opportunities for re-assessment and continuous formative assessment to support students' self-esteem; statistics to help tutors evaluate the effectiveness of the assessment (questions answered very poorly can be re-examined in case they were poorly specified); and timely information to help tutors shape their teaching.
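One concrete form of the 'statistics to help tutors' mentioned above is a per-question facility index (the proportion of candidates answering the question correctly); items with very low values can be flagged for review in case they were poorly specified. The sketch below uses made-up response data and a 0.3 threshold chosen purely for illustration.

```python
# Minimal sketch: flag questions with a very low facility index for tutor review.

def facility_index(responses_per_item):
    """responses_per_item: dict mapping item id -> list of 0/1 scores across candidates."""
    return {item: sum(scores) / len(scores) for item, scores in responses_per_item.items()}

# Invented response matrix for six candidates.
responses = {
    "Q1": [1, 1, 1, 0, 1, 1],  # most candidates answered correctly
    "Q2": [0, 0, 1, 0, 0, 0],  # very few correct: re-examine in case the item was poorly specified
}

for item, p in facility_index(responses).items():
    status = "flag for review" if p < 0.3 else "ok"
    print(f"{item}: facility {p:.2f} ({status})")
```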
Peer assessment is attractive for a number of reasons. (Topping's 1998 review demonstrated that it is associated with gains on conventional performance measures in higher education.)
Students can be asked to create far more pieces of work than could be marked by a single tutor. It
can avoid the problem that as a class size gets bigger, the load on the tutor increases directly,
along with the time taken to provide feedback to students. Students must understand criteria for
assessment, and must acquire a range of higher-order skills, such as abstracting ideas, detecting
errors and misconceptions, critiquing and suggesting improvements.
Assessment and Education
Assessment is central to the practice of education. For students, good performance on 'high-stakes'
assessment gives access to further educational opportunities and employment. For teachers
and schools, it provides evidence of success as individuals and organizations. Assessment systems
are used to measure individual and organizational success, and so can have a profound driving
influence on the systems they were designed to serve.
There is an intimate association between teaching, learning and assessment. Robitaille et al. (1993) distinguish three components of the curriculum: the intended curriculum (set out in policy statements), the implemented curriculum (which can only be known by studying classroom practices) and the attained curriculum (which is what students can do at the end of a course of study). The links between these three aspects of the curriculum are not straightforward. The 'top-down' ambitions of some policy makers are hostage to a number of other factors. The assessment system (its tests and scoring guides) provides a far clearer definition of what is to be learned than does any verbal description (and perhaps provides the only clear definition), and so is a far better basis for curriculum planning at classroom level than are grand statements of educational ambitions. Teachers' values and competences also mediate policy and attainment; however, the assessment system is the most potent driver of classroom practice.
Multiple-choice testing (MCT) is a well-established technology, particularly well suited to assessing declarative knowledge ("knowing that") in well-defined domains. Developing tasks to identify student misconceptions is also possible. It is harder to assess procedural knowledge ("knowing how"). MCT are unsuited to eliciting student explanations, or other open responses. MCT have the great advantage that they can be very cheap to create and use. Some of this cheapness is illusory, because the costs of designing good items can be high. Over-use of MCT can be very expensive if it leads to a
distortion of the curriculum in favor of atomized declarative knowledge, divorced from the conceptual structures that students can use to act effectively on the world. MCT are used extensively for high-stakes assessment, and are presented increasingly via the web. For example, web-based high-stakes tests are now available; the Graduate Record Examination (GRE), used by many colleges to determine access to graduate school, is available online.
Creating More Authentic Paper and Pencil Tests
It makes sense to allow students access, during testing, to the tools they use in class, such as word processors, and that professionals use at work, such as graphing tools and modeling packages. It makes no sense at all to always forbid students to use the 'tools of the trade' when being assessed. E-learning changes the nature of the skills required. E-assessment allows examiners to focus more on conceptual understanding of what needs to be done to solve problems, and less on telling students what to do and then assessing their competence in using the manual techniques required to get the answer.
A complete reliance on paper-based assessment has a number of drawbacks. The first is that such assessments are increasingly 'inauthentic' as classroom and professional practices embrace ICT. The second is that such assessments constrain progress, and have a negative effect on students who have to learn (just for the exam) how to do things on paper that are done far more effectively with ICT. A third major constraint is that current innovative suggestions for curriculum reform, which rely on student portfolios for their implementation, will be impossible to manage on a large scale without extensive use of ICT.
E-assessment is a stimulus for rethinking the whole curriculum, as well as all current assessment
systems. E-assessment provides a cost-effective way to integrate high quality portfolio assessment
with externally set and marked tests, in any combination. This makes it likely that there will be
significant changes in the structure of summative assessments, because of the range of student
attainments that can now be assessed reliably. There is likely to be extensive use of teacher assessment of those aspects of performance best judged by humans (including extended pieces of work assembled into portfolios), and more extensive use of on-demand tests of those aspects of performance which can be done easily, or best, by computer.
CONCLUSION
This article has not fully reflected the potential pitfalls of applying e-assessment in the present-day educational system of Iran. However, a single study cannot unravel the different effects of e-assessment on the educational system at the micro and macro levels; in fact, it is very difficult on the basis of this review to establish exactly which factors actually impede the applicability of e-assessment in Iran. The major dilemma ahead of practitioners is the inauthentic nature of e-assessment. Recent communicative approaches to testing have advocated the use of authentic tests as reliable predictors of testees' future performance in academic and non-academic situations. Bachman (1990) recognized two different approaches to authenticity: the real-life (RL) approach and the interactional-ability (IA) approach. E-assessment is more at odds with the RL approach, which intends to design test tasks that are situationally as similar as possible to real-life, non-test tasks.
Testers need to make up their minds and determine whether e-assessment should be
emphasized at the expense of authenticity.
Another challenge in EFL contexts is the applicability of e-assessment in terms of facilities
available in institutions, educational policies, and its face validity. According to Bachman and
Palmer (1996), total abandonment of any of the test qualities is not legitimate. The testers, instead,
need to maximize the overall usefulness of the test. E-assessment is, of course, no exception.
REFERENCES
American Association for Higher Education (2001). Electronic portfolios: Emerging practices for
students, faculty and institutions. Retrieved 2/28/06 from:
http://aahe.ital.utexas.edu/electronicportfolios/index.html
Audette, B. (2005). Beyond curriculum alignment: How one high school is using student
assessment data to drive curriculum and instruction decision making. Retrieved from:
http://www.nesinc.com/PDFs/2005_07Audette.pdf
Bachman, L. F. (1990). Fundamental considerations in language testing. Oxford: Oxford University Press.
Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice. Oxford: Oxford University Press.
Brosnan, M. (1999). Computer anxiety in students: Should computer−based assessment be used
at all? In Brown, S., Race, P. and Bull, J. (1999) (Eds), Computer−assisted assessment in
higher education. London: Kogan−Page.
Brown, G., Bull, J., & Pendlebury, M. (1997). Assessing student learning in higher
education. London: Routledge.
Bull, J., & McKenna, C. (2004). Blueprint for Computer−assisted Assessment. London:
Routledge Falmer.
Carroll, J. (2002). A Handbook for Deterring Plagiarism in Higher Education. Oxford: Oxford
Centre for Staff and Learning Development.
Chun, M. (2002). Looking where the light is better: A review of the literature on assessing higher
education quality. Peer Review. Winter/ Spring.
Clariana, R. B., & Wallace, P.E. (2002). Paper−based versus computer−based assessment: Key
factors associated with the test mode effect. British Journal of Educational Technology, 33
(5), 595−904.
Cooper, T. (1999). Portfolio assessment: A guide for lecturers, teachers, and course designers.
Perth: Praxis Education.
Dietal, R. J., Herman, J. L., & Knuth, R. A. (1991). What does research say about assessment?
North Central Regional Educational Laboratory. Retrieved 3/27/06 from:
http://www.ncrel.org/sdrs/areas/stw_esys/4assess.htm
Dodge, B., & Pickette, N. (2001). Rubrics for Web lessons. Retrieved 2/12/06 from:
http://edweb.sdsu.edu/webquest/rubrics/weblessons.htm
Ewell, P., & Steen, L. A. (2006). The four A’s: Accountability, accreditation, assessment, and
articulation. The Mathematical Association of America. Retrieved 3/13/06 from:
http://www.maa.org/features/fouras.html
Gipps, C. (2003). Should universities adopt ICT-based assessment? Exchange, Spring 2003, 26-27.
Haken, M. (2006). Closing the loop-learning from assessment. Presentation made at the
University of Maryland Eastern Shore Assessment Workshop. Princess Anne: MD.
Harich, K., Fraser, L., & Norby, J. (2005). Taking the time to do it right. In K. Martell & T.
Calderon, Assessment of student learning in business schools: Best practices each step of the
way, 1( 2), 119-137. Tallahassee, Florida: Association for Institutional Research.
Hersh, R. (2004). Assessment and accountability: Unveiling value added assessment in higher education. A presentation to the AAHE National Assessment Conference, June 15, 2004. Denver, Colorado.
Higher Education Funding Council for England (2005). About us: History. Retrieved from: http://www.hefce.ac.uk/aboutus/history.htm
Kellough, R.D., & Kellough, N.G. (1999). Secondary school teaching: A guide to methods and
resources; planning for competence. Upper Saddle River, New Jersey: Prentice Hall.
Kruger, D. W., & Heisser, M. L. (1987). Student outcomes assessment: What institutions stand to
gain. In D.F. Halpern (Ed.), Student outcomes assessment: What institutions stand to gain?
New Directions for Higher Education (pp. 45-56). San Francisco: Jossey-Bass.
Linn, R. (2002). Assessment and accountability. Educational Researcher, 29 (2), 4-16.
Love, T., & Cooper, T. (2004). Designing online information systems for portfolio-based
assessment: Design criteria and heuristics. Journal of Information Technology Education, 3,
65-81. Available at http://jite.org/documents/Vol3/v3p065-081-127.pdf
Macdonald & Twining (2002). Assessing activity-based learning for a networked course. British
Journal of Educational Technology, 33 (5), 605−620.
Mackenzie, D. (2003). Assessment for E-learning: what are the features of an ideal E-assessment
system? In Christie, J. (Ed.) Seventh International Computer Assisted Assessment (CAA)
Conference Proceedings, Loughborough University, July 2003.
http://www.caaconference.com/.
Martell, K., & Calderon, T. (2005). Assessment of student learning in business schools: What it is,
where we are, and where we need to go next. In K. Martell and T. Calderon, Assessment of
student learning in business schools: Best practices each step of the way, 1(1), 1-22.
Tallahassee, Florida: Association for Institutional Research.
Nicol, D. J., & Milligan, C. (2006). Rethinking technology-supported assessment in terms of the
seven principles of good feedback practice. In Bryan, C. and Clegg, K. Innovative Assessment
in Higher Education, London: Routledge, Taylor and Francis Group.
Noyes, J. M., Garland, K. J., & Robbins, E. (2004). Paper−based versus computer−based
assessment − is workload another test mode effect? British Journal of Educational
Technology 35(1), 111−113.
O'Hare, D., & Mackenzie, D. M. (2004). Advances in Computer Aided Assessment. Staff and
educational Development Association Ltd, Birmingham, UK.
Old Dominion University. (2006). The history of assessment at Old Dominion University.
Retrieved from: http://www.odu.edu/webroot/orgs/ao/assessment.nsf/pages/history_page
Pearson, D., Vyas, S., Sensale, L. M., & Kim, Y. (2001). Making our way through the assessment
and accountability maze: Where do we go now? The Clearing House, 74(4), 175-191.
Pellegrino, J., Chudowsky, N., & Glaser, R. (2001). Knowing What Students Know: The Science
and Design of Educational Assessment. In N. R. C. Center for Education (Ed.). Washington,
D.C.: National Academy Press.
Petkov, D., & Petkova, O. (2006). Development of scoring rubrics for IS projects as an
assessment tool. Issues in Informing Science and Information Technology Education, 3, 499-510. Available at http://informingscience.org/proceedings/InSITE2006/IISITPetk214.pdf
Popper, E. (2005). Learning goals: The foundation of curriculum development and assessment. In
K. Martell & T. Calderon, Assessment of student learning in business schools: Best practices
each step of the way 1(2), 1-23. Tallahassee, Florida: Association for Institutional Research.
Ricketts, C., & Wilks, S. J. (2002). Improving student performance through computer−based
assessment: insights from recent research. Assessment and Evaluation in Higher Education,
27 (5), 475−479.
Rovai, A. P. (2000). 'Online and traditional assessments: What is the difference?' The Internet and
Higher Education, 3(3), 141−151.
Rovai, A.P. (2004). A constructivist approach to online college learning. Internet and Higher
Education, 7(2), 79–93.
Sim, G., Holifield, P., & Brown, M. (2004). Implementation of computer assisted assessment:
lessons from the literature. ALT−J, Research in Learning Technology, 12 (3), 216−229.
Urciuoli, B. (2005). The language of higher education assessment: Legislative concerns in a
global context. Indiana Journal of Global Legal Studies, 12 (1), 183-204.
Walcott, S. (2005). Assessment of critical thinking. In K. Martell & T. Calderon, Assessment of
student learning in business schools: Best practices each step of the way, 1(1), 130-155.
Tallahassee, Florida: Association for Institutional Research.
Walvoord, B. E., & Anderson, V. J. (1998). Effective grading: A tool for learning and assessment.
San Francisco: Jossey-Bass.
Weller, M. (2002). Assessment issues on a Web-based course. Assessment & Evaluation in
Higher Education, 27(2), 108-1099.
Wiggins, G. (1990). The case for authentic assessment. Practical Assessment Research and
Evaluation, 2 (2).
Wright, B. (2004, October 1). An assessment planning primer: Getting started at your own
institution. Presentation made at the 13th Annual Northeast Regional Teachi
