Teaching and Learning > DISCOURSE

Diversifying Assessment 1: Essays and Examinations in Undergraduate History of Science

Authors: Louise Jarvis and Joe Cain


Journal Title: PRS-LTSN Journal

Volume: 2

Number: 1

Start page: 24

End page: 57



1. Introduction

Assessment in undergraduate history of science courses relies heavily on set essays and final examinations. While these are useful for some developmental and assessment purposes, neither is an all-purpose tool. Most important, they concentrate attention on some learning processes but ignore others. If students do nothing but sit final examinations and write essays on set questions, some key and subject-specific skills may never be developed. Neither tool fares well in campaigns to shift from passive to active learning environments or from summative to formative assessment. Tutors tend to choose them for their familiarity rather than for their appropriateness within specific learning and teaching contexts (Knight and Edwards, 1995: 11).

Tutors often forget that most students direct their learning in courses primarily according to mandatory assessment tasks. This fact places a tutor’s decisions about assessment at the heart of learning within courses. It teaches a harsh lesson: innovative teaching efforts are unlikely to succeed unless they are attached to assessment credit.

The project underlying this paper focused on a challenge to diversify assessment in our speciality within the practical constraints of an operating BSc degree programme in history and philosophy of science. We knew this was well-trodden ground in the education literature. Rather than re-invent the wheel, we undertook an extensive survey and synthesis project on alternative assessment techniques. In collating the material we collected, we identified practical advice on the design, implementation, and likely problems for the specific tools we might introduce into our overall assessment strategy.

This paper, the first in a series, considers two standard assessment tools: set essays and final examinations. What are their strengths and weaknesses? What kinds of adjustments might be introduced to improve diversification? What benefits can diversification bring?

2. Assessment built around learning objectives

Theories of assessment—why it should take place, how it guides knowledge acquisition, what forms it might take, and where it should be located within learning programmes—are reviewed authoritatively by Bloom, Hastings, and Madaus (1971), Brown and Knight (1994), and Brown, Bull and Pendlebury (1997). The LTSN Generic Centre provides a useful pamphlet series on assessment, including guides for building departmental assessment strategies (Mutch and Brown, 2001), for individual tutors (Brown, 2001), and for developing assessment portfolios (Baume, 2001). When considering the utility of new assessment tools, several concerns came to the foreground.

First, at the heart of any degree programme or course structure should be a set of learning objectives. Setting these objectives requires a fundamental shift in emphasis (Table 1). Move away from the activity itself. Think about why you’re assigning a task in the first place. Focus on the central aims you want to pursue. Course aims tend to combine subject-specific goals and degree programme goals to produce a mixture of aims involving (1) content, (2) method, and (3) key skills (Gooday, 2002). Courses also set aims with respect to particular cognitive skills such as description, analysis, and synthesis (Biggs, 1999). Ideally, courses co-ordinate their aims within an overall programme and do so in a progressive fashion.

Table 1: Shifting from activity to objectives

Activity:    Write an essay. Read sources on reading list.
Aims:        Develop critical thinking.
Objectives:  Compare and contrast X with Y. Locate and evaluate the thesis in X. Construct an argument in favour of X.

A distinction between aims and objectives is key. Aims state goals. Objectives translate goals into demonstrable outcomes by translating mental states into observable actions. This shift to operational thinking is crucial when choosing assessment tools (e.g., Beard and Hartley, 1984; Allan, 1996; Biggs, 1999; Brown, 2001). Explicit objectives are crucial for designing the tasks that make up an assessment programme. They help students understand what an assignment asks them to do. In the example in Table 1, a variety of observable tasks can be assigned to demonstrate the stated aims.

Aims and objectives should guide decisions about assessment activities, not the reverse. Set essays and final examinations should be abandoned where they prove poorly suited for the chosen combination of aims and objectives, or where they seem less well suited than alternatives.

Measuring the value of assessment tools requires distinguishing between two types of information that assessment can provide. Summative assessment provides one type of information, focusing on final tests of ability. For the tutor, summative assessment provides a measure for the overall extent of learning achieved by a student at the conclusion of a course or course unit. A final examination, set at the end of a term or year, is the classic example of summative assessment. This type of assessment has no expressed function other than to indicate a student’s ultimate learning achievement. Marking summative assessment requires little more than the production of grades and justifying comments. These leave little room for negotiation or constructive feedback to students.

Conversely, formative assessment monitors progress towards objectives. It is meant to diagnose relative strengths and weaknesses in this progress, assisting both tutor and student in directing future effort. As a developmental tool, formative assessment should involve the swift return of comments. These may originate from the students themselves, their peers, or the tutor. Commentaries may be extensive or limited, but they always should provide constructive steps ahead and be focused on the stated objectives.

In comparison, formative and summative assessment provide different information about the student’s skills in relation to the learning objectives for a course. The choice between them depends on many factors. There is no reason why the two functions cannot be combined within a progressive sequence (e.g., when a summatively assessed project in one course provides formative value for a later course). Importantly, tutors commonly confuse the two sets of demands, particularly in marking essays and providing feedback (Ivanic, et al., 2000; Lea and Street, 2000).

Second, measuring the value of assessment tools involves special considerations peculiar to the specific teaching environment. For example, some courses provide a service role related to specific key skills. Alternatively, local resources (e.g., museum and galleries) might be especially well suited for use. Tutors might operate within specific constraints (e.g., large numbers of students, lack of prerequisites, short timetables, and so on), or they might seek to implement specific policies (such as breaking student routine with assessment, moving students out of their comfort zones, increasing student fairness, or promoting more imagination and key skills). These considerations both constrain and enable the types of assessment that might be applied in any particular situation.

Course design should be governed by decisions about learning objectives. These objectives also should guide the choice of assessment tool. Assessment can be designed either to build towards outcomes that satisfy these objectives or to test proficiency after an appropriate period of training. Without clearly defined objectives, assessment is activity without purpose. In this context, merely shifting the method of assessment does nothing to clarify its purpose. Moreover, such a shift may cause more harm than good. Some methods are poor means for accomplishing certain ends.

3. Set essays

This section considers the strengths and weaknesses of set essays as an assessment tool. It also considers ways to adjust their standard design to broaden their overall value in assessment systems. In section 4, we consider alternative writing projects.

Set essays normally have two formats. Targeted essays ask the student to answer a specific question or consider a single narrow thesis in an essay of set length. A list of core readings often accompanies the task. Open-ended essays ask the student to create their own topic within loose confines, writing to a specific length.

Benefits: The QAA benchmark statement for history emphasises reading and writing skills as central to learning the subject.

History is largely a text-based discipline which requires students to learn to read widely, rapidly and critically, to take good notes, to digest arguments and to synthesise information quickly and intelligently. It also requires them to construct arguments in writing. (Fletcher, et al., 2000: 5)

The statement is emphatic. “We recommend that all single-honours students should be assessed in significant part on their essay-writing skills.” (Fletcher, et al., 2000: 6) No matter how assessment is designed, essay writing should remain “a central component” in the training of historians.

Writing promotes active learning and increases engagement with the materials examined. It helps students develop a sense of voice as well as a sense of structure for both narrative and argument. Fletcher, et al (2000: 6) promote essays because they “require students to demonstrate a number of skills in combination” and develop “integrative high-order skills” (such as analysis and synthesis, as ranked by Biggs, 1999).

History of science courses often function in service roles to degree programmes in the sciences. In such contexts, writing assignments may present the only time within a degree that students are challenged to produce narrative or analysis in formats other than formalised laboratory reports. Composition and written expression become part of the key skills service these courses offer.

Targeted essays allow the student to focus specifically on reading and writing skills—though this rarely is made explicit as an aim of the project. This focus is especially useful in early stages of a degree and in courses where research skills are not expected to be well developed. Moreover, targeted essays simplify time management. When inclusive reading lists are provided (meaning the tutor expects the students to read no other materials), the student need not budget time for research into the total time they allow for their work. This opens opportunities for explicit skill development focusing on reading skills. Set reading lists also create a relatively uniform foundation of knowledge. This aids peer assessment and assessment by less knowledgeable assistants.

Open-ended essays normally aim to promote problem formulation and develop research skills. Because they assume the students already have some sense of the overall subject, these are more appropriate at intermediate and advanced levels of undergraduate work. Open-ended essays allow students to approach a topic creatively and make enquiries relevant to their own interests. They promote independent learning. Project choices also may augment other course work within a degree.

Recommendations and Implementation: Objectives underlying essays normally focus on research, reading, reasoning, and writing skills. Developing these skills requires training and explicit attention. Students should not be left simply to “get on with it”. Courses should use or create a tutorial process to support the skills tutors expect students to demonstrate.

On research skills, Gash (2000) provides broad coverage on literature searching. Students also may benefit from a targeted review by the tutor of local resources (especially electronic databases) and reference guides, such as the Isis Cumulative Bibliography and the Dictionary of Scientific Biography. Do not assume students know even the most standard resources. Many academic libraries offer tutorials for general and subject-specific research skills. They also often offer advice on-line (such as Engle, 2001) and in reference documentation. Courses in subject-specific research methods are common in post-graduate programmes, with details available via course Web sites (key search terms are “research methods history”). The Internet provides many portals for historical research, such as Smith and Smith (2002), though these usually offer few resources immediately relevant for projects. Meta-search engines, such as google.com, can be more direct and up-to-date, though they require considerable sifting and training in effective search techniques. Hart (1998) provides an overview of the research process as it leads to an analysis of relevant literature for a project.

On reading skills, degree programmes should encourage active and critical reading, balancing breadth and depth. But any one course cannot do everything, and tutors need to make choices about the skills they aim to develop. Fairbairn and Fairbairn (2001) provide an exceptionally useful guide to reading skills overall. It is designed for student use. Northedge (1990) combines reading and note-taking skills. Booth, Colomb and Williams (1995) is useful for intermediate students refreshing their skills. Rael (2000) provides a basic on-line guide to the combined process.

Reasoning skills focus on critical thinking (Thomson, 1996). When reading, this involves skills such as locating a thesis, following an argument, and weighing different forms of evidence. When writing, this involves structuring texts to present a clear thesis and evidence within a sound argument. Fairbairn and Winch (1996) bring these three skill sets —reading, writing, and reasoning—together. Kuhn, Weinstock, and Flaton (1994) consider historical reasoning in terms of theory-evidence coordination. Voss, et al. (1994) demonstrates typical causal reasoning in history in a case study on the collapse of the Soviet Union. Hounsell (2000) distinguishes these essay tasks as argument, viewpoint, and arrangement. He encourages tutors to provide opportunities for students to develop each style.

On writing skills, a single course can focus on the complete writing process, or it can concentrate on particular refinements (such as organisation, citation, voice, punctuation, and so on). Many guides for essay writing describe and identify good practice. Crème and Lea (1997) is extremely useful and written for students. It extends the short treatment in Fairbairn and Winch (1996). Pirie (1985) offers additional support. Strunk and White (1979) is a classic for its conciseness. More advanced writers can profit from the advice manual for civil servants (Gowers, 1986). Specialised needs are served by style manuals, such as the Chicago manual of style or the MLA handbook for writers of research papers. To help develop their voice, students can be given an audience.

Tutors should promote a culture of improvement for writing skills. Revision is a key aspect of writing. Tutors can dedicate some of their feedback to particular features of composition and exposition. In a formative setting, substantial failings can be identified, and the student should be asked for a revision. In a summative setting, expectations should be noted in criteria for assessment and failings should be reflected in final marks.

Targeted essays should be set with specific expectations regarding reading lists. If a list is provided, tutors should make their expectations clear regarding the need for additional research. Criteria for assessment should identify the cognitive skills the tutor expects to observe. These might be prioritised. Students often benefit from studying a model essay produced for an analogous assignment. Peer assessment can be instructive both to students and tutors (Cheng and Warren, 2000; Race, 2001). If specific writing refinements are chosen as aims in a course, guidance should be set to these ends. Length should be prescribed and justified to students. For example, a short essay places a premium on space and forces students to prioritise. A longer essay increases expectations for elaboration and development.

Open-ended essays promote wide exploration and research. Tutors should make their expectations clear regarding collection and sifting of research material. Students may be tempted to think that more is better, i.e., longer bibliographies automatically receive higher marks. Including a formative stage for the production of working bibliographies (e.g., Kies, 2002) or including an annotated bibliography (e.g., Engle, et al., 1998; Sexty, 1999) in the final submission helps focus the research process and provides the tutor with evidence of accomplishment. This also provides students with a sense of credit for work done regardless of what appears in the final submission. Research notebooks are good practice in the sciences (e.g., UFl OTL, 2001) but most on-line guidance seems hard to apply towards the research of typical historical projects (an exception is Davis, 1998).

Open-ended essays also require more supervision and project management. Many students find themselves lost when defining projects and settle for those that seem easy rather than those that are interesting or challenging. Others produce projects by foraging through library stacks or Internet sites. Others have difficulty narrowing the scope of a project to a specific and do-able range within the provided time. Clear objectives and expectations are important for guidance. Class time dedicated to project definition helps students focus. Sample or model papers prove useful as guides.

Formative assessment—whether through self, peer, or tutor approaches—offers a mechanism for corrective action in open-ended essays. Bell (1987) provides guidance on time and project management for students. Larger essays can be divided into progressive stages of development with components assessed formatively.

Potential problems: Targeted essays are frequent objects for plagiarism, whether through essay writing services, foraging through printed and Internet material, or inheritance from previous generations of students. Stefani and Carroll (2001) provide a briefing on plagiarism, plus a useful bibliography for tutors. Carroll and Appleton (2001) provide excellent practical advice. Wilson (2000) supplements coverage and is especially useful for Internet issues. Strategies to minimise overt plagiarism include: regular changes to the list of set essays (so students in one or two subsequent years cannot inherit past work), checks of random samples of essays from each assignment against past papers and obvious sources (the existence of these checks should be advertised to students), and a quick check of all essays using a meta-search engine on the Internet (such as google.com). Students should be asked to retain their drafts, notes, and working bibliographies for evidence in case suspicions arise.

Open-ended essays should receive the same treatment. In these cases, confusion over appropriate use of sources (such as the difference between quoting and paraphrasing) and sloppiness when writing from notes seem to be frequent causes of inadvertent plagiarism. Tutors should make a point of discussing plagiarism concerns because students show considerable confusion about the boundaries between use and abuse of sources. Proactive work, peer assessment of sample cases of ambiguity (e.g., Northedge, 1990: 149–152) and a discussion of frequent causes of inadvertent problems (e.g., Cain, 2000) proves more effective than simply listing rules and regulations or threatening harsh penalties.

In a study of the comments tutors provide as essay feedback, Lea and Street (2000) argue tutors often conflate the aims of targeted and open-ended essays when choosing criteria for assessment. Likewise, student expectations of supervisors tend to differ from tutor expectations of their obligations when it comes to supervision and advice (Hampson, 1994; Phillips and Pugh, 1994).

4. Other types of writing

Writing assignments can vary in format, style and length. These pieces may include an element of role-play or develop skills other than those fostered by a standard essay. Designating specific audiences can focus writing projects in particular courses. The length of the written product can be extended to involve more detailed or synthetic work. In addition to the classic term paper, these might include policy reports, guides to materials for future researchers, research proposals, aids for lectures, and so on. Alternatively, writing assignments may be narrowed to focus on specific skills. These types may include newspaper articles or letters, book reviews (or exhibit or media reviews) for journals or other formats, synopses or executive briefs, and so on. Creative alternatives include the production of imagined communications between contemporary historic figures or the construction of journals or diaries in the voices of relevant historical actors (e.g., Chang, 2002). Student writing need not appear in finished and refined forms to develop the relevant skills. Personal journals and reading logs, for example, document creative and critical thinking (several examples are provided in Rusnock, 1999).

Benefits: Aims and objectives commonly chosen for set essays can be accomplished through writing projects of many kinds. Moreover, different writing formats allow tutors to concentrate on specific skills or learning outcomes. For example, evaluation skills can be demonstrated just as clearly in a 1,000 word book review, written so it could appear in an academic journal, as they can be in a routine 2,500 word targeted essay. Indeed, the reduced word count forces students to set priorities and to keep their writing focused. It also gives them more time to think about their actual presentation. A specific audience for the writing can improve the student’s sense of voice and direction. Fewer words per essay also reduce the overall volume of material a tutor must read while marking.

The familiarity of set essays seems to be the key factor limiting a tutor’s choice regarding types of writing assignments (Knight and Edwards, 1995: 11). The QAA benchmark statement for history recommends more than set essays in its discussion of assessment:

Students should be expected to undertake a wide range of assignments (such as seminar and group presentations, reports, reviews, gobbets or document papers, essays of varying lengths, C & IT projects, dissertations). It should be explained to students how such assignments enable them to improve their writing and oral-communication skills, as well as those of evidence-handling, the critical treatment of themes/historical arguments and the thoughtful, persuasive presentation of their work.
…We recommend that all departments should give serious consideration to the provision of opportunity for single-honours students to be assessed by essays of various types (as, for example, ‘long’ essays reflecting depth of scholarship, ‘short’ essays requiring precision of focus; essays focusing on different historical concepts - change, cause, similarity and difference etc.; essays written to a target length and essays written to time). (Fletcher, et al., 2000: 5–6)

Lea and Stierer (2000) consider the wider role of writing skills in higher education and the relation between writing projects and various forms of academic literacy.

Alternative formats for writing assignments also break the routine of set essays and challenge writing skills. This prevents over-specialisation by students, thus increasing the reliability of assignments as tests.

Lea and Street (2000) discuss student ability to monitor tutor expectations and adjust their skills accordingly. This raises concerns over fairness in a routine of set essays (as students unfamiliar with a tutor are disadvantaged) and argues for diversity of format. A successful newspaper article, for instance, requires different skills and writing structures than a targeted essay. An extended essay develops some skills; an argument outline develops others.

Shifting formats can bring expectations into better focus and reduce confusion. Students are slow to appreciate the changing expectations of writing assignments as they progress from introductory to intermediate and advanced courses. This is especially true when assignments over many courses use the same basic format and describe assignments using the same terms. Tutors often have difficulty describing their differing expectations for writing when it appears in different settings or different stages of the degree programme (Ivanic, et al., 2000).

Writing in some formats, especially when role-playing, can give students valuable new perspectives on course material and promote active learning. It promotes empathy, which the QAA benchmark statement encourages as part of the historian’s “quality of mind” (Fletcher, et al., 2000: 3). Some evidence suggests this develops deeper learning of course content. Framing their knowledge in new ways encourages students to evaluate the broader importance of their material and to consider how knowledge acquired in an academic setting links to audiences outside their institution.

Recommendations and Implementations: When diversifying writing assignments, tutors should not simply select a new format at random. Set specific aims and objectives, and then select a writing structure that is well suited to them. Miller, Imbrie, and Cox (1998), for instance, distinguish several types of essays based around objectives.

Hounsell (2000) suggests many other types. Ideally, objectives for writing assignments should be tailored not only to the course objectives but also to the course’s place within the relevant degree and the broader programme of skills development.

Tutors will find a great deal of information available on different writing genres. This advice is useful for focusing expectations within an assignment’s aims and objectives. Explicit guidance and support are vital when new genres are introduced (Macintosh, 1974). Students also should be encouraged to consider and even research demands of different genres. Displaying models and discussing the project in briefing sessions will provide useful benchmarks. Students can be asked to peer assess samples of a new genre (Race and Brown, 1993). For short writing projects, criteria for assessment should make clear what specific skills are the particular focus of assessment. Criteria for large projects should prioritise skills within the wide range of those a student will put to use. These criteria also should make clear overall expectations regarding the balance of breadth versus depth, analysis versus synthesis, and so on.

On student writing, Henry (1994) presents an overview of project work—oriented towards projects with written submissions such as literature reviews, information searches, empirical research such as case studies, and design projects—and is especially good for helping tutors appreciate broader pedagogical scope for this sort of work. Turk and Kirkman (1989) and McMurrey (2002, also on-line) provide general guides for writing for specialty purposes, such as instructions, proposals, explanations, letters, minutes, and examinations. Notes for guidance from granting agencies are useful for proposal projects. Many of these are available on-line—e.g., (US) National Science Foundation (NSF, 2001), (UK) Arts and Humanities Research Board (AHRB, 2002), and the Royal Society (2002).

For journalistic formats, Mencher (1998) is a standard general textbook for training journalists and provides guidance for tutors constructing objectives and tutorials around such projects. Mencher (1999) focuses on news journalism; this is a classic text. Organisations such as the Poynter Institute (poynter.org) and the Writing Program at The Providence Journal (projo.com/words/) support journalism training with extensive on-line support. Many secondary school sample lesson plans for newspaper writing and story editing can be found on Internet sites and co-opted for introductory courses (keyword search “how to write a newspaper article”). Dick (1989) treats writing for magazines. Henning (2000) and Neilsen (1995-2002) treat writing for Web sites. An easily accessible source for basic elements of writing news articles is DFP (1997), which also considers other journalistic formats and offers advice on specific types of news stories.

Large writing projects are suitable for group work (Thorley and Gregory, 1994; Hunter, et al., 1996; Jaques, 2000; Nicholson and Ellis, 2000) and for combination with oral presentations or posters. They also allow a strong element of self-design by the student, thus fostering skills of decision-making, design, planning, time management, and creative problem solving (Macintosh, 1974; Clift and Imrie, 1981; Brown, et al., 1997).

Potential problems: The design of longer running projects must be undertaken with care to avoid overloading the student with demands or over-running the course calendar. Brown, Bull and Pendlebury (1997) advise on encouraging self-management and the timetabling of longer-term projects. Macintosh (1974: 107) offers advice on various design plans for project work that allow for different levels of student autonomy and staff input. Though writing projects can focus on many elements of the research, reading, reasoning, and writing processes, each assignment should focus on a specific and limited number of skills. The remainder should be left to other courses in the degree sequence. Progression throughout the degree may involve sequential development of skills (one after another), or ever more inclusive sets of skills.

Supervision of writing projects can place heavy demands on tutors. Self and peer assessment programmes reduce this burden, especially in formative stages. They also provide regular benchmarks for monitoring progress. Regular availability of tutors for consultation is important (Race and Brown, 1993), and students should be encouraged to take an active role managing their needs (Phillips and Pugh, 1994: 93–112). Students undertaking longer-term projects normally undergo a cycle of psychological states, and explicit attention to these can be a part of the supervision process (Phillips and Pugh, 1994: 72–81). Regular meetings help the student avoid a sense of isolation and help the tutor monitor progress. Criteria for tutor input should be standardised, especially where several members of staff are active in advising students for a single course (Clift and Imrie, 1981).

Students tend to see shorter assignments as less demanding. This leads to both their deferment of effort and a sense that less effort is required. Frank discussions of expectations at the start and formative assessment as the project develops can keep student effort focused on learning objectives and help to promote genuine development of skills.

5. Final examinations

This section considers the strengths and weaknesses of final examinations as an assessment tool. It also considers ways to adjust their standard design to broaden their overall value in assessment systems. In the next section, we consider alternatives.

The standard final examination is a previously unseen, time-constrained, invigilated exam undertaken following the completion of a course or at the end of an academic session. Normally the examination involves tasks in which the student recalls course content, demonstrates their mastery of methodologies developed in a course, or applies the syllabus to novel problems. Normally, students produce written scripts, and the tasks set are not subject to negotiation or reformulation. The final examination is one of the most common forms of assessment in history of science courses (e.g., Steffens, 1992; 2001).

Benefits: Invigilated final examinations provide safeguards against plagiarism and inappropriate outside assistance, and they test a student’s ability to organise and present arguments under time pressure. Fletcher, et al. (2000) make the case directly:

We also recommend that departments give serious consideration to requiring students to write at least some essays under exam conditions which afford safeguards against plagiarism and the use of inappropriate outside assistance. This also gives students the opportunity to develop relevant life-skills such as the ability to produce coherent, reasoned and supported arguments under pressure. (Fletcher, et al., 2000: 6)

Student entry into final examinations can be coded to allow for confidentiality of identity. This provides a sense of protection against favouritism or retribution by the examiners. Marking final examinations can be time-consuming and tedious, but the use of final examinations can reduce the overall demands on tutors for marking and support. Rust (2001) offers suggestions for streamlining this process using standardised forms.

Recommendations and Implementations: Most students will be familiar with the demands of final examinations and have at least some relevant study skills7. Working effectively under examination conditions is a skill that can be continuously developed. Ideally, tutors should help students with examination skills: providing them with opportunities to work through typical examinable tasks under simulated examination conditions, and then offering formative assessment on their performance. Students should not simply be thrown into final exams as though they were rites of passage. Advice for examination preparation is common in student guides. Tracy (2002) is comprehensive and especially useful. Race (1999; 2000), Northedge (1990) and Rowntree (1998) set revision for exams within the wider context of study skills. Goodwin and Bishop (2001) offer on-line advice.

Constructing final examinations is no easy matter. Tutors first need to identify the course objectives an examination is meant to assess. Final examinations are good choices for some assessment purposes but poor choices for others. They offer an efficient means for testing low-level cognitive skills such as memorising, identifying, and describing (Biggs, 1999). They can be used to measure a student’s factual grasp of course content or basic matters of chronology and substance. They also can test middle-level cognitive skills, such as a student’s ability to extract generalisations or to apply concepts and methodologies developed in the course to new information. The pressure of timed examinations tests a student’s ability to process information quickly and to prioritise their ideas. These assets suggest examinations can be useful at early stages of a degree programme but less effective for assessment of higher-level cognitive skills (such as synthesis) or independent research.

Tutors should consult their colleagues and teaching assistants when drafting exams to consider four aspects of their examination scripts. Two aspects are relatively straightforward. First, clarity. Are the instructions explicit and clear? Are the questions direct and understandable? Are questions written so as to allow only one interpretation? Do questions ask for vague, open-ended work (e.g., “consider” or “discuss”) or specific observable actions (e.g., “contrast” or “defend”)? Second, realism. Are the set tasks do-able within the allotted time, within the scope of the syllabus, within what might be reasonable to expect for a course at its particular status within a degree programme?

A third aspect for consultation involves the relationship between the tasks set and the course objectives the examination is expected to assess. Tutors should be able to identify how a specific task provides a means for monitoring the stated objectives. Importantly, final examinations need not monitor all course aims and objectives by themselves; they may form only part of the overall assessment process. (This prescription can release tutors from demanding too much out of an examination paper and their students.) Tasks set in examinations without clear connections to course outcomes either should be reworked or deleted.

A final aspect for consultation involves criteria for assessment. Tutors should be able to identify their expectations for exam responses not only in terms of the narrow context of the set tasks (what constitutes an acceptable or ideal response to the task set) but also for the general context of progress towards the stated course outcomes. Model answers, rubrics and checklists are useful devices for explicitness. These should aim to provide operational definitions for various levels of mastery. Such criteria may seem tedious to construct. However, they provide helpful guidance for students seeking to prioritise their learning, and they offer a lead to students anxious to follow.

Potential problems: Final examinations receive considerable criticism (Fawthrop, 1968; Ellington, et al., 1993; Race, 1995; Bauer, 1997; Fallows and Steven, 2000; more cited in Brown, 2001). Critics argue the standard final examination promotes shallow learning (such as memorising) that tends to be forgotten quickly by students. Tutors rely on final examinations for too many purposes and prefer exams for their relative convenience rather than their educational value. Tutors relying on final examinations also tend to find students poorly engaged in their classrooms. Students can score well on examinations through specialist survival techniques rather than deep understanding of course content or fulfilment of course objectives. As summative assessment, examinations provide little feedback to students and give them little guidance regarding future learning needs or ways they might improve. Final examinations cultivate few skills valued in professional careers. Critics of final examinations focus especially on issues of validity: what precisely is being assessed under examination conditions? Race and Brown (1993) compare tutor and student expectations for examinations. Solutions to these problems can be considered in sequence.

Final examinations may promote shallow learning and short-term retention when they are presented as an unsupported assessment tool detached from the learning process of a course. In some contexts, such as foundation courses, low-level cognitive skills such as memorising are important learning outcomes. In other contexts tutors rely on these skills but emphasise others in their assessment. Lower level cognitive skills have an important role to play in higher education. The criticism seems to be focused on cases where tutors promote nothing more than low-level skills in their courses. Tutors always have the option of setting tasks on examinations that provide only small rewards for low-level skills.

Students memorise as a last resort: when they feel grossly underprepared, when they don’t know what to expect, or when they panic. Tutors can reduce this panic by clearly presenting their expectations and criteria for assessment, making a point of identifying the relative contribution low-level cognitive skills will make to the overall assessment. Where these skills are crucial, tutors can prepare students with long-term attention to the revision process. This can include self and peer assessment of knowledge during tutorials, active learning in lectures, or tutorial support aimed towards identifying key information and providing contexts for its assimilation (such as visual aids, mnemonic devices, and cognitive connections with other material in students’ lives).

The point about long-term attention to the revision process also relates to concerns about examinations shifting the centre of learning within a course. Students experience courses through their assessment and will even abandon the learning they achieve during the course if they don’t anticipate credit for it later in the course (Harvey and Knight, 1996). Assessment by final examination can imply to students that lectures and other class activities have no value other than as they relate to that exam. Hence the dreaded question—“will this be on the exam?”—and the many complaints from tutors that students have low levels of engagement during a course. Final examinations normally sit outside a course’s curriculum, little noticed until they are imminent. Tutors who use a final examination might keep its demands ever-present during class activities and encourage students to consider their work in relation to course objectives. Some course material may provide foundation knowledge for a learning objective. Other material may provide the concepts students will be expected to analyse, or it may develop skills they will be expected to apply in another context.

Brief knowledge and skill tests can be undertaken during the course so students can assess their own degree of mastery. Such additions to course work integrate a tutor’s expectations about assessment into the student’s experience of a course day-to-day. Other tactics include: distributing past exam papers at the start of term and reviewing them periodically during the course, asking students to create examinable tasks (then discussing their value as measures of course outcomes and their plausible responses), issuing sample tasks for examination from time to time during the course, or managing study circles within a course (in which students can teach each other, thus taking active ownership of their learning).

Final examinations also tend to sit outside a course’s curriculum because the feedback students receive is sparse, lacks formative content, and is long delayed. Summative assessment of final exams can be transformed easily into formative roles. Focusing on course objectives, tutors can create rubrics or checklists for examined tasks to provide feedback that is either impressionistic or detailed (Brown, et al., 1994). These can be separated from exam scripts, copied for records, and returned to students. Students wishing quick returns can provide a self-addressed stamped envelope.

Departments can focus on exam performance generally by asking students to self assess their strengths and weaknesses during revision periods. Meeting with a personal tutor, students can compare their assessment with tutor comments and create a plan for skill development. Departmental procedures for annual surveys can direct tuition regarding the examination process. Tutors at the start of a new session can ask returning students to reflect on their strengths and weaknesses with past examinations. This provides a forum for identifying the differences in expectations one year compared with another. The overall aim should be to help students avoid relying on survival techniques and shift their focus to skill development and deep learning of course objectives.

Frequent claims that final exams cultivate few skills valued in professional careers are disputed by Fletcher, et al. (2000: 6), who suggest the pressure of timed and summary performance reflects workplace demands. In some ways performance under pressure is the whole point of harsh examination conditions (though this rarely is a stated course objective!). Regardless, when final examinations are used in combination with other assessment tools, the concern about narrow value is mitigated. This concern also seems to relate to examinations that assess only low-level cognitive skills. Exams can be constructed to engage any number of key skills (in whatever sense of the term, e.g., Griffiths, et al., 1999; Fallows and Steven, 2000; Murphy, 2001). Drawing attention to key skill connections can help students appreciate the relevance of final examinations to their personal development.

Critics of final examinations focus especially on issues of fairness and validity. On fairness, Fletcher, et al. (2000: 6) emphasise the importance of invigilated examinations as a check on plagiarism and unfair assistance from others. By isolating students and setting them to work on previously unseen tasks, tutors are supposed to obtain a measure of that student alone regarding their mastery of course objectives. Thus, final examinations are said to improve fairness because they subject all students to a common measure.

Critics complain that this sense of fairness is too narrow and possibly deceptive. On one level, it assumes students have equal knowledge of what tasks might be set in the examination. Underground trading of past papers and course intelligence is common, but this can be superseded by the tutor distributing documents directly. Open discussion of revision strategy and likely examination tasks can provide a level footing for all students. On another level, some students consistently perform better under examination conditions than others. This selective process takes place even in seemingly trivial aspects of examinations (dense exams favour those who can write quickly or who have a confident knowledge of English; commuting students have more logistic hurdles to distract them on the day of the examination than resident students; students fresh from secondary education are acclimated to examination conditions far more than students returning to higher education later in life). Tutors can design examination tasks to mitigate these factors as much as possible. On a third level, some students specialise in preparing themselves for examination conditions to the exclusion of most other types of skills or learning objectives. They might focus, for instance, on memorising and regurgitation but ignore analysis or comparison. Tutors should emphasise to students the range of skills they will monitor during examinations and set their questions accordingly. Balancing multiple tasks on an exam paper can test all students in both familiar and unfamiliar ways.

Students frequently complain their performance under examination conditions is hampered by factors such as stress and exhaustion. On stress, students can develop anxiety-reducing strategies through on-line advice (e.g., Goodwin and Bishop, 2001) or guides to revision (e.g., Northedge, 1990; Race, 1999; Race, 2000; Tracy, 2002). Many student support or counselling centres at universities offer guidance for reducing examination anxiety (e.g., US CS, 2000; CPSU SAS, 2002). Tutors can help reduce examination anxiety by increasing students’ sense of preparation. First, students tend to develop misconceptions about the examination process and the tasks likely to be set in a particular exam paper. Tutors should make their expectations and criteria for assessment clear. Tutors should identify the objectives monitored by an exam and discuss ways students can demonstrate their proficiency in these areas. Practice under examination conditions and peer assessment of practice work helps clarify expectations.

Exhaustion typically manifests for two main reasons. One is last-minute preparation for the examination itself. Revision requires time management and planning skills, both of which can be considered as a course develops. Course work can have explicit components directed towards assisting revision.

Another reason for exhaustion relates to the demands imposed by the examination itself. Tutors should consider the mental and physical pace they require of students during examination conditions. An exam scheduled for a three-hour period should not require students to write continuously for three hours. Time must be added not only for mental work (e.g., thinking, structuring, reflecting, debating) but also for physical work (e.g., slow and legible writing, outlining). Long examinations place harsh strains on bodies. Tutors should add time for resting muscles and brains as well as for stretching. Exam settings also ignore physiological rhythms. Tutors can advise students on the effects of certain behaviours before examinations. Large infusions of caffeine, sugar, or nicotine, for instance, may seem to spark abilities but these are short term. They tend to be followed by crashes in energy levels and discomfort. In the long run, these can make the student’s work more difficult.

6. Other types of exams

The standard final examination is a previously unseen, time-constrained, invigilated exam undertaken after the course. Four alternative examination settings are frequently used in higher education:

  1. assessment under invigilated conditions of a previously unseen paper but students are allowed to consult their own notes (open book exams)
  2. assessment under invigilated conditions based on a paper previously disclosed to students (pre-published exams)
  3. assessment based on a paper distributed to students on which they may work openly and consult sources freely (take-away exams)
  4. assessment by direct questioning based on submitted work, unseen questions, or a previously published script (oral exams or viva voces; see Section 7)

Benefits: Standard final examinations are useful tools for assessing low and medium level cognitive skills, but they are poor tools for assessing high-level cognitive skills. These alternative examination formats allow a tutor to set tasks that minimise low-level skills and maximise higher-level skills. They also allow a tutor to monitor proficiency over a much wider range of key skills than is possible under the conditions of isolation required by standard final exams (Rowntree, 1987; Brown and Knight, 1994).

Open book exams reduce the reward for memorisation and increase the emphasis a tutor might place on data retrieval, comprehension, relations, application, and synthesis (Heywood, 1989). Time constraints reward preparation and information management. Access to familiar material relieves student anxieties grounded in fears of failed recall or demands for arcane knowledge (Beard and Hartley, 1984; Heywood, 1989). This freedom from rote learning allows students to dedicate their study effort towards deeper learning and analytical skills (Jackson and Jaques, 1976).

Pre-published exams release students from some of the problems imposed by invigilated examinations of fixed time length. They also provide opportunities for reflection and deeper analysis in settings students find more conducive to this type of work. They allow tutors to set tasks that are more complex and allow students to access whatever resources they think are relevant to their preparation. Students can focus their skill development to specific ends. Pre-published exams encourage group work while still assessing students based on their individual mastery of learning objectives under invigilated conditions.

Take-away exams carry most advantages of pre-published exams. In addition, they allow opportunities for additional research, fact checking, collaboration, and peer assessment. They also allow some flexibility for the students in setting their own pace when engaging the tasks to be assessed. Completion of take-away exams outside invigilated settings means students can rely on familiar writing techniques and technologies. These can improve their grammar, spelling, and writing structure. This comfort contributes to the overall sense students feel about the validity of this examination process.

Alternative examination formats can be a real boost for student appraisals of fairness and for the validity of the test process (Clift and Imrie, 1981). They also set tasks that more closely simulate those demanded in professional working environments (Beard and Hartley, 1984). Promoting these skills in a degree programme will teach students how to make quick and effective use of their resources while under time pressure.

Recommendations and Implementations: Tutors must make their expectations clear when using these alternative formats. These should include expectations regarding the specific constraints and opportunities provided in the chosen format as well as the specific procedures students will be expected to follow. Tutors also should present relevant criteria for assessment. Students can be included in the process of creating these criteria.

For open book examinations, tutors must give students guidance on what material is acceptable for use in the invigilated setting. Some universities have regulations on this matter and these must be respected. When students are allowed to bring anything at all into the examination, tutors risk considerable disruption owing to the sheer volume of material likely to appear. They also risk issues of equity, as the students who acquire key sources from libraries will have an unfair advantage over others. Only paper resources should be allowed. Tutors might set a limit on what students can bring, such as only required texts or one notebook (of fixed size) of handwritten materials. Students sometimes put a great deal of effort into exam aids when these are restricted to a fixed size and allowed only when handwritten by the student; this creates a setting for considerable active learning as they acquire, prioritise, and structure their information.

For pre-published papers, several decisions must be made to define the overall process. First, tutors must consider the length of the interval between distributing the paper and the invigilated examination itself. Papers that set complex tasks requiring background research or detailed reflection must timetable this additional work into the process. Depending on the tutor’s strategy, papers can be distributed as early as the first meeting of the course or as late as the day before the scheduled exam. Distributing a pre-published paper early might help efforts to connect the examined tasks to active learning over the term. Distributing it close to the scheduled examination might relieve anxiety and provide a short opportunity for reflection, but it assumes students have revised and can set time aside in their schedule to undertake the tasks set in the pre-published paper. This short notice might increase anxiety if some material required has not been learned and it appears “too late” to sort that out. For short intervals, tutors must consider the other commitments students might have within the available time. Working students will need time to plan open periods for concentration. Students with other examinations may carry an unfairly heavy burden.

Second, tutors must ensure the pre-published paper is available to all students at the same time following instructions in course documentation. This is especially important if the interval between pre-publication and the exam is short. Extra steps should be taken to ensure no student has grounds for complaint regarding access to the pre-published paper.

Third, with pre-published papers tutors must consider whether they allow an open book format to the actual examination. If students are allowed notes, they are likely to bring completed essays ready for copying or extensive notes ready for transferral. This has implications for the validity of the exam paper as an examination of the individual student.

Finally and most important, tutors must consider the relation between the pre-published paper and the actual paper students sit under examination conditions. Options include:

  1. providing students with the exact paper
  2. providing students with a general description of the tasks set in the paper
  3. providing students with model questions analogous to the questions set in the paper
  4. providing students with a population of exact questions from which the paper will be drawn

Each approach has its advocates and critics. Cain (2002) combines 4. and 2. Providing exact questions relieves anxiety and focuses revision. It also reveals to students the range of skills expected under examination conditions. Collaboration is assumed during revision; indeed, students frequently are found dividing the work, then actively teaching each other their speciality. Questions for the paper students actually sit during the invigilated period are selected at random from the population of possible questions. Providing more questions than will appear on the examination promotes revision across the breadth of the curriculum and learning objectives regardless of what actually appears on the exam paper. Combining this strategy with one that generally describes additional tasks set in the actual exam paper provides students with enough information to prepare for the examination as a whole but deliberately tests different sets of skills on different elements of the paper. Use of 4. preserves an opportunity to test student mastery of course outcomes as individuals. These tasks tend to involve high-level cognitive skills that draw on the broad knowledge of the syllabus gained from revision of pre-published questions or an application of medium-level cognitive skills to novel material.

Miller, Imbrie, and Cox (1998: 199–201) argue against 1. and 4., proposing 2.—i.e., tutors should publish detailed descriptions of the questions and their objectives. In their example, a description might ask students to prepare as follows:

“In this section you will be given the names of ten key figures discussed in the course and for five of them you will be asked to identify their most important primary source, then summarise the content of that source. I will be looking for your ability to weigh different notions of value in your choice of ‘most important,’ and I will be looking at the depth of knowledge and overall understanding you have regarding the sources you choose to describe. A good answer will be factually correct. A great answer will present both obvious and subtle layers of meaning for the work.”

For take-away exam papers, tutors should assume collaboration occurs. They also should consider how to safeguard against plagiarism and inappropriate assistance by others. Take-away papers should have a fixed and well-advertised schedule for dissemination and return. Timing issues are like those for pre-published papers. Any task requiring student access to certain resources must only be introduced when those resources are in sufficient supply for all students to access them.

Potential Problems: These alternative formats for set examinations face the same potential problems as final examinations unless they are planned with care. Brown and Knight (1994) provide a general discussion.

7. Viva voce and oral examinations

Viva voce (viva) and oral examinations involve a dialogue or interview in which the student is expected to provide responses to a series of either pre-prepared questions or set topics for discussion. The tutor typically poses the questions, and then serves as examiner. Students are assessed according to criteria set before the meeting. Assessment can be either formative or summative. Written materials produced by the student can supplement these examinations, or they can form the focus of discussion. The length of interview can vary and should increase as students progress through a degree programme. Interviews should involve a permanent record of proceedings both for auditing purposes and for their formative value in debriefing.

Benefits: Vivas are a common assessment tool in some academic settings. Interviews offer a mechanism for formative assessment. Vivas frequently assist examiners when they have difficulty classifying a borderline student in terms of qualitative degree categories. They also are common in cases of suspected plagiarism or other irregularities (Brown and Knight, 1994; Carroll and Appleton, 2001), or when assessing group work where tutors are pressed to identify the extent of each student’s accomplishment. Vivas allow tutors to probe the depth and breadth of student accomplishment. Vivas can supplement written material, and thus can be used to overcome student limitations with written communication (Clift and Imrie, 1981). They also can be used to test communication skills, stress management and analytical skills. Vivas have immediacy and allow personalisation.

In viva voces, students are asked to “think on their feet” (Fry, et al., 1999). The ability to present oneself in such circumstances is a vital transferable skill for any workplace. Skill development in this area is undervalued, and the rewards far outweigh the anxiety it is likely to cause.

Recommendations and Implementations: Students will need clear guidance on the procedure for a viva voce well in advance. This includes the agenda, some information on the tasks students can expect to engage, criteria for assessment, and guidance on etiquette. Students should have guidance on what materials they should revise (e.g., essays or other project materials or prepared submissions) and what materials they might bring (e.g., notes, outlines, or supplemental texts) to the examination. Tutors should have a clear plan for viva voces and follow the set agenda. The agenda may reveal actual questions used, or it may simply indicate a procedure to be followed. Consistent adherence to an agenda increases the validity of viva voces as an assessment tool.

Tutors should prepare their basic framework of questions in advance whether or not these are disclosed to students. Attention should be paid to the relative balance between closed questions (which require clear-cut answers and leave little room for ambiguity) and open-ended questions (which allow many possible answers and answers of indefinite length). Tutors also should clearly identify the relationship between the answers students provide and the determination of marks. Are points lost for factual errors? Can questions be skipped? Is the tutor more interested in an overall sense of ability or a display of mastery for specific tasks? Guidance in these areas increases the overall validity of viva voces, especially when comparing marks provided from one examination or examiner to the next. Students have the right to expect conditions as nearly identical as is practical. Burniston (1982) tests several aspects of viva voce structure regarding validity and reliability. Brown, Hitchman, and Yeoman (1971) test the reliability of oral examinations of work in chemistry. Tutors should plan for the content of viva voces to be compromised after the first student leaves the examination and ensure the first student has as much opportunity to do well as those undergoing examination later.

Vivas must proceed in a structured fashion. Students should be given simple questions at first or asked to deliver a prepared introduction. Questions relying on student recall of factual information can be affected by nervousness. Questions applying analytical skills can make use of props such as primary material or relevant passages from familiar secondary sources. Van Ments (1989) offers practical advice for structuring examinations. Examiners themselves should remain flexible with the questioning but stay focused on the agenda (Fry, et al., 1999). This flexibility should be combined with a “friendly but detached stance” that will put students at ease while maintaining the required official context of assessment (Brown, et al., 1997). The viva voce should have clear beginning and end points and proceed without outside interruption. Attention should be paid to room conditions—seating, lighting, technical equipment, and so on—before the start. Tutors should ensure viva voces are not interrupted once begun.

Students should be encouraged to prepare for viva voces in groups. These aid student preparation. They also reduce stress and prepare students for presenting their ideas orally (Brown, et al., 1997). Preparation for this kind of assessment should also allow for practice sessions for the student (and indeed staff where the task is unfamiliar). This practice can be either by means of a mock viva or by use of videotape examples during a briefing session (Brown, et al., 1997). If students are familiar with the structure and likely content of the assessment, anxiety can be greatly reduced (Clift and Imrie, 1981).

Attention must be paid to producing permanent records of student performance, either by recording the examination or through careful note-taking (Clift and Imrie, 1981; Bradford and O'Connell, 1998). Students can also be asked to self-assess their presentation.

With planning, vivas provide excellent opportunities for formative feedback. The viva offers the examiner unique access to students' thought processes and analytical skills, which can help identify weaknesses and provide opportunities to suggest improvements. Debriefing immediately after the viva will enhance its formative value and reduce uncertainty about achievement (Race, 1995).

Potential problems: An oral exam can be very stressful for the student (Clift and Imrie, 1981; Habeshaw, et al., 1993). Tutors carrying out the exam must plan carefully and maintain flexibility of approach, striking a fine balance between asking challenging questions and intimidating the student, and between keeping the student talking and directing the discussion. Intimidating tones and room arrangements must be avoided (Habeshaw, et al., 1993; Fry, et al., 1999).

Staff time commitments for vivas are determined by design decisions. The time taken can be minimised if the viva voce is undertaken alongside another type of assessment, such as posters or short essay papers. In this arrangement the viva itself can remain short and serve mainly as a formative exercise providing feedback on the written material submitted.

8. Conclusion

We realise our survey and synthesis approach only scratches the surface, and we do not aim to be exhaustive. Instead, we hope to fuel discussion of the appropriateness of our choices of assessment tools for monitoring the success of our learning objectives. We want to emphasise the strengths and weaknesses of our standard tools, and to introduce the range of alternatives available. Rather than re-invent wheels, we have sought to bring some of the relevant literature into this discussion. Curriculum designers need not work in isolation: a wealth of material is available about assessment tools and their appropriate application. Subsequent papers in this series will consider other assessment methods, such as posters, oral presentations, and Web evaluation and construction.

9. Acknowledgements

This project was supported with funding from the Philosophical and Religious Studies Subject Centre of the Learning and Teaching Support Network. Thanks to Graeme Gooday for support. Thanks also to UCL’s Department of Education and Professional Development for use of their resources and to the Library of the Institute of Education, University of London.

Endnotes

  1. Address correspondence to Dr Joe Cain, Department of Science and Technology Studies, University College London, Gower Street, London, WC1E 6BT, UK j.cain@ucl.ac.uk
  2. The sample of syllabi provided in Steffens (1992; 2001) shows set essays and final examinations are the main tools for course assessment in history of science. However, these compilations also show a wide range of other devices supplementing these main tools, including short critical essays, presentations, take-away examinations, quizzes, and the amorphous "participation". In her anthology of syllabi for courses focusing on women, gender, and history of science, Rusnock (1999) shows less emphasis on set essays and unseen examinations and more emphasis on formative writing, such as through journals.
  3. Knight (2001) provides a briefing on current concepts in assessment strategies. The impact of course design is discussed insightfully by Toohey (1999).
  4. Clift and Imrie (1981), Schneck (1988), and Brown, Race and Smith (1996) argue the nature of the assessment imposed upon students is a key factor in student choice of study technique and the depth of their learning. Students tailor their learning to maximise success in assessment while minimising study effort. Without careful alignment between course objectives and assessment demands (Brown, 2001), students abandon a tutor's carefully planned curriculum and experience a course through the criteria set by assessment (Harvey and Knight, 1996). In short, students 'take their cues from what is assessed rather than what lecturers assert is important' (Brown, et al., 1997: 7). Also see Schneck (1988).
  5. Bloom, Hastings, and Madaus (1971) provide extensive treatment of various formative and summative techniques. Hyland (2000) is one of innumerable authors defending the value of formative assessment.
  6. The trio of concepts of fairness, validity (the degree of fit between the learning objectives indicated to students and the learning achievements evaluated in an assessment), and reliability (assessment that is consistent and repeatable, applying the same standard across all students) is discussed in detail by Gipps (1994), Brown (2001), and Torrance (1994).
  7. Brown, Race, and Smith (1996) describe student impressions of skills being tested in typical examinations.

Bibliography

AHRB, “Advanced Research Awards in the Arts and Humanities”, (Arts and Humanities Research Board, 2002) last modified: no date; accessed: 10 April 2002, www.ahrb.ac.uk/research/

Allan, Joanna, “Learning Outcomes in Higher Education”, Studies in Higher Education, 21, 1996, 93-108.

Bauer, Henry H., “The New Generations: Students Who Don’t Study”, (Virginia Polytechnic Institution and State University, 1997) last modified: 15 November 1997; accessed: 10 April 2002, www.bus.lsu.edu/accounting/faculty/lcrumbley/study.htm

Baume, David, A Briefing on Assessment of Portfolios (York: Learning and Teaching Support Network, 2001), 20 pp.

Beard, R. and Hartley, J., Teaching and Learning in Higher Education, 4th edition (London: Paul Chapman Publishing Ltd., 1984), 333 pp.

Bell, Judith, Doing Your Research Project: A Guide for First-Time Researchers in Education and Social Science, 3rd edition (Buckingham: Open University Press, 1987), 176 pp.

Biggs, John, Teaching for Quality Learning at University (Buckingham: Open University Press, 1999), 250 pp.

Bloom, Benjamin Samuel, Hastings, John Thomas and Madaus, George F., Handbook of Formative and Summative Evaluation of Student Learning (New York: McGraw Hill, 1971), 923 pp.

Booth, Wayne, Colomb, Gregory and Williams, Joseph, The Craft of Research (Chicago: University of Chicago Press, 1995), 294 pp.

Bradford, M. and O’Connell, C., Assessment in Geography (Cheltenham, Gloucestershire: Geography Discipline Network, 1998), 38 pp.

Brown, George, Assessment: A Guide for Lecturers (York: Learning and Teaching Support Network, 2001), 24 pp.

Brown, George, Bull, Joanna and Pendlebury, Malcolm, Assessing Student Learning in Higher Education (London: Routledge, 1997), 317 pp.

Brown, P., Hitchman, P. J. and Yeoman, G. D., CSE: An Experiment in the Oral Examining of Chemistry (London: Methuen Educational, 1971), 103 pp.

Brown, Sally and Knight, Peter, Assessing Learners in Higher Education (London: Kogan Page, 1994), 317 pp.

Brown, S., Race, P. and Smith, B., 500 Tips on Assessment (London: Kogan Page, 1996).

Brown, Sally, Rust, Chris and Gibbs, Graham, Strategies for Diversifying Assessment in Higher Education (Oxford: Oxford Centre for Staff Development, 1994).

Burniston, Christabel, Creative Oral Assessment : A Handbook for Teachers and Examiners of Oral Skills (Southport: English Speaking Board (International), 1982), 203 pp.

Cain, Joe, “Plagiarism [course handout]”, (Department of Science and Technology Studies, University College London, 2000) last modified: 27 December 2000; accessed: 4 April 2002, www.ucl.ac.uk/sts/cain/116/116-plag.pdf

Cain, Joe, “Staff .. Dr Joe Cain”, (Department of Science and Technology Studies, University College London, 2002) last modified: May 2002; accessed: 28 May 2002, www.ucl.ac.uk/sts/cain/index.htm

Carroll, Jude and Appleton, Jon, Plagiarism: A Good Practice Guide (Oxford: Oxford Brookes University and Joint Information Systems Committee, 2001), 43 pp.

Chang, Hasok, “HPSCB218: History and Philosophy of the Physical Sciences Syllabus”, (Department of Science and Technology Studies, University College London, 2002) last modified: 8 January 2002; accessed: 8 March 2002, www.ucl.ac.uk/sts/admin/syllabus/

Cheng, Winnie and Warren, Martin, “Making a Difference: Using Peers to Assess Individual Students’ Contributions to a Group Project”, Teaching in Higher Education, 5, 2000, 243-255.

Clift, J. C. and Imrie, B. W., Assessing Students, Appraising Teaching (New York: John Wiley and Sons, 1981), 176 pp.

CPSU SAS, “Combating Test Panic”, (California Polytechnic State University, Student Academic Services, 2002) last modified: no date; accessed: 10 April 2002, www.sas.calpoly.edu/asc/ssl/tests.panic.tips.html

Crème, Phyllis and Lea, Mary R., Writing at University: A Guide for Students (Buckingham: Open University Press, 1997), 160 pp.

Davis, Mary Beth L., “Interactive Learning Within the Self: Writing the Research Notebook to Heighten Critical Thinking Skills”, (Department of Medieval Studies, Central European University, 1998) last modified: 29 March 1998; accessed: 4 April 2002, www.cep.org.hu/teachandlearn/szeged98/davis.htm

DFP, “Newspaper Writing 101”, (Detroit Free Press, 1997) last modified: 1997; accessed: 10 April 2002, www.freep.com/jobspage/academy/writing.htm

Dick, Jill, Writing for Magazines, 2nd edition (London: A&C Black, 1989), 198 pp.

Ellington, H., Percival, F. and Race, P., Handbook of Educational Technology (London: Kogan Page, 1993), 263 pp.

Engle, Michael, “The Seven Steps of the Research Process”, (Reference Services Division, Cornell University Library, 2001) last modified: 19 February 2001; accessed: 4 April 2002, www.library.cornell.edu/okuref/research/skill1.htm

Engle, Michael, Blumenthal, Amy and Cosgrave, Tony, “How to Prepare an Annotated Bibliography”, (Reference Services Division, Cornell University Library, 1998) last modified: 3 March 1998; accessed: 4 April 2002, URL www.library.cornell.edu/okuref/research/skill28.htm

Fairbairn, Gavin J. and Fairbairn, Susan A., Reading at University: A Guide for Students (Buckingham: Open University Press).

Fairbairn, Gavin J. and Fairbairn, Susan A., Reading, Writing and Reasoning: A Guide for Students, 2nd edition (Buckingham: Open University Press, 1996), 256 pp.

Fallows, S. and Steven, C., Integrating Key Skills in Higher Education (London: Kogan Page, 2000), 251 pp.

Fawthrop, Tom, Education or Examination (London: Radical Student Alliance, 1968), 70 pp.

Fletcher, A, Arnot, M, Bates, D, Clark, C, Daunton, M, Dickinson, H, Doran, Susan, Doyle, W, Eastwood, D, Evans, E, Jones, A, Lloyd-Jones, R, McFarland, E, Porter, A, Stafford, P and Tosh, J, History (Gloucester: Quality Assurance Agency for Higher Education, 2000), 12 pp.

Fry, H, Ketteridge, S. and Marshall, S., A Handbook for Teaching and Learning in Higher Education (London: Kogan Page, 1999), 408 pp.

Gash, Sarah, Effective Literature Searching for Research. 2nd edition (Aldershot: Gower Publishing, 2000), 134 pp.

Gipps, Caroline, A Fair Test? Assessment, Achievement and Equity (Milton Keynes: Open University Press, 1994), 308 pp.

Gooday, Graeme (ed), History of Science, Technology and Medicine (HSTM). Draft Supplement to History Benchmark Statement. version 2. 30 March 2002 (unpublished, 2002).

Goodwin, Vicki and Bishop, Juliet, “Revision and Examination”, (The Open University, 2001) last modified: August 2001; accessed: 10 April 2002, www3.open.ac.uk/learners-guide/learning-skills/revision/

Gowers, Ernest, The Complete Plain Words, 3rd edition (London: Penguin Books, 1986), 288 pp.

Griffiths, T., Donelan, M. and Walker, P. J., Key Skills in Higher Education: a Paper Prepared by Education and Professional Development (London: University College London, Education and Professional Development, 1999), 36 pp.

Habeshaw, S., Gibbs, G. and Habeshaw, T., 53 Interesting Ways to Assess Your Students (Bristol: Technical and Educational Services Ltd., 1993), 191 pp.

Hampson, Liz, How’s Your Dissertation Going? (Lancaster: Unit for Innovation in Higher Education, 1994), 73 pp.

Hart, Chris, Doing a Literature Review (London: Sage Publishers, 1998), 240 pp.

Harvey, L. and Knight, P., Transforming Higher Education (Buckingham: The Society for Research into Higher Education & Open University Press., 1996), 203 pp.

Hennig, Kathy, “The Seven Qualities of Highly Successful Web Writing”, (ClickZ.com, 2000) last modified: 12 December 2000; accessed: 10 April 2002, www.clickz.com/design/onl edit/article.php/833861

Henry, Jane, Teaching Through Projects (London: Kogan Page, 1994), 160 pp.

Heywood, John, Assessment in Higher Education, 2nd edition (Chichester: John Wiley and Sons, 1989), 416 pp.

Hounsell, Dai, “Reappraising and Recasting the History Essay”, in Booth, Alan and Hyland, Paul (eds), The Practice of University History Teaching (Manchester: Manchester University Press, 2000), pp. 181-193.

Hunter, Dale, Bailey, Anne and Taylor, Bill, The Facilitation of Groups (Aldershot, Hampshire: Gower, 1996), 212 pp.

Hyland, Paul, “Learning from Feedback on Assessment”, in Booth, Alan and Hyland, Paul (eds), The Practice of University History Teaching (Manchester: Manchester University Press, 2000), pp. 233-247.

Ivanic, Roz, Clark, Romy and Rimmershaw, Rachel, “What am I supposed to make of this? the messages conveyed to students by tutors’ written comments”, in Lea, Mary R. and Stierer, Barry (eds), Student Writing in Higher Education (Buckingham: Open University Press, 2000), pp. 47-65.

Jackson, D. and Jaques, D. (eds), Improving Teaching in Higher Education (London: University Teaching Methods Unit, 1976), 154 pp.

Jaques, David, Learning in Groups: A Handbook for Improving Groupwork, 3rd edition (London: Kogan Page, 2000), 310 pp.

Kies, Daniel, “Writing a Working Bibliography”, (Department of English, College of DuPage, 2002) last modified: 22 February 2002; accessed: 4 April 2002, papyr.com/hypertextbooks/engl 103/write2.htm

Knight, Peter, A Briefing on Key Concepts: Formative and Summative, Criterion and Norm-Referenced Assessment (York: Learning and Teaching Support Network, 2001), 28 pp.

Knight, Peter and Edwards, A. (eds), Assessing Competence in Higher Education (London: Kogan Page, 1995), 189 pp.

Kuhn, Deanna, Weinstock, Michael and Flaton, Robin, “Historical Reasoning as Theory-Evidence Coordination”, in Carretero, Mario and Ross, James F. (eds), Cognitive and Instructional Processes in History and Social Sciences (Hillsdale, NJ: Lawrence Erlbaum Associates, 1994), pp. 377-401.

Lea, Mary R. and Stierer, Barry (eds), Student Writing in Higher Education (Buckingham: Open University Press, 2000), 205 pp.

Lea, Mary R. and Street, Brian V., “Student Writing and Staff Feedback in Higher Education: An Academic Literacies Approach”, in Lea, Mary R. and Stierer, Barry (eds), Student Writing in Higher Education (Buckingham: Open University Press, 2000), pp. 32-46.

Macintosh, H. G. (ed), Techniques and Problems of Assessment (London: Edward Arnold, 1974), 285 pp.

McMurrey, David, Power Tools for Technical Communication (Fort Worth, TX: Harcourt College Publishers, 2002), 459 pp.

Mencher, Melvin, Basic Media Writing, 6th edition (New York: McGraw Hill, 1998), 528 pp.

Mencher, Melvin, News Reporting and Writing, 8th edition (New York: McGraw Hill, 1999), 816 pp.

Miller, Allen H., Imrie, Bradford W. and Cox, Kevin, Student Assessment in Higher Education: A Handbook for Assessing Performance (London: Kogan Page, 1998), 288 pp.

Murphy, Roger, A Briefing on Key Skills in Higher Education (York: Learning and Teaching Support Network, 2001), 20 pp.

Mutch, Alistair and Brown, George, Assessment: A Guide for Heads of Department (York: Learning and Teaching Support Network, 2001), 21 pp.

Nicholson, Tony and Ellis, Graham, “Assessing Group Work to Develop Collaborative Learning”, in Booth, Alan and Hyland, Paul (eds), The Practice of University History Teaching (Manchester: Manchester University Press, 2000), pp. 208-219.

Nielsen, Jakob, “Writing for the Web”, (Sun Microsystems, 1995-2002) last modified: no date; accessed: 15 April 2002, www.sun.com/980713/webwriting/

Northedge, Andrew, The Good Study Guide (Milton Keynes: Open University Press, 1990), 248 pp.

NSF, Grant Proposal Guide (Washington, DC: Government Printing Office for National Science Foundation, 2001), 49 pp.

Phillips, Estelle and Pugh, Derek S., How to Get a PhD: A Handbook for Students and Their Supervisors, 2nd edition (Buckingham: Open University Press, 1994), 203 pp.

Pirie, David B., How to Write Critical Essays: A Guide for Students of Literature (London: Routledge, 1985), 139 pp.

Race, Phil, “The Art of Assessing”, New Academic, 4 (3), 1995.

Race, Phil, How to Get a Good Degree (Buckingham: Open University Press, 1999), 272 pp.

Race, Phil, How to Win as a Final-year Student (Buckingham: Open University Press, 2000), 192 pp.

Race, Phil, A Briefing on Self, Peer and Group Assessment (York: Learning and Teaching Support Network, 2001), 24 pp.

Race, Phil and Brown, S., 500 Tips For Tutors (London: Kogan Page, 1993), 129 pp.

Rael, Patrick, “Reading, Writing, and Researching for History”, (History Department, Bowdoin College, 2000) last modified: August 2000; accessed: 4 April 2002, academic.bowdoin.edu/WritingGuides/

Rowntree, Derek, Assessing Students: How Shall We Know Them?, revised edition (London: Kogan Page, 1987), 273 pp.

Rowntree, Derek, Learn how to study, 4th edition (New York: Time Warner Paperbacks, 1998), 243 pp.

Royal Society, “Funding for research scientists”, (The Royal Society, 2002) last modified: no date; accessed: 10 April 2002, www.royalsoc.ac.uk/funding/

Rusnock, Andrea (ed), Women, Gender, and the History of Science Syllabus Sampler (Seattle, WA: History of Science Society, 1999), 150 pp.

Rust, Chris, A Briefing on Assessment of Large Groups (York: Learning and Teaching Support Network, 2001), 25 pp.

Schneck, R., Learning Strategies and Teaching Styles (London: Plenum, 1988).

Sexty, Suzanne, “How to Write Annotated Bibliographies”, (Queen Elizabeth II Library, Memorial University of Newfoundland, 1999) last modified: 1 October 1999; accessed: 4 April 2002, www.mun.ca/library/research help/qeii/annotated bibl.html

Smith, Clarissa and Smith, Marty Byers, “Historical Research Online”, (Clarissa Smith and Marty Byers Smith, 2002) last modified: unknown; accessed: 4 April 2002, members.aol.com/historyresearch/

Stefani, Lorraine and Carroll, Jude, A Briefing on Plagiarism (York: Learning and Teaching Support Network, 2001), 12 pp.

Steffens, Henry (ed), History of Science Syllabus Sampler I (Seattle, WA: History of Science Society, 1992), 249 pp.

Steffens, Henry (ed), History of Science Syllabus Sampler II (Seattle, WA: History of Science Society, 2001), 203 pp.

Strunk, William, Jr. and White, E. B., The Elements of Style, 3rd edition (New York: Macmillan, 1979), 92 pp.

Thomson, Anne, Critical Reasoning: A Practical Introduction (London: Routledge, 1996), 177 pp.

Thorley, Lin and Gregory, Roy (eds), Using Group-Based Learning in Higher Education (London: Kogan Page, 1994), 194 pp.

Toohey, Susan, Designing Courses for Higher Education (Buckingham: Open University Press, 1999), 224 pp.

Torrance, Harry, Evaluating Authentic Assessment: Problems and Possibilities in New Approaches to Assessment (Milton Keynes: Open University Press, 1994), 192 pp.

Tracy, E., Student’s Guide to Exam Success (Buckingham: Open University Press, 2002), 192 pp.

Turk, Christopher and Kirkman, John, Effective Writing: Improving Scientific, Technical and Business Communication, 2nd edition (London: E & FN Spon, 1989), 277 pp.

UFl OTL, “Good Record Keeping: Procedures for Academic Laboratory Settings”, (Office of Technology Licensing, University of Florida, 2001) last modified: 30 March 2001; accessed: 4 April 2002, rgp.ufl.edu/otl/goodrecords.html

US CS, “Dealing with Examination Anxieties”, (University of Sheffield Counselling Service, 2000) last modified: 27 June 2000; accessed: 10 April 2002,

Van Ments, Morry, The Effective Use of Role Play, 2nd edition (London: Kogan Page, 1989), 186 pp.

Voss, James F., Carretero, Mario, Kennet, Joel and Ney Silfies, Laurie, “The Collapse of the Soviet Union: A Case Study in Causal Reasoning”, in Carretero, Mario and Ross, James F. (eds), Cognitive and Instructional Processes in History and Social Sciences (Hillsdale, NJ: Lawrence Erlbaum Associates, 1994), pp. 403-429.

Wilson, Di, “Plagiarism”, (Presbyterian Ladies’ College, 2000) last modified: 27 April 2000; accessed: 4 April 2002, www.plc.vic.edu.au/Library/plagiarism/plag.htm




This page was originally on the website of The Subject Centre for Philosophical and Religious Studies. It was transferred here following the closure of the Subject Centre at the end of 2011.
