Online and Open-Book Assessment

Assessment has a key role in Trinity’s educational mission. It both fosters and certifies student learning and provides an indicator of the quality of the learning taking place at Trinity. Assessment also drives learning: students are typically motivated to orient their learning towards what they think will be assessed. This suite of resources outlines key principles to consider when designing and conducting assessments that uphold academic integrity in an open-book or hybrid context.


Guidelines on the use of open-book assessments

1. Check that students understand what you’re assessing in this context and why, e.g. clarify that the assessment task is both outcomes- and process-based. Open-book and closed-book assessment tasks typically ask students to showcase and demonstrate their learning in different ways. Questions suitable for one mode of assessment are not always appropriate for the other.

2. Consider whether the assessment is appropriate for a remote context or whether it needs adapting. All open-book assessment tasks should align with module/programme learning outcomes. Consider designing an ‘inclusive’ assessment, e.g. avoiding any need to retrofit ‘special accommodations’ later [discussed in Fig. 1].

3. Consider the context in which an assessment is to take place. All open-book assessment tasks should uphold academic standards. Consider how best to foster a climate of self-regulation, e.g. through the adoption of an honour code and making expectations explicit [discussed in Fig. 2].

4. Open-book assessment tasks should not exceed the workload required of a student in a closed-book context. Consider setting (firm) limits on the time students should spend on an assessment. If you choose to use oral exams and ‘Part B’ questions (e.g. justify your answer/evidence your calculation) as part of your assessment strategy to test mastery of content and thought ownership, the time to complete these should also be included in your workload calculations (see: ‘Workload and Assessment Considerations’).

5. Any/all modifications to assessment strategy (e.g. a move towards open-book) should be communicated to staff and students: ensuring academic integrity in open-book assessment contexts requires students to understand the instructions provided for the task as well as the process underpinning the task itself. A move to an open-book assessment strategy might also be discussed with external examiners, and the nature of open-book assessment tasks reflected in rubrics and task guidelines.

Addressing Assessment in an Open-Book Context

Context Inequity

Traditional exam conditions provide a shared experience and context for those sitting exams. Invigilation (proctoring) promotes a similar ‘shared experience and context’ and strongly mitigates the risk of cheating. The limited duration of an exam is intended to encourage students to be selective with what they reproduce on their exam paper. Accommodating Covid-19 restrictions precludes this equity of experience in time, duration, and physical environment. Students do not universally have access to stable internet connections, nor are they guaranteed access to a quiet, undisturbed workspace. Replicating face-to-face (f2f) exam conditions in a hybrid learning environment is extremely challenging for these reasons and more.

Digital proctoring can be beneficial in certain circumstances but does not offer an automatic solution to ‘recreate’ the context equity of f2f exams: in addition to students’ legitimate concerns about invasion of privacy or staff concerns about the cost of proctoring software, there are potential equity of access issues around connectivity, and significant potential for GDPR-related data breaches in terms of storage and transfer of sensitive personal data. 

Changing the nature of an assessment itself – e.g. preparing and conducting assessments suitable for use in an open-book context – is likely to have a greater positive effect on learning than trying to replicate closed-book contexts outside of a traditional assessment approach.

Fig. 1. Addressing Assessment in an Open-Book Context (Word Doc 20 KB)

Moving from Closed-Book to Open-Book may…

  • Require modification of the broader assessment strategy, e.g. re-thinking (expanding) the role of ‘continuous assessment’. (N.b. continuous assessment = discrete summative assessment tasks taking place across the term, such as essays or lab reports, as opposed to traditional end-of-module exams.)
  • Place additional workload on staff as they re-design or re-purpose assessment tasks for open-book use.
  • Make it challenging, if not impossible, to test ‘recall’ effectively:
    1. Un-proctored remote assessment is likely to be open-book, whether access to supporting resources is authorised or not.
    2. Proctored remote assessment is not an automatic solution to the need to test recall effectively (data storage risks, privacy concerns, cost of proctoring, infrastructure).
    ‘Live’ short oral exams may be an appropriate way to test ‘recall’-style tasks, where these are essential.
  • Require effort to familiarise students with the concepts and practices of open-book assessment.
  • Jeopardise academic integrity:
    1. Not all assessment tasks are suitable for open-book use, particularly those focused predominantly on factual recall. Solutions to recall-style tasks can easily be located online by students, even within a strict time limit.
    2. Context inequity is more of a risk than with in-person proctored exams (e.g. addressing connectivity).
    3. There is a potential risk of undesirable collaboration/collusion between students, especially where a time window is extended.

Potential for Open-Book to Enhance Teaching, Learning, and Assessment

  • Open-book assessment tasks are best used to probe conceptual or applied knowledge, or to test students’ capacity to manipulate knowledge.
  • Can improve depth of learning rather than shallow engagement, e.g. less ‘binge-learning’ before a closed-book exam.
  • Can enhance engagement with disciplinary content (capitalising on the role of assessment in driving learning).
  • Can support redistribution of assessment load more evenly across the module/programme, e.g. through a reduction in the prevalence of high-stakes final exams (recognising the iterative nature of student learning across a module/programme).
  • Can reduce ‘bottlenecks’ in the volume of exam assessment to correct.
  • Can reduce the need for ‘special accommodations’ in a diverse group of learners.
  • Can reduce invigilation resourcing needs (e.g. staff time, printing costs).
  • Can enhance information retrieval skills (e.g. students still need to be able to locate appropriate notes/reference materials to support them with an assessment within a time window).
  • Can reduce exam stress (e.g. less pressure on ‘one-shot’ assessment where students know they can support their work with notes).
  • Can facilitate greater academic integrity, e.g. where steps are taken to minimise ‘recall’-type tasks (see the ‘Repurposing Exam Questions’ resource).
  • Can facilitate greater assessment integrity where tasks are (re)designed for a hybrid context, acknowledging that in a remote context all assessments carry some risk of open-book behaviours.
  • Can take context inequities into account.
  • Can be supported through the integration of digital technologies, such as similarity checkers like Turnitin.
  • Can have a transformative effect on learning (e.g. can enable a shift in student mindset from knowledge acquisition to knowledge production).
  • Can enhance the ‘authenticity’ of the assessment experience (see ‘Enhancing ‘Authenticity’ in Assessment’).

Enhancing 'Authenticity' in Assessment

There is always value in asking students to structure their thoughts when they demonstrate their learning. Classroom formative assessment techniques and summative exams and essays are traditionally how we expect students to demonstrate their capacity to do this. At the same time, many high-stakes assessment tasks often used at universities (e.g. closed-book handwritten exam essays or calculations) have little relevance to the kinds of skills and activities that a graduate will engage in after graduation. Being able to ‘do’ a subject in an exam context as a once-off proof of memorisation does not reflect being able to ‘do’ a subject in practice. There are relatively few professional fields, for example, where practitioners are required to complete a task without access to supporting resources: health professionals might check dosage titrations of medication; an academic or researcher might consult scholarly journals and publications to situate their research in a field; a lawyer might review previous cases and rulings; a policy advisor might be asked to summarise several longer documents into a 300-word précis.
In an ‘authentic’ assessment, the task typically asks students to use and apply their learning and skills, e.g. as a role-play or scenario, as the completion of a real-world task, or as an assessment in a workplace setting. Learning is assessed by seeing what students can ‘do’ with their knowledge, rather than just ‘showing’ their knowledge – and there is an additional challenge in a remote environment in ensuring that a student is ‘using’ their own knowledge to complete a task. Authentic assessments have great potential for engaging students in their learning and assessments, and are particularly appropriate for use in an open-book environment as there is typically no ‘one’ right answer or solution to a task. Many colleagues at Trinity are already using ‘authentic’ assessments as part of a broader assessment strategy.
Authentic assessments tend to be more unpredictable (less ‘gameable’ by students); demand higher-order thinking of students; and are completed over days, weeks, and months rather than in minutes or hours. They require students to invest a significant amount of effort in an assessment to ‘perform’ well. This table, drawing on Wiggins’ prescient (1998) work, outlines differentiators between traditional and authentic assessments:


Fig. 2. Authentic Assessments (Word Doc 15KB)

Typical task: Requires one or more correct responses.
Authentic task: Requires a high-quality product or performance, and a justification of the solutions to problems encountered.
Indicator of authenticity: Correctness is not the only criterion; students must be able to justify their answers.

Typical task: Must be unknown to the student in advance to be valid.
Authentic task: Should be known in advance to students as much as possible.
Indicator of authenticity: The tasks and standards for judgment should be known or predictable.

Typical task: Is disconnected from real-world contexts and constraints.
Authentic task: Aligned to real-world contexts and constraints; requires the student to “do” the subject.
Indicator of authenticity: The context and constraints of the task are like those encountered by practitioners in the discipline.

Typical task: Contains items that isolate particular skills or facts.
Authentic task: Integrates challenges in which a range of skills and knowledge must be used in coordination.
Indicator of authenticity: The task is multifaceted and complex, even if there is a right answer.

Typical task: Includes easily scored items.
Authentic task: Involves complex tasks for which there may be no single answer.
Indicator of authenticity: The validity of the assessment is not sacrificed in favour of reliable scoring.

Typical task: Is “one shot”; students get one chance to show their learning.
Authentic task: Iterative; contains recurring tasks.
Indicator of authenticity: Students may use particular knowledge or skills in several different ways or contexts.

Typical task: Provides a score.
Authentic task: Provides usable diagnostic information about students’ skills and knowledge.
Indicator of authenticity: The assessment is designed to improve future performance, and students are important “consumers” of such information.

Adapted from https://citl.indiana.edu/teaching-resources/assessing-student-learning/authentic-assessment/index.html

 

Fig. 3. Enhancing Assessment Integrity in Open-Book Assessment (Word Doc 17KB)

Consideration/action point: Success strategies for use in closed-book environments do not automatically work in open-book assessment.
Implication for learning: This is an extension of the challenge students face in transitioning to university learning, e.g. moving away from the rote learning of model answers. Encourage students to create ‘revision documents’/content maps that they can use to support themselves in open-book contexts (e.g. capitalise on the role of assessment in driving student learning).

Consideration/action point: Think about the nature of the task.
Implication for learning: Open-book assessments should require students to manipulate and apply knowledge, not just locate or summarise/rewrite it.

Consideration/action point: Consider revising the nature of module assessment.
Implication for learning: Replace a small number of high-stakes assessments (e.g. three essays in one three-hour exam) with regular lower-stakes tasks (e.g. three shorter essays + an oral exam to test ownership, or equivalent). In an online environment, a greater number of lower-stakes assessment tasks is likely to reduce dropout and encourage engagement.

Consideration/action point: Consider revising the ‘boundaries’ set on a task, e.g. around word-limits, time-limits, access to resources.
Implication for learning: Encourage students to be discerning and selective about what they include in a response. Students should be aware of the reasoning behind the decision to limit their suggested resource list. Setting boundaries around the time available to students to complete a task, e.g. making your expectations of ‘task duration’ explicit, is key to upholding academic integrity. This approach could be promoted through e.g. the use of an honour code or through automation/adaptive release in the VLE (see the sketch after this table).

Consideration/action point: Question phrasing matters in open-book environments.
Implication for learning: Adding the words ‘critically evaluate’ to a task does not automatically make it suitable for an open-book context. Without additional limitations, students can resort to ‘Google’ or another search engine to contextualise a task without interrogating it closely.

Consideration/action point: Remove/minimise ‘google-ability’.
Implication for learning: If students can find an answer to an assessment task using ‘Google’ or another search engine, it is likely unsuitable for open-book delivery. Consider modifying the assessment task, requiring students to submit a ‘Part B’ (e.g. justify your answer/evidence calculations) to test thought ownership.

Consideration/action point: Increasing surveillance does not mean increasing student success. Where assessment and teaching take place remotely, it is harder to control student actions completely.
Implication for learning: Consider carefully whether the use of online proctoring software is appropriate for your context. Online proctoring/invigilation technologies can be used to ‘take control’ of a student’s computer during an allotted period of time. In theory this ‘proves’ that students do not have access to resources or assistance beyond the device on which they are taking the exam. However, the use of these technologies raises significant privacy issues, and proctoring tools do not guarantee that students cannot access materials, resources, or the open internet on alternative devices.
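To make the ‘automation/adaptive release’ point above concrete, the sketch below shows one generic way a fixed task window might be checked outside of any particular VLE. This is a minimal illustration only: the function names, the 40-minute window, and the grace period are all illustrative assumptions, not a description of any VLE’s actual adaptive release feature.

    from datetime import datetime, timedelta, timezone

    # Illustrative sketch: a generic timed-window check, not a real VLE API.
    # Assumed policy: each student has a fixed window (here, 40 minutes)
    # from the moment they open the task.

    TASK_DURATION = timedelta(minutes=40)  # the explicit 'task duration' expectation

    def window_close(start_time: datetime) -> datetime:
        """Return the time the window closes for a student who starts at start_time."""
        return start_time + TASK_DURATION

    def submission_on_time(start_time: datetime, submitted_at: datetime,
                           grace: timedelta = timedelta(minutes=2)) -> bool:
        """True if the submission lands inside the window (plus a small grace period)."""
        return submitted_at <= window_close(start_time) + grace

    # Example: a student opens the task at 10:00 UTC.
    start = datetime(2021, 5, 10, 10, 0, tzinfo=timezone.utc)
    print(submission_on_time(start, start + timedelta(minutes=38)))  # True: within the window
    print(submission_on_time(start, start + timedelta(minutes=45)))  # False: window missed

In practice the same policy would be configured through the VLE’s own timed-release settings rather than custom code; the point is simply that an explicit, machine-enforceable ‘task duration’ removes ambiguity for students.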

 

Open-Book Assessment: Practice Implications

The tasks outlined below provide some context and information around a range of potential assessment tasks for use in an open-book context. Some of the tasks suggested are likely already in use in an open-book context; some may work better in a hybrid context, e.g. in a combined approach where some assessment tasks are open-book and some are closed-book. Where hybridity in assessment is suggested, a brief explanation accompanies the task, which is marked with an asterisk (*).
Please note that this table of potential assessment tasks is neither exhaustive nor prescriptive, nor does it represent formal Trinity policy on assessment. Where appropriate, the use of the ‘Turnitin’ tool to generate similarity reports can act to highlight potential cases of plagiarism. As a general rule, open-book assessment tasks should avoid factual recall, encouraging students to apply their knowledge to a new context.
At all times assessment should be aligned with the learning outcomes specific to a module or programme. These points may guide you as you consider how to implement open-book assessment tasks:

  • What are the learning outcomes for the module/programme?
  • What is the goal of the assessment, and how does it enable students to demonstrate that they have achieved these learning outcomes?
  • Have students been given clarity on what this assessment involves and how it enables them to demonstrate achievement of the learning outcomes?
  • Is the assessment reasonable/feasible in a hybrid learning and assessment context?
  • Thinking about how you phrase a task matters in open-book environments. Adding the words ‘critically evaluate’ to a task does not automatically make it suitable for an open-book context.

Fig. 4. Practice Implications for Assessment in Hybrid Contexts (Word Doc 27KB)

Each entry below gives the assessment task, followed by a description and considerations for use.

Written Article (e.g. in style of journal, patient notes, lab report).

Comparable to a continuous assessment essay; students might prepare a literature review/engage with research to inform a response to a prompt question (e.g. ‘Special Issue in XXX’).

Consider asking students to undertake ‘peer review’ of the first article produced, date-stamped with comments as formative assessment/proof of the date of first engagement: students submit their first formative piece, their final submission, and a one-pager justifying how they have acted (or not) on the comments provided by their peer.

The article format makes it easy to create a ‘track’ of student engagement with content/assessment without creating additional work for the academic (e.g. using track changes to mark version control).

Consider limiting the ‘range’ of sources students can engage with (e.g. no more than 15 sources, published no earlier than 2015).

Consider placing a tight word limit in line with journal submission (testing students’ capacity to select appropriate information).

Consider the authenticity of the task: it allows a focus on e.g. referencing and the need for accuracy in citation.

Can be expanded as a multi-stage assessment, e.g. peer review can be enacted, attention to referencing detail can be highlighted, and brevity and quality of writing sought through ‘editorial’ peer review.

(Annotated) Bibliography

Students develop the capacity to interrogate the appropriateness and value of particular sources/references.

Students might be asked to produce a list of 10 key references having reviewed a reading list of 30-40 sources and provide comment on their use.

Can be multimedia-oriented or traditionally focused (e.g. developing research skills with databases as well as with Google Scholar).

Scalable assessment practice.

Students create their own annotated bibliographies, share them with each other, and refine an annotated bibliography that could be used as the overall limited source list for an open-book assessment task later in the year.

Students see authenticity value in its use later in the semester/year.

Blog

Can provide students with an opportunity to write regularly and organise their thoughts/resources – particularly useful in online context where you cannot ‘see’ if students are developing in their capacity to self-regulate with study skills.

Regular deadlines can facilitate regular engagement, as opposed to boom-and-bust study practices.

Blogs are individualised and well suited to reflective writing. For collaborative writing work, a ‘wiki’ or a collaborative assignment (e.g. an annotated bibliography) is likely more appropriate.

Consider being explicit about the length of posts, e.g. 100-200 words. Blogs can be used either formatively or summatively.

Consider the ‘venue’ of the blog: keeping it within the VLE makes it easy to take a quick glance at quantitative data, e.g. how many blog posts an individual student has made in a module.

Consider the ‘venue’ of the blog: keeping it within the VLE also facilitates sensitive data being stored in a GDPR-conscious way.

‘Design a solution to [x] problem’.

Situating and analysing ‘wicked’ problems requires students to engage with a task on multiple levels, likely unearthing different challenges.

Challenging to automate marking: likely most manageable as group projects and small-team work. Works well in multi-disciplinary and cross-disciplinary groups.

Essay

Essay-style tasks previously used for continuous assessment are likely to transfer easily to use as open-book assessment tasks, but some additional tweaks may be required to promote academic integrity.

Compare/contrast, limited range questions, and scenario-based questions all strongly limit the chance to locate answers that can be directly copy/pasted by students from search engines (e.g. Google).

Consider resourcing implications, e.g. where students may be unable to access the library during a period of closure.

Where you might ask a student to write three essays in three hours in a closed-book exam, consider asking them to produce two essays throughout the term instead to encourage ‘spaced-out’ workloads.

Consider how you generally place limits on what you expect students to produce, e.g. word limits, references, expectations of sophistication in content or argumentation.

Consider using Turnitin within the VLE to generate similarity reports.

Spacing out work for students is likely to mitigate the ‘last-minute’ dash to put an essay together the night before the deadline.

Exam*

Exam questions can be adapted/modified for open-book use. 

Consider the implications of context inequity where exams are being undertaken remotely, e.g. duration of exam, location of student, technology inequity.

*Closed-book exams, e.g. using MCQs, risk greater challenges to academic integrity in an online context. Undertaking remote proctoring – e.g. invigilating remotely – is not a preferred approach unless academic integrity is at significant risk.

*Where you might ask a student in a closed-book context to write three essays/solve three proofs etc in three hours, consider asking them to produce the essays throughout the term instead to encourage more even engagement with workload across the module/programme.

Consider using adaptive release functions within VLE where exams feature some timed activity (e.g. MCQs + essay questions).

Consider requesting artefacts of ownership, e.g. visual images of ‘handwritten working out’/ ‘rough work’.

 

See resource: Repurposing F2f Exam Questions for Open-Book Exams.

Consider the implications of ‘spacing out’ assessment practices on workload for the programme team: who is responsible for ‘which’ set of essays and grading?

Multiple-Choice Questions (MCQs)*

MCQs are frequently used to assess recall and memorisation of facts and can be easily ‘gamed’ in open-book contexts unless care is taken. In a fully remote context, they are best used in a ‘hybrid’ way, with a parallel task (e.g. combined with a viva or the submission of a ‘Part B’ question) to test concept mastery.

In a remote context, their use is likely best automated: e.g. multiple banks of questions available and randomised through the VLE to reduce the likelihood of undesirable sharing of answers between students.

The adaptive release function for sequencing access to activities within the VLE might be used to limit the amount of time a student can spend on each run of questions, e.g. 40 minutes once the session has begun.

Consider the use of ‘best fit’ expansions to MCQ items, e.g. ‘Which of these solutions is most appropriate and why?’ – answered either in short written form or with evidence of calculation (this can be used to limit googleability).

Consider using assertion/reason questions to construct an MCQ set.

Consider asking students to provide reasons for/against each answer in a set to encourage engagement (and then requesting that the ‘process artefact’ created is uploaded after the session).

Consider having students create MCQs as part of presentation or portfolio work.

Consider whether all students need to take the same questions or whether a randomised set of questions from a common bank of questions might be used to minimise the chance of collaboration or collusion.
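As a minimal illustration of the randomisation consideration above, the sketch below shows how a per-student question set might be drawn from a common bank. Everything here is an illustrative assumption (the bank size, the set size, the seeding-by-student-ID approach): in practice a VLE’s built-in question pools would do this for you rather than custom code.

    import random

    # Illustrative sketch: drawing a per-student question set from a common bank.
    QUESTION_BANK = [f"Q{n}" for n in range(1, 41)]  # assumed bank of 40 questions
    QUESTIONS_PER_STUDENT = 10                       # assumed set size per student

    def questions_for(student_id: str) -> list[str]:
        """Return a reproducible random selection of questions for one student."""
        # Seeding with the student ID means each student's draw can be
        # regenerated later (useful for moderation), while different
        # students receive different sets.
        rng = random.Random(student_id)
        return rng.sample(QUESTION_BANK, QUESTIONS_PER_STUDENT)

    print(questions_for("student-001"))
    print(questions_for("student-002"))  # a different (but reproducible) set

Reproducibility is worth preserving whichever tool is used: if each student’s question set can be regenerated on demand, moderation and external examining remain straightforward even though no two students sat identical papers.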

Objective Structured Clinical Examinations (OSCE)*

OSCEs are understood as an ‘authentic’ assessment task, assessing a student’s clinical performance and their demonstration of clinical competencies/communication skills. With the use of standardised patients, students are introduced to different scenarios through a series of patient stations.

They are ‘standardised’ to replicate the same challenges for each student. Examiners typically have standardised checklists against which to interpret the student’s performance.

Assessment of some practical skills may need to be prioritised for in-person demonstration.

Conducting remote OSCEs in some areas of specialism may be challenging and as such may be an assessment mode best prioritised for in-person use.

Remote OSCEs might be used to assess e.g. a student’s communication skills, to look at ‘bedside manner’, and/or an immediate response to a scenario posed.

Performance

If assessment is of a performance, how might this be best assessed remotely? Are multiple cameras required of a student to situate them in 3D virtual reality? Is audio quality adequate to assess student voice projection? Can the skills/skills competencies being assessed be assessed in a different way?

Assessment of practical skills may need to be prioritised for in-person achievement.

Poster

‘Research-style’ poster presentations can be a very effective way to encourage students to prepare resources for quick reference during an open-book exam or assessment. 

 

Consider placing limits on software suite that students can use to create posters, e.g. limiting production to within PowerPoint.

Consider testing ownership with a short viva (e.g. a high-level ‘pitch’) or asking for a highly limited reflective statement/essay accompanying the submission of the resource.

Portfolio

Students might be asked to collate previous assessment tasks and comment/mark up the files to highlight development in their learning across a module.

Potentially resource and time-intensive for the academic to mark/re-review/re-mark.

Most likely suitable for small classes.

Students need to be aware that they are responsible for signposting/directing assessors to evidence of learning outcomes achieved across the module/programme in their portfolio.

Practice-based assessment (e.g. of a practicum/professional placement/clinical attachment).

Competency-based assessment is often already hybrid in nature, e.g. made up of multiple smaller assessments contributing to a pass/fail determination of a student’s readiness for practice.

Assessment of some practical skills may need to be prioritised for in-person demonstration.

May need to be modified in light of newly ‘remote’ nature of assessment.

Practical exam (e.g. in Creative Arts, Nursing, Education etc).*

It is extremely challenging to undertake remotely a direct observation of students’ actual capacity to perform a practical task, where such observation is required and a written test cannot be used.

Scenario based assessment and simulations can be used, e.g. as they are in medical environments, but these are expensive/challenging to arrange.

Can any of the practical assessments be supplemented/redesigned to include other tasks?

Is the practical element of the assessment reassessed elsewhere in the programme?

 

Assessment of practical skills needs to be prioritised for in-person demonstration.

Presentation

Presentations can be used both summatively and formatively both in-person and remotely.

Students without good connectivity may struggle with a live presentation – consider whether students can submit a PowerPoint, a script, or a pre-recorded video and still demonstrate how they have achieved the learning outcome to be assessed.

Where students are presenting in synchronous contexts, consider using tools like Collaborate Ultra to record their presentation for content feedback afterwards.

Where students are presenting asynchronously, consider using e.g. the video upload feature of the VLE to manage uploads.

Quantitative reasoning exams

Consider asking students to submit proofs/demonstration of working out through the VLE when they submit assessment tasks. The ‘journal’ and ‘blog’ functions in the VLE are effective for their capacity to ‘date stamp’ submissions.

To guard against students simply putting questions into a web calculator, consider requesting proof/evidence of working-out (e.g. providing proof of the process of calculation rather than just the end-product answer).

Reflective Journal

Journaling, like blogging, is a good way to get students to write regularly.

They are particularly useful where reflection/reflective practice on development is privileged. Automating the submission process limits the chance of 25 reflective journal entries being written the night before the deadline, e.g. the journal can be used developmentally across the term.

Individual journals can be used formatively to inform a student’s summative submission of a reflective essay/article – facilitating engagement across the semester/year, limiting boom/bust learning cycles.

Consider being explicit about the minimum – and maximum – length of entries, e.g. 100-200 words.

Consider the ‘venue’ of the journal: keeping it within the VLE makes it easy to take a quick glance at quantitative data, e.g. how many posts an individual student has made in a module.

Consider the ‘venue’ of the journal: keeping it within the VLE also facilitates sensitive data being stored in a GDPR-conscious way.

Short Answer Question (SAQ) and Modified Essay Questions.*

Multi-stage SAQs may need to be reviewed to ensure they are not ‘googleable’ in individual chunks. They can work well to supplement MEQs to test authorship/mastery of content.

Can different stages of MEQs be integrated together to require students to demonstrate and synthesise their knowledge of an area, rather than being separated in a step-by-step multistage question?    

This style of assessment is readily adapted to enhance the sense of authenticity of the assessment task, e.g. use the SAQ prompt ‘write a 200-300 word summary treatment plan for [x]’ to build on the assessment of learning demonstrated through MEQs.

Video Submission

Consider asking students to submit assessments using other media.

Does the assessment need to take the form of an essay or text-based submission for a student to be able to demonstrate how they have achieved a particular learning outcome?

Consider being explicit as to which learning outcomes might be assessed using a video clip. Might a student, for example, be asked to ‘talk to their phone’, advising a patient or advocating to a politician?

Presentations could also be recorded and submitted as video clips, e.g. as straight-to-camera takes on a simple mobile phone camera.

Consider automating the submission process through the VLE: this lets you track submission dates, details, and see at a glance who has submitted files and not, without ‘maxing out’ your email inbox.

Consider testing ownership with a short follow-up viva (e.g. hybrid assessment). This does not have to be of all students – a ‘sample’ is enough to discourage collusion or collaboration.

Consider testing ownership by requiring an additional reflective statement/essay accompanying the submission of the video – while bearing in mind the resourcing implications for the programme team of creating further assessment load.

Viva* (oral exam)

Consider either replacing or supplementing a high-stakes assessment task (e.g. an essay) with a viva, or else using randomised vivas to test authorship of a sample of students’ submissions.

If conducting a viva remotely, ensure all participants are comfortable with the technological tool being used to ‘stage’ the viva. What are the contingency plans if the technology fails?

Consider that some students may be uncomfortable with the prospect of an oral exam: are students/examiners clear on the process and reach of the viva?

Have students had the chance to ‘practise’ the assessment type in a low-stakes environment? Can they re-sit the task if they have misunderstood the nature of a task?

Highly useful to promote integrity as part of a hybrid assessment approach.

Note that a short viva to test ownership may or may not be open book!