Key Considerations

This section covers other considerations that may be applicable when setting assessments for your modules.

Setting sub tasks for skills assessment

Summative sub-tasks

Approval of sub-tasks should recognise that assessment must be broadly equitable and not over-burdensome, in line with the principle that student engagement should be fostered by means other than frequent summative assessment.

Exceptionally, an assessment task can consist of two or more sub-tasks which together form a single assessment task. Sub-tasks fall into two categories:

  • A collection of related, small assessment sub-tasks which form a single assessment task submitted by one deadline date.
  • A collection of related, small assessment sub-tasks undertaken regularly (e.g. weekly lab tests).

If Module Leaders wish to use sub-tasks, these must be approved within the Faculty. Sub-task marks are recorded and managed locally by Module Leaders. The formal university regulations governing extenuating circumstances, referrals and deferrals do not apply to sub-tasks.

Sub-tasks can consist of time-constrained coursework (e.g. phase tests) but not examinations. Examinations must be set as a separate assessment task.

Avoiding bunching
Every year, module leaders need to check the summative task and sub-task assessment data for their modules to ensure information such as the title of the assignment and the due date/time is correct. This process usually runs from May to July. It is supported in the faculties by Student Services, who liaise with the academic teams to help validate assessment task information.

Course teams are responsible for avoiding assessment bunching by using the Task Clustering by Course Report to manually check the submission dates for all modules on a course and, where necessary, agreeing alternative submission dates with module leaders. Guidance on how to use the report is available.

Exemptions to the standard model

Standard assessment model and exemptions to the standard assessment model

The standard assessment model is the default position. Exemptions will only be permitted to meet PSRB or essential subject discipline and / or legislative requirements. This will be tested at the outset of course planning and will require support and agreement in principle from Faculty as well as university approval, based on a sound rationale and evidence from the PSRB. Further information about how to request such an exemption can be found on the Exemptions SharePoint site.

  • Standard assessment model: Individual assessment tasks do not have to be passed in order for the module to be passed, provided the overall module mark meets the minimum pass mark (40%, or 50% for level 7 modules). Pass/fail tasks are not appropriate in the standard assessment model.
  • Exemptions to standard assessment model: These modules can require a task to be passed at the minimum pass mark or be assessed on a pass/fail basis. Each assessment task has to be passed in order for the module to be passed. If the module contains one or more pass/fail assessment tasks, then the module is non-compensatable by default and the necessity for this must be demonstrated when the exemption is applied for.
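The difference between the two models can be sketched in code. This is a purely illustrative sketch under the simplifying assumption of equally weighted tasks; the function names are hypothetical and not part of any university system.

```python
# Illustrative contrast of the two assessment models described above.
# Assumes equally weighted tasks for simplicity; names are hypothetical.

def passes_standard_model(task_marks: list[float], level7: bool = False) -> bool:
    """Standard model: only the overall module mark must reach the pass mark."""
    pass_mark = 50 if level7 else 40
    overall = sum(task_marks) / len(task_marks)
    return overall >= pass_mark

def passes_exempted_model(task_marks: list[float], level7: bool = False) -> bool:
    """Exempted model: every individual task must reach the pass mark."""
    pass_mark = 50 if level7 else 40
    return all(mark >= pass_mark for mark in task_marks)
```

For example, a level 4 student with task marks of 35% and 55% averages 45%: they pass under the standard model but would fail an exempted module, because one task is below 40%.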

For more information about how to apply for an exemption, see the Exemptions Guidance.

In-module retrieval
What is it?

In-Module Retrieval (IMR) is a coursework assessment tool for use at task level. It refers to a feature of a module’s assessment design whereby students who have not achieved the minimum pass mark in an assessment task at the first attempt are given an opportunity to rework that assessment task. The rework would normally take place within a short time after the initial attempt, following feedback from tutors. For some assessment tasks it may be permissible for the rework to be undertaken after teaching has ended; if so, the rework must be completed prior to the relevant Departmental Assessment Board.

It is mandatory to make in-module retrieval available to students in all level 3 and level 4 modules for at least one assessment task, normally the first task which carries a substantial weighting. In-module retrieval is optional but encouraged at other levels. Departmental Boards must approve any request for a level 3 or 4 module not to permit in-module retrieval.

It must be clearly articulated to students where in-module retrieval is available for an assessment task. This will be via the Blackboard site for the module, the course handbook or via the assessment brief.

When IMR is available on a coursework task, it comes into effect if a student does not achieve the required pass mark. Students receive feedback on their initial attempt and then have the choice of reworking the same assessment, with the benefit of the feedback, to improve their work to a pass standard. Tutors marking the reworked submission are marking to threshold – meaning that they simply determine whether the piece is of a pass standard. If so, the student receives a mark capped at the minimum pass mark, or a pass in the case of pass/fail tasks. Students do not receive additional feedback.

If the reworked IMR attempt mark is lower than the original mark, then the original mark will stand and be entered into SITS (the University’s student information system). This mark will be used, together with marks from any other tasks, to calculate the overall module mark.
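The mark rule described above can be sketched as a minimal function. This is an illustrative sketch only; the function name and signature are hypothetical and not part of SITS or any university system.

```python
# Illustrative sketch of the IMR mark rule: a successful rework is capped
# at the minimum pass mark, and the original mark stands otherwise.
# Names are hypothetical.

def imr_final_mark(original: float, rework_passes: bool,
                   pass_mark: float = 40.0) -> float:
    """Return the mark recorded after an IMR attempt.

    original      -- mark awarded for the first attempt (below pass_mark,
                     since IMR is only available to students who failed)
    rework_passes -- tutor's threshold judgement on the reworked submission
    pass_mark     -- minimum pass mark (40, or 50 for level 7 modules)
    """
    if rework_passes:
        return pass_mark  # threshold marking: capped at the minimum pass mark
    return original       # unsuccessful rework: the original mark stands
```

For example, a level 4 student with an original mark of 30% whose rework reaches the pass standard would have 40% recorded; if the rework does not reach the pass standard, the 30% stands.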

Students do not have to take up the opportunity of IMR. If they choose not to complete IMR or complete it without achieving a pass standard, they may still be eligible for referral at the end of the module (depending on the outcomes of their other assessment tasks and the compensation status of the module).

IMR should be encouraged for all coursework assessments where there is sufficient time for the IMR submissions to be marked, moderated and the marks processed in time for an assessment board. Availability of IMR is specified in the module descriptor, so any task that does not currently include IMR will require a modification to existing approval before IMR can be introduced.

Why use it?

  • IMR allows students to “make good” on their assessment and so avoid referral in their module, which can create a burden for students and staff. The student does not complete a new piece of work, and the academic marking the work is only required to decide whether the work meets the pass mark threshold.
  • Students are required to engage with their feedback on their first attempt and consider what they need to do to achieve the pass threshold.
  • Successful completion of IMR helps to develop student confidence; they know they are progressing several weeks before they would be able to undertake a referral.
  • IMR is a more efficient use of student and staff time compared to the referral process.
  • The mandatory use of IMR for coursework assessment tasks at Level 4 supports student transition by familiarising students with assessment processes and expectations.

What you need to know

IMR is mandatory for all coursework assessment tasks at Level 4 where there is sufficient time to administer it and it is encouraged for all other eligible assessment tasks at other levels of study.

  • IMR is not permitted for students who have already achieved a pass mark on the first submission;
  • IMR is not permitted on examination assessment tasks;
  • IMR is only available if a student has made an attempt at the original assessment task. Students who fail to submit work by the submission deadline are classed as fail-to-submit and are therefore not permitted an IMR attempt;
  • Students cannot claim Extenuating Circumstances (ECs) against an IMR attempt. They may claim ECs for the first attempt within the timescales prescribed in the regulations. In cases where ECs are submitted for the first attempt but no decision has been made before the IMR deadline, the IMR opportunity is still available. If the EC is then accepted, students would have two options:

      o Take a deferral attempt, which would not be capped and would be a new piece of work. IMR cannot be used as a deferral attempt;
      o Decline the EC and take the IMR mark.

  • Academic colleagues should flag IMR attempts when providing final marks so that Student Administrators are aware.

How to use it effectively

Consider the timing of your assessment submissions

There needs to be sufficient time between the receipt of marks and feedback, reworking the assessment and marking the IMR before marks are required for assessment boards.

Consider the type of assessment and its suitability for IMR

Choose or design an assessment method that can accommodate IMR. Some online objective testing (e.g. multiple choice), where a fixed question set provides correct answers to students at the end of the test, is an example of an assessment task that cannot support IMR. It is also unlikely that group work can support IMR.

For more information about In-Module Retrieval, see the guidance.

Grade based assessment
Traditionally, at university, when individual pieces of work have been marked, the grade has been expressed as a mark from 0–100%. However, some universities have recently moved to a different approach in which the grade given for individual assessments is based on final degree classifications. This is called Grade Based Assessment (GBA).


In Art and Design, grade based assessment has been used for first- and second-year work. In grade based assessment, students receive one of eighteen grades, based on final degree classifications, as shown in the table below.

One of the benefits of this approach for students is that they can easily gain an understanding of how well they are doing. For example, if a student is aiming for a first class degree, they can readily see how close they are, based on the grades that they have already received. 

Degree class  | Grade           | Numerical equivalent | Indicative mark range
First         | Perfect 1st     | 100                  | 100
First         | Exceptional 1st | 96                   | 93–99
First         | High 1st        | 89                   | 85–92
First         | Mid 1st         | 81                   | 78–84
First         | Low 1st         | 74                   | 70–77
Upper second  | High 2.1        | 68                   | 67–69
Upper second  | Mid 2.1         | 65                   | 64–66
Upper second  | Low 2.1         | 62                   | 60–63
Lower second  | High 2.2        | 58                   | 57–59
Lower second  | Mid 2.2         | 55                   | 54–56
Lower second  | Low 2.2         | 52                   | 50–53
Third         | High 3rd        | 48                   | 47–49
Third         | Mid 3rd         | 45                   | 44–46
Third         | Low 3rd         | 42                   | 40–43
Fail          | Marginal fail   | 38                   | 35–39
Fail          | Mid fail        | 32                   | 30–34
Fail          | Low fail        | 18                   | 1–29
Zero          | Zero            | 0                    | 0
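The eighteen-point scale above is a straightforward lookup from a numerical mark to a grade. The sketch below illustrates that mapping using the lower bounds of the indicative mark ranges in the table; the function name and structure are hypothetical, not part of any marking system.

```python
# Illustrative lookup for the 18-point grade-based assessment scale.
# Boundaries are the lower bounds of the indicative mark ranges above;
# names are hypothetical.

GBA_SCALE = [  # (lower bound of range, grade), highest first
    (100, "Perfect 1st"),
    (93, "Exceptional 1st"),
    (85, "High 1st"),
    (78, "Mid 1st"),
    (70, "Low 1st"),
    (67, "High 2.1"),
    (64, "Mid 2.1"),
    (60, "Low 2.1"),
    (57, "High 2.2"),
    (54, "Mid 2.2"),
    (50, "Low 2.2"),
    (47, "High 3rd"),
    (44, "Mid 3rd"),
    (40, "Low 3rd"),
    (35, "Marginal fail"),
    (30, "Mid fail"),
    (1, "Low fail"),
    (0, "Zero"),
]

def grade_for_mark(mark: int) -> str:
    """Map a 0-100 numerical mark to its grade on the 18-point scale."""
    if not 0 <= mark <= 100:
        raise ValueError("mark must be between 0 and 100")
    for lower_bound, grade in GBA_SCALE:
        if mark >= lower_bound:
            return grade
    return "Zero"  # unreachable given the 0 lower bound, kept for safety
```

For example, a mark of 71 maps to "Low 1st", consistent with the 70–77 range in the table, so a student aiming for a first can see at a glance how close they are.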

Technology enhanced assessment and feedback
The principles of good assessment and feedback apply equally when technology is used to support and enhance assessment and feedback.

Technology can enhance assessment and feedback practices by:

  • providing greater variety and authenticity in the design and delivery of assessments;
  • improving learner engagement by offering interactive and repeatable formative assessments with integrated feedback;
  • providing choice in the timing and location of assessments;
  • supporting and capturing a wider range of skills including simulations, e-portfolios and interactive games, which are not easily assessed by other means;
  • ensuring submission, marking, moderation and data storage processes are efficient and help to reduce the administrative burden;
  • promoting consistency, accuracy and clarity of marking;
  • speeding up assessment processes, making assessment and feedback more immediate and richer;
  • increasing opportunities for students to act on feedback through e-portfolios;
  • supporting rich, innovative approaches through the use of creative and social media;
  • supporting online peer- and self-assessment activities and generating feedback;
  • giving students the opportunity to choose how they store and refer to feedback;
  • providing tools that provide evaluation feedback about the effectiveness of the module or course.

In addition, JISC (2011) encourages universities to value the use of technology-enhanced approaches to assessment and feedback. Specifically they highlight the need to ensure that, where technology is used, it:

  • supports the purpose of the task;
  • takes account of students’ technical skills and their diverse needs;
  • takes account of the contexts in which the assessment or feedback takes place;
  • does not unintentionally assess students’ technical or IT skills, exclude some students or make the task unreliable;
  • does not require students to engage with technology and media with which they are unfamiliar.

Technology can:

  • facilitate assessment and feedback activities and practices at scale in ways that were traditionally difficult to achieve;
  • be used to monitor students’ progress and provide information for improving learning and teaching;
  • streamline or enhance current provision and should not be used for its own sake.

The following uses of technology can add value:

  • dialogue and communication – online interaction via discussion forums, blogs and wikis can enrich feedback and help to clarify learning goals and standards, and can overcome distance and time constraints;
  • immediacy and contingency – interactive online tests and electronic voting systems can facilitate learner-led, on-demand formative assessment. Rapid feedback can then correct misconceptions and guide further study;
  • speed and ease of processing – instant feedback can be provided to learners and practitioners, providing information for curriculum review and quality assurance processes;
  • self-evaluative or self-regulated learning – peer assessment, collection of evidence and reflection on achievement in e-portfolios and blogs can generate ownership of learning and promote higher-order thinking skills, in turn improving performance in summative assessment;
  • ‘additionality’ – technology can make it possible to assess skills and processes that were previously difficult to measure, including the dynamic processes involved in learning. Technology can add a personal quality to feedback, even in large group contexts, and through efficiencies gained from asynchronous communication and automated marking, can enable practitioners to make more productive use of their time.

JISC – Transforming Assessment and Feedback