Insights from the Annual Course Reviews

The University of Sussex’s Annual Course Review (ACR) process provides an opportunity for review, reflection and evaluation of the delivery of our teaching and is a key part of the University’s quality assurance and enhancement framework. However, the ACR isn’t just about compliance; it’s also about continuous enhancement. Here we have pulled out just a few of the enhancement initiatives that Schools reported putting in place in 2022/23 to ensure that our courses remain student-centred, engaging and inclusive.


In the School of Education and Social Work (ESW), the MA/PGDip in Social Work has created an excellent mentoring scheme through which international and global majority home students can access independent mentoring from an experienced Black social work educator. The School has also been diversifying reading lists and strengthening inputs on anti-oppressive, anti-discriminatory and anti-racist issues in teaching and learning. ESW has created virtual learning and practice-development workshops on these themes and updated module learning outcomes to make this focus more explicit in assessment.

In the Science cluster, the School of Engineering and Informatics have sought to increase female representation at applicant visit days and other external engagement activities as part of the School’s strategy to recruit more female students. Staff recruitment panels are also now gender-balanced, with the aim of enhancing equal opportunities. These adaptations are intended to increase the female-to-male ratio in both the student and staff populations, which remains low across the sector. Additionally, the School of Life Sciences have created a BAMESci society which aims to provide support to all BAME students, with a focus on social, educational, development, leadership and communication themes.


All of our schools reported a strong focus on improving feedback processes. The School of Psychology began running bespoke in-person training sessions on marking and feedback for doctoral tutors, while the School of Law, Politics and Sociology (LPS) have produced a document titled ‘Sussex Law School Marking Criteria Guidance’ to help students better understand what is expected with respect to knowledge and understanding, engagement with sources, analysis and application, structure and presentation, and referencing.

The School of Global Studies are ensuring that all departments are embedding marking criteria within Canvas, as well as explaining the criteria in class. Similarly, the School of Life Sciences are ensuring that students are accessing their feedback and are properly aware of all the feedback opportunities provided to them and how best to make use of this information to further their development. ESW saw a great improvement in the consistency and clarity of feedback by ensuring that the feedback provided to students focuses both on strengths and areas for improvement.

As well as enhancing assessment feedback, we also saw improvements in collecting and acting on feedback from students. The University of Sussex Business School (USBS) created a School-wide feedback series which provided opportunities for staff and students to engage in conversations in an informal setting, supporting a sense of belonging and ensuring that student voices are heard in teaching-related matters. Meanwhile the School of Mathematics and Physical Sciences (MPS) have decided to switch from mid-term to early-term questionnaires, to allow feedback to be addressed quickly and in a way that is visible to students.

Embedding skills 

The School of Engineering and Informatics have been working closely with the Careers and Employability team to continue embedding employability in the curriculum. Within the School, Product Design have created a fantastic independent Canvas site where they engage students and staff with employability matters. MPS run a mandatory careers component which consists of weekly seminars and coursework as part of a Year 2 module. This helps improve the employability of students and the School is considering extending this initiative into Year 1 to enhance the exposure of the students to these skills. 

In addition, Global Studies trialled offering PGT students a dedicated series of academic core skills workshops, led by Director for Postgraduate Taught Programmes, Dr Lyndsay McLean, which were well attended. This has been formalised into a zero-credit module and will be included in students’ timetables in the upcoming academic year. 

In the School of Media, Arts and Humanities (MAH), students have been offered a number of experiential learning events. Each event was designed for a particular subject area in order to foster a wider sense of community, and the events included intensive writing groups and workshops; research celebration days; employability sessions; social events; field trips to the theatre, art galleries, performances and archives; visits to campus by artists, choreographers, writers, performers and people from industry; and collaborative projects such as filmmaking. Many of the events had a widening participation and/or employability-related aspect, and in certain cases involved students working with community, third sector and voluntary organisations.

Curriculum changes and diversification of assessment 

Regular reviews of and enhancements to courses have been a central theme. Psychology have implemented several changes to the curriculum this past academic year, including module changes to make courses more coherent and attractive and several new optional Year 3 modules to reflect the growth in faculty numbers. USBS have strengthened the rigour of the internal course review process and are continuing to work towards a completely integrated Assurance of Learning process.

Central Foundation Year have made a range of changes to modules which have had a positive impact on experience and performance. An example of this is the introduction of a problem-solving activity into weekly workshops that allows students to address challenges that have arisen during that week’s practical work.  

And last, but certainly not least, a number of Schools have sought to diversify the types of assessments that students experience during their studies. In the School of Global Studies, International Development have introduced blogs and podcasts as forms of assessment, while Geography have been using learning portfolios, policy briefings, lab reports, field reports, concept notes and presentations to support learners to develop transferable skills. USBS have been increasing the use of innovative assessment modes that enable students to evidence learning in various ways, including podcasts and business reports. Finally, in LPS, Sociology and Criminology ran a series of alternative assessment workshops for staff in the department which inspired a number of changes to module assessments.

Posted in Blog

Episode 1

The Learning Matters Podcast captures insights into, experiences of, and conversations around education at the University of Sussex. The podcast runs monthly, and each month is centred around a particular theme. The theme of our first episode is ‘scholarship leave’, and we will hear from Sue Robbins (Senior Lecturer in English Language) and René Moolenaar (Senior Lecturer in Strategy) as they discuss the experiences and outputs of their recent scholarship leave. 

Sue Robbins  

Sue Robbins is Senior Lecturer in English Language and Director of Continuing Professional Development in the School of Media, Arts and Humanities.  

René Moolenaar 

René Moolenaar is Senior Lecturer in Strategy at the University of Sussex Business School and Associate Professor at the University of Queensland. 


Listen to the recording of Episode 1.


Sue Robbins 

Robbins, S. (2024) Develop Your English: with the United Nations Sustainable Development Goals. Open Press at University of Sussex. Available at:   

Lütge, C., Merse, T., and Rauschert, P. (Eds.) (2023) Global Citizenship in Foreign Language Education: Concepts, Practices, Connections. Routledge. Available at:  

René Moolenaar 

Gardner, S.K. (2021) ‘Faculty learning and professional growth in the sabbatical leave’, Innovative Higher Education, 47(3), pp. 435–451. doi:10.1007/s10755-021-09584-4.   

Macfarlane, B. (2022) ‘The academic sabbatical as a symbol of change in higher education: From rest and recuperation to hyper-performativity’, Journal of Higher Education Policy and Management, 45(3), pp. 335–348.  doi:10.1080/1360080x.2022.2140888.  

Sibbald, T. and Handford, V. (2022) The academic sabbatical: A voyage of discovery. Ottawa, Ontario: University of Ottawa Press.  

Zahorski, K.J. (1994) The sabbatical mentor: A practical guide to successful sabbaticals. Bolton, MA: Anker Pub. Co. 

Posted in Podcast

Using video feedback to engage students with marking criteria

Clare Harris, Senior Teaching Fellow in Creativity and Design, and Alexandre Rodrigues, Lecturer in Product Design, explain how they implemented the use of screen recordings to enhance their feedback.

Clare has a background in design and extensive experience working in the creative industries. Her teaching primarily focuses on creative thinking, processes, and practices. Since 2016, she has been a part of the Product Design Team at the University of Sussex, where she teaches Drawing for Design, Experience Prototyping, Interaction Methods, and Toy and Game Design. Clare is also the module convenor for the Final Year Projects. Additionally, she has taught Design at the University of Southampton (Winchester School of Art), Brighton University, and the Open University. Clare lives in Hastings with her partner, her cat, Pywacket, and her dog, Moss. Her hobbies include pottery and border Morris dancing, and she is proud to be a member of the Hastings Punk Choir.

Alexandre is a Lecturer in Product Design in the School of Engineering and Informatics, Department of Engineering and Design. He received his PhD from Nottingham Trent University in 2019. He is a Sussex Education Award winner. His research thesis in sustainable production and consumption contributes to understanding how the social facet of socio-technical transitions can help replace the car culture status quo and provide opportunities for nudge policy action using Cultural Theory and the Theory of Interpersonal Behaviour. His current educational interests are in using Virtual and Augmented Reality as tools for teaching and learning in Product Design. Alexandre is a SCITECH C-REC committee member.

What we did

We started to use video feedback for Final Year Project supervision with our Product Design students. When marking, we create a screen recording as we go through the student’s Canvas submission, giving comments verbally and directing students to different areas of their work on the screen. Within Canvas you can click on the attached marking rubric, which appears next to the assignment. You can then use this to structure your feedback, taking it section by section as you’re marking, so the students are essentially seeing what you see when you’re marking.

We really talk them through the process as we mark, in terms of what they did well and where they didn’t do quite so well. As you are screen recording, you can also go back through the module’s Canvas site to remind them of, for example, one of the Padlet exercises they uploaded in week four. It just means that the feedback you’re giving becomes a lot clearer a lot quicker.

When compared to written feedback, video feedback is more nuanced to the individual submission and that student’s needs. It also allows for a greater degree of personalisation.

Why we did it

Initially, about three years ago, we shared a neurodiverse final year student, and we had to be really explicit and clear about what we expected of them, while obviously keeping a friendly tone. So we decided to experiment with video feedback for that student and realised the benefits quite quickly.

It felt like you could say an awful lot more and were able to say it in a very nice, encouraging way. We could actually pinpoint bits of the submission that needed improvement, conveying a lot more information succinctly.

Moreover, if English is not your first language it can sometimes be a struggle to write the right feedback with the correct tone. It can be worrying thinking that your writing might be misinterpreted and perceived as being more negative than was intended. There is great benefit to being able to hear an encouraging or more positive tone.


Time is needed to have a look around and experiment with different tools. You want something that is going to be easy to use and to edit if needed. We use ScreenFlow and PowerPoint, but there are so many different tools available. Your School’s Learning Technologist will be able to help.

One key element to have in place is a detailed rubric/marking scheme beforehand. This will allow you to stay focused while you record and helps to maintain consistency across your cohort. Your School’s Academic Developer will be able to support you in creating or updating your marking schemes.

Impact and student feedback

We’ve had some really positive feedback from students. One thing that has happened is that students have responded to our feedback; they’ve actually said thank you for it. Whereas normally your feedback goes out there and then that’s kind of it; it’s very much one way. Now there’s a dialogue between us and the students: they’re giving us feedback on feedback!

We’ve also noticed there’s less confusion around why students got a particular grade, or what they haven’t quite got right. As we are presenting their feedback alongside the criteria, students seem to have a better understanding of it. Now students are reading and engaging with the marking criteria ahead of their assessments.

Future plans

We intend to continue using video feedback across the different modules that we teach and to extend this to additional modules and assessment modes.

Top tips

  • Make sure that you have a rubric/marking scheme in place because that’s the thing that’s going to keep everything consistent, focused and fair.
  • Personalise your feedback: address the student by name and reference specific aspects of their work to show that you’ve engaged with their submission.
  • Use whatever software works best for you.
  • Keep it manageable, around 5-7 minutes, to maintain the student’s attention.
  • Speak calmly and keep it positive; the tone is really important.


Posted in Case Studies

Sussex Education Festival 2024 Programme

Please join us for the second Sussex Education Festival, an event for anyone involved in delivering education at Sussex. The event will be held over two days. You can attend the Festival in person (9.30am-4pm on 10 July) in the Woodland Rooms at the Student Centre and/or online (10am-3pm on 11 July). Please see the programme for further information.

The Festival will consist of a number of different session types, including panel discussions and interactive workshops, focused around themes such as alternative assessments, student engagement and wellbeing, generative AI and environmental sustainability. We look forward to seeing you there to celebrate all the amazing work that goes into teaching, learning and assessment here at Sussex! Please sign up via our registration form.

Posted in Blog

Guessing and Gender Bias in Multiple-Choice Quizzes with Negative Marking

Dr Matteo Madotto, Lecturer in Economics, University of Sussex Business School.[1]


When designing multiple-choice quizzes (MCQs), an important decision to make is whether or not to apply negative marking to incorrect answers. The main rationale for penalizing wrong answers is to discourage “guessing”, i.e. situations where students are very uncertain about the correct alternative and decide to answer more or less at random in the hope of getting it right by chance. Indeed, without negative marking rational students would have an incentive to attempt all questions, even those where they have absolutely no clue about the correct answer, since they would always get a positive score in expectation (e.g. Budescu and Bar-Hillel, 1993; Prieto and Delgado, 1999; Bereby-Meyer et al., 2002; Betts et al., 2009; Lesage et al., 2013; Akyol et al., 2016). On the other hand, one of the main concerns with negative marking is that it might end up being discriminatory against female students. Indeed, evidence suggests that females tend to be more risk averse than males, which, for an equivalent level of knowledge, may lead them to answer fewer questions and be unfairly disadvantaged when negative marking is applied (e.g. Burton, 2005; Espinosa and Gardeazabal, 2010; Lesage et al., 2013; Akyol et al., 2016).

In this short article, I present the results of five MCQs with negative marking taken by 900 undergraduate students at the University of Sussex Business School between 2021 and 2023, and analyze how these tests performed along the two main dimensions highlighted above, i.e. guessing and gender bias.

All quizzes contained 20 open-book questions, each of which had 4 possible alternatives with only one correct answer. The order of both questions and answers was randomized to reduce collusion among students. Each correct answer was worth 5 marks, each unanswered question was worth 0 marks, while each incorrect answer was worth -2 marks. The overall score was computed as the sum of the marks, with a minimum floor of 0. Students were made aware beforehand of this marking scheme. They, however, were given no strategic advice on when they should attempt a question, so as not to bias their choices in either direction. The negative marking of -2 ensured that a student with absolutely no clue about the correct answer to a question, i.e. a student who assigned an equal probability to each of the alternatives, would get an expected mark of approximately 0 (specifically -0.25) by answering the question, as is typically considered appropriate in the case of MCQs with negative marking (e.g. Budescu and Bar-Hillel, 1993; Prieto and Delgado, 1999; Bereby-Meyer et al., 2002; Lesage et al., 2013).
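The expected-mark arithmetic behind the -2 penalty can be checked directly. A minimal sketch, using only the constants from the marking scheme described above:

```python
# Expected mark for a pure guess on one question under the quiz's
# scheme: 4 alternatives, +5 for a correct answer, -2 for an
# incorrect one, 0 for no answer.
N_ALTERNATIVES = 4
MARK_CORRECT = 5
MARK_INCORRECT = -2

p_correct = 1 / N_ALTERNATIVES
expected_mark = p_correct * MARK_CORRECT + (1 - p_correct) * MARK_INCORRECT
print(expected_mark)  # -0.25: a pure guess loses marks in expectation
```

With 20 questions, a student guessing blindly throughout would therefore expect to lose 5 marks overall, so attempting a question is only worthwhile when the student can rule out at least one alternative.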

Guessing and gender bias

To determine whether random guessing remains an issue even when negative marking is in place, we look at the average ratio between the percentages of students who selected the most popular and the second most popular incorrect alternative per question, and that between the most and the least popular incorrect alternative per question. If students assigned an equal probability to all alternatives and answered completely at random, both of these ratios would be approximately equal to 1. As can be seen in Table 1, however, this does not seem to be the case for either males or females, regardless of the level of difficulty of the test.[2] On the contrary, most questions have both a popular incorrect alternative, which appears plausible to a relatively large number of students, and a very unpopular one, which is chosen by few of them. Specifically, from Table 1 we see that in four of the five tests the most popular incorrect alternative per question is chosen by a percentage of students which on average is about 4 to 10 times larger than that of the second most popular one, and 6 to 16 times larger than that of the least popular incorrect alternative.[3] Only in one quiz are the two ratios substantially lower (see more on this below). Of course, it is not possible here to determine how much of this is due to the presence of the negative marking itself; however, it appears that one of the main apprehensions surrounding MCQs, i.e. guessing by students, is rather limited when such a marking scheme is implemented. Those students who decide to answer and choose a wrong alternative seem to do so out of incorrect knowledge rather than no knowledge at all.
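As a sketch of how the two Table 1 ratios are computed, assuming per-question shares for the three incorrect alternatives (the numbers below are hypothetical, not the study’s data):

```python
# questions: list of [p1, p2, p3], the percentages of students choosing
# each of the three incorrect alternatives of a question.
def incorrect_answer_ratios(questions):
    ratios_second, ratios_least = [], []
    for shares in questions:
        top, second, least = sorted(shares, reverse=True)
        if second > 0:
            ratios_second.append(top / second)
        if least > 0:  # questions with a zero denominator are excluded
            ratios_least.append(top / least)
    # Average each ratio over the questions of the quiz
    return (sum(ratios_second) / len(ratios_second),
            sum(ratios_least) / len(ratios_least))

# Hypothetical three-question quiz:
print(incorrect_answer_ratios([[40, 8, 4], [30, 5, 3], [24, 6, 2]]))
```

Ratios near 1 would indicate answers spread evenly across incorrect alternatives (consistent with random guessing); the large values reported in Table 1 indicate the opposite.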

Table 1: for each test, the number of males and females, the average score and score standard deviation, and the average ratios between the percentages of students choosing the most popular and the second most popular incorrect answer per question, and the most and the least popular incorrect answer per question, reported separately for males and females.

Turning to the second main question of the article, we analyze whether MCQs with negative marking are discriminatory against females. Data on the gender of individual students were not available, so we used students’ names as a proxy for their gender. Summary statistics and two-tailed t-tests for total scores are shown in Table 2, and those for the number of unanswered questions in Table 3. In four of the five quizzes, neither the scores nor the number of unanswered questions of females differed significantly from those of males at any conventional significance level. In one quiz, however, females performed worse than males at the 1% significance level and left a larger number of questions unanswered at the 10% significance level.

Table 2: for each test, male and female average scores and score standard deviations, with the t statistic and p-value of a two-tailed t-test.

Table 3: for each test, male and female averages and standard deviations of the number of non-responses, with the t statistic and p-value of a two-tailed t-test.
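The comparisons in Tables 2 and 3 rest on a standard two-sample t-test. A minimal sketch with hypothetical scores, using an equal-variance Student's t statistic (in practice one would use a library routine such as scipy.stats.ttest_ind, which also returns the two-tailed p-value):

```python
from math import sqrt
from statistics import mean, variance

def two_sample_t(a, b):
    """Student's t statistic for two independent samples (equal variances)."""
    na, nb = len(a), len(b)
    # Pooled sample variance across the two groups
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(pooled * (1 / na + 1 / nb))

# Hypothetical total scores on one quiz:
male_scores = [72, 65, 80, 58, 90, 77, 68, 84]
female_scores = [70, 62, 78, 55, 88, 75, 66, 81]
print(round(two_sample_t(male_scores, female_scores), 3))
```

The t statistic is then compared against the t distribution with na + nb - 2 degrees of freedom to obtain the two-tailed p-values reported in the tables.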

A possible trade-off

As Tables 2 and 3 show, one of the most common concerns about MCQs with negative marking, i.e. that they may discriminate against female students, does not appear substantiated in most of our cases. However, comparing the results in these two tables with those in Table 1, we see that the only quiz in which females performed significantly worse than males and left a larger number of questions unanswered (namely test 5) is exactly the one in which students seemed most uncertain about the correct answers, as measured by the relatively low values of the two ratios in Table 1. It may therefore be the case that gender bias occurs precisely in those situations where random guessing is more likely and hence negative marking would be most useful. This may be because differences in students’ risk attitudes start playing a role exactly when students are sufficiently uncertain about the correct answer, i.e. when they assign similar probabilities to all alternatives.

To avoid this trade-off, it may be sensible to design questions such that at least one of the alternatives would appear highly unlikely to those students who possess a minimum level of knowledge, allowing them to assign higher probabilities to the remaining options. In this way, negative marking would discourage random guessing by those students with a very low knowledge level, without excessively reducing the incentives to answer of the more knowledgeable students, regardless of their risk attitude.


Akyol, S. P., Key, J. and Krishna, K. (2016) “Hit or miss? Test taking behavior in multiple choice exams” NBER Working Paper 22401.

Bereby-Meyer, Y., Meyer, J. and Flascher, O. M. (2002) “Prospect theory analysis of guessing in multiple choice tests” Journal of Behavioral Decision Making, 15(4), 313-327.

Betts, L. R., Elder, T. J., Hartley, J. and Trueman, M. (2009) “Does correction for guessing reduce students’ performance on multiple-choice examinations? Yes? No? Sometimes?” Assessment & Evaluation in Higher Education, 34(1), 1-15.

Budescu, D. and Bar-Hillel, M. (1993) “To guess or not to guess: a decision-theoretic view of formula scoring” Journal of Educational Measurement, 30(4), 277-291.

Burton, R. F. (2005) “Multiple-choice and true/false tests: myths and misapprehensions” Assessment & Evaluation in Higher Education, 30(1), 65-72.

Espinosa, M. P. and Gardeazabal, J. (2010) “Optimal correction for guessing in multiple-choice tests” Journal of Mathematical Psychology, 54(5), 415-425.

Lesage, E., Valcke, M. and Sabbe, E. (2013), “Scoring methods for multiple choice assessment in higher education – Is it still a matter of number right scoring or negative marking?” Studies in Educational Evaluation, 39(3), 188-193.

Prieto, G. and Delgado, A. R. (1999) “The effect of instructions on multiple-choice test scores” European Journal of Psychological Assessment, 15(2), 143-150.

[1] I would like to thank Ana Carolina Tereza Ramos de Oliveira dos Santos for her excellent work as research assistant.

[2] It is often hard to properly calibrate the level of difficulty of an MCQ, especially when it is administered for the first time, and indeed one of the tests turned out to be very difficult for students. Of course, similar issues can occur with or without negative marking. The presence of the latter, however, tends to amplify the impact of miscalibration to a certain extent.

[3] The average ratios in Table 1 can actually be thought of as lower bounds, since they are computed excluding those questions for which the denominator of the ratio would have involved an answer not chosen by any student.

Posted in Articles

Measuring educational gain through Assurance of Learning (AoL)

Farai is Associate Dean (Education & Students) in the University of Sussex Business School and Professor in the Department of Economics. She has several years’ higher education teaching experience in statistics, development economics and other applied economics topics. She has also worked for several international development agencies in the past.

There is increased emphasis in the UK higher education sector on measuring the impact of the education universities provide on students’ acquisition of the knowledge, skills and other competencies outlined in degree programmes. In 2023 the Office for Students asked education providers to set out what ‘educational gains’ they intend their students to achieve, what support they offer students to achieve them, and what evidence they have that students are succeeding in achieving them. While the Teaching Excellence Framework (TEF) focuses on measures of continuation, completion and progression, educational gain also encompasses areas such as knowledge, skills, personal development and work readiness. However, the definition of educational gain is quite open-ended and leaves room for providers to conceptualize and articulate their own interpretation of it in practice.

Accreditation is an important process for Business Schools globally. As part of AACSB (Association to Advance Collegiate Schools of Business) accreditation, Business Schools must demonstrate they have a systematic process of Assurance of Learning (AoL): demonstrating, through assessment processes, that students achieve the learning expectations of the programmes in which they participate. AoL involves the use of robust, systematic and sustainable assessment processes designed to improve student learning. It is about process improvement and can also be a key driver of curriculum change.

Curriculum alignment

One of the early steps in the Assurance of Learning (AoL) process is curriculum alignment, where school learning/competency goals and course[1] learning objectives/outcomes are mapped on the curriculum. The focus here is on the common learning experience of students enrolled on the course. Curriculum alignment is important as the mission of the school (that is, what the school does) must align with the education the school offers. Hence, it is crucial to ensure that the school’s competency goals (e.g., sustainability, responsible leadership, collaboration) are reflected in the curriculum in a manner that allows students to develop that skill, knowledge, or attitude.

Step 1: Conceptual

An important initial step of the AoL process is articulating the overall competency goals of the school. For example, the University of Sussex Business School (USBS) has five competency goals which stem from its mission statement, namely:

  1. Demonstrate appropriate discipline-specific knowledge using relevant methods and technologies
  2. Work effectively in a team
  3. Be responsible students and citizens
  4. Communicate effectively with different audiences
  5. Demonstrate the ability to work independently and apply critical thinking skills to develop innovative solutions

We expect the education we offer our students to enable them to acquire the above competencies by the time they graduate.

Step 2: Operational statements

The USBS offers many courses (i.e., degree programmes), each with its own learning outcomes/objectives. As part of AoL, course learning objectives are mapped to the school competencies outlined in step 1, making the curriculum alignment explicit, that is, the relationship between the Business School’s overall goals and objectives and what is offered to students at the course level. A course typically has additional bespoke learning objectives that do not necessarily map one-to-one to school competencies and which serve to differentiate courses from one another.

Step 3: 1st Measurement (opening the loop)

The next step of AoL is measuring whether we have delivered on our course learning objectives, that is, have our students acquired the course/degree level competencies we promised to deliver? Thus, an important aspect of AoL is to have a benchmark of what constitutes a course cohort having satisfactorily met the learning outcomes. For example, 80% of students achieving a pass mark on a course learning outcome at first attempt can be considered satisfactory performance. ‘Exceeding’ and ‘meeting’ the learning outcome are also differentiated. The former refers to a distinction mark while the latter refers to any pass mark below distinction. The benchmark for satisfactory performance does not have to be generic across courses. What matters is the rationale behind the benchmark.
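The benchmark logic above can be sketched in a few lines. This is an illustration only: the pass and distinction thresholds of 50 and 70, and the function name, are assumptions for the example, not USBS policy.

```python
PASS_MARK = 50       # assumed pass threshold
DISTINCTION = 70     # assumed distinction threshold
BENCHMARK = 0.80     # 80% of the cohort passing at first attempt

def cohort_summary(first_attempt_marks):
    """Summarise cohort performance on one course learning outcome."""
    n = len(first_attempt_marks)
    n_pass = sum(m >= PASS_MARK for m in first_attempt_marks)
    return {
        "pass_rate": n_pass / n,
        "met_benchmark": n_pass / n >= BENCHMARK,
        # 'Exceeding' = distinction mark; 'meeting' = any other pass mark
        "exceeding": sum(m >= DISTINCTION for m in first_attempt_marks),
        "meeting": sum(PASS_MARK <= m < DISTINCTION for m in first_attempt_marks),
    }

# Hypothetical cohort of ten students:
print(cohort_summary([45, 55, 62, 71, 80, 52, 68, 49, 74, 58]))
```

In this hypothetical cohort, eight of ten students pass at first attempt, so the 80% benchmark is met, with three students exceeding and five meeting the outcome.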

Direct measures

Assessments are a key indicator of the extent to which students have learned what we taught them. To determine course cohort performance on a course learning outcome select core module assessments are mapped to each course learning objective. In some cases, there may be capstone assessments offered at the course level which align with specific course learning outcomes. In addition, either formative or summative assessments could be used to measure learning outcome performance, or a combination of the two. Where there is no suitable core module, optional modules that are representative of the course cohort can be used instead.

In practice a whole assessment component can be a suitable direct measure. For example, an essay assessment can be a suitable measure for the learning outcome “Demonstrate an advanced understanding of management information systems using a range of concepts, theories and technologies”. In other cases, however, only certain aspects of the essay may be relevant to a particular learning outcome, and the marker must then distinguish within the marking which parts of the assessment relate to the course learning outcome. For example, a dissertation may be a suitable assessment to measure the learning outcome “Understand the role of ethics in the evolution of innovation, change and contemporary issues in management”, but it may not be appropriate to ascribe the whole dissertation mark to this outcome. Rather, only a component of it (e.g., 10 marks out of 100) may relate to that learning outcome, which implies the marking rubric must capture that component. Moreover, a student who fails the overall dissertation may not have failed the course learning outcome, as they may have achieved a pass mark for the component associated with it, and vice versa. It is important to understand that performance here is measured against the course learning outcome, which may be attached to a whole module, an assessment component, or an assessment sub-component. A further consideration is that, for assessments shared across different courses, the data must be aggregated at course level so that cohort performance is captured separately for each course.
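The dissertation example can be made concrete with a small sketch. The component name, weighting and pass thresholds below are illustrative assumptions, not a USBS rubric:

```python
# A dissertation marked out of 100, of which an assumed 10 marks are
# ascribed to the ethics-related course learning outcome.
TOTAL_MAX = 100
COMPONENT_MAX = 10
PASS_FRACTION = 0.5   # assumed pass threshold on both scales

def outcome_vs_overall(total_mark, component_mark):
    """The overall pass/fail and the outcome-linked component pass/fail
    are judged independently of each other."""
    return {
        "passed_overall": total_mark >= TOTAL_MAX * PASS_FRACTION,
        "passed_outcome": component_mark >= COMPONENT_MAX * PASS_FRACTION,
    }

# Hypothetical student: fails the dissertation overall (48/100) but
# passes the component linked to the learning outcome (7/10).
print(outcome_vs_overall(48, 7))
```

This is why the rubric must record the outcome-linked component separately: the course-level AoL data come from the component mark, not the overall mark.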

Indirect measures

The extent to which course learning outcomes have been met can also be measured using qualitative metrics (e.g., employer surveys, NSS scores, module evaluation questionnaires, alumni surveys, advisory board focus group discussions, etc.), referred to as indirect measures. Indirect measures can be a useful complement to direct measures and, if used appropriately, can provide valuable explanations for the findings from the quantitative results. Care must be taken to ensure that any sampling procedure for collecting qualitative data yields a representative sample.

This round of data collection is referred to as “opening the loop” in AoL language.

Step 4: Using the data to improve student learning

Where the target has not been met, the reasons can be investigated to identify the right ‘intervention’ to achieve improvement. There are two types of improvement: (i) process improvement (e.g., improving how to teach/assess, when to teach/assess; ‘systems’, etc.), and (ii) curriculum improvement (i.e., improving the syllabus, content, skills, knowledge, and competencies taught).

AoL is about being improvement-oriented rather than compliance-oriented. It is not a data collection project but a data usage project. That is, how can the data we have collected be used to improve student learning? How can the data be acted on?

If students have not developed the competencies faculty thought they would have from the curriculum taught, faculty can reflect and develop learning experiences that can be used to improve student performance on the learning goals. It may take some trial and error, but over time students can improve their skills in specific competencies because of thoughtful, data-driven curriculum development and management. The objective of AoL is to assure student learning.

Step 5: 2nd Measurement: closing the loop

This step involves a second data collection exercise akin to that described in step 3, e.g., one year later. By comparing the outcomes of this second round with the first, it can be determined whether the interventions in step 4 achieved the desired effect. The loop is then closed by having two data points (measurements before and after the intervention). “Closing the loop” does not imply that improved performance has been achieved. It is simply about having two data points to compare.
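The two-point comparison can be made concrete with a toy example; the target, learning outcome labels and percentages below are invented for illustration only.

```python
target = 0.70  # assumed benchmark: 70% of the cohort should meet the outcome

opening = {"LO1": 0.58, "LO2": 0.75}  # first measurement (opening the loop)
closing = {"LO1": 0.66, "LO2": 0.74}  # second measurement, e.g. one year later

for lo in opening:
    change = closing[lo] - opening[lo]
    status = "met" if closing[lo] >= target else "not met"
    # Report the before/after pair and whether the target is now met.
    print(f"{lo}: {opening[lo]:.0%} -> {closing[lo]:.0%} ({change:+.1%}); target {status}")
```

Note that, exactly as the text says, an upward change (LO1 here) does not by itself mean the target has been met; closing the loop only supplies the second data point for the comparison.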

In the case where no intervention was required when the loop was opened, we can assess whether this is still the case at the point of closing the loop. This process repeats over time in that closing the loop is akin to opening the loop for the next period (see illustration below).

Assurance of Learning (AoL) process

Continuous improvement process

Using the approach discussed above, we can continuously monitor our courses, the extent to which course learning outcomes are being met, and how effective interventions are. We can also learn what we are doing ‘well’ and find out how and why this is. The data can also help us identify whether there is a need to review learning outcomes to make them more ‘challenging’. For example, if we continuously exceed the target on the same learning outcome, we may need to adjust the benchmark. Or, given consistent satisfactory performance on the existing competency, we may decide to revise the learning outcome to offer a new competency to our students.

The AoL process is a continuous improvement process. The gap between opening a loop and closing it can be anything up to six years. In the USBS we have adopted a one-year gap for now, until we gain enough traction to widen it. Eventually this becomes a ‘self-driving’ process, enabling us to manage our curriculum effectively. The objective is to build a culture around what our students are learning, how we improve that learning, and how we work together to make that happen.

Effective AoL should lead to an improvement in student learning and raise the quality of graduates. AoL can also go a long way in addressing the intensifying pressure to develop data-driven responses to public demands for justification of investment in higher education.

Lastly, it is important to note that most educators already undertake the AoL exercise as part of their responsibilities in teaching and assessing students, making improvements based on past performance, and reflecting on current practice to inform future teaching and assessment. The AoL framework enables these processes to be captured in a more systematic, robust and sustainable manner. It also provides a holistic view at the course level and facilitates continuous curriculum and process management improvement.

[1] In this article, course refers to a degree programme.

Posted in Blog

A student led session on reviving curiosity and student engagement

Ismah Irsalina Binti Irwandy (left) and Liv Camacho Wejbrandt (right)

First year student reps, Ismah Irsalina Binti Irwandy and Liv Camacho Wejbrandt, co-developed and delivered a workshop for lecturers at the Engineering and Informatics Teaching and Learning Away Day. Here they explain the part they played in developing the session, insights from their survey of students and staff on engaging teaching, and what they learned from delivering the session.  

Ismah and Liv are happy to share the resources used in the session and to advise staff and students from other schools on developing their own activities. 

What we did

In November 2023 we responded to a call from our school’s Director for Teaching and Learning (DTL), Dr Luis Ponce Cuspinera, asking for student representatives to develop and deliver a session on reviving curiosity and student engagement for the Engineering and Informatics School teaching and learning away day in January 2024.  

How we did it

Luis started by sharing the aims of the 50-minute session, which were to help lecturers understand the kinds of approaches to teaching and learning Engineering and Informatics students found most engaging and to encourage them to think about how they might better encourage their students’ curiosity and provide even more engaging teaching sessions. Ismah, who was the first to sign up, primarily worked with Luis on developing the questions and activities for the session. Liv joined a little later and led more on developing the presentation and delivery of the session. Planning meetings with Luis ran for between 15 and 30 minutes each week, over around five weeks. 

The first step was to develop a survey for Engineering and Informatics students to find out the kinds of teaching they find most engaging. Ismah brainstormed a long list of questions and, with Luis’ help, whittled them down to five, which were then put onto Poll Everywhere and sent to all students via the School Office.  (The questions are provided below). 

Our approach to designing the session was to make it interactive and engaging and to demonstrate how we like to be taught! The final session comprised three sections: 

(1) The ice breaker ‘reflective activity’: 

We developed a ‘pass the parcel’ style game, which was designed to get everyone energized and in the mood. Each table was given a bowl of folded paper slips, each printed with a prompt. Some were really simple, like ‘Describe a teacher that inspired you’ or ‘Share an “aha” moment you’ve had while teaching’. Others were a bit more challenging, e.g. ‘How would you re-design your module to make it more engaging?’ or ‘Describe one of your modules as if to a 12-year-old’. 

On the day, we played music as the bowl was passed around the table and whoever was left holding it had a minute to pick out a slip and share their answer. At the end of the section, we asked someone from each table to volunteer to share their response to one of the questions with the room.  

(2) The ‘How well do you know your students’ activity: 

We used Poll Everywhere to ask each of the five student survey questions to the room. After each question we reviewed the lecturer responses then shared the results from the student survey and briefly picked out where there were similarities and differences.  

(3) The ‘Embedding curiosity and engaging students’ activity 

We wanted to ensure lecturers were given a chance to apply insights from the first two activities, so we then asked each table (team) to work together to embed curiosity and student engagement into a module. One volunteer (the leader) from each table was to describe in brief (3 minutes) one of their modules, the teaching methods, type of assessment, and how feedback is provided. The team then had to come up with ideas and suggestions for how the module could be changed to make it more engaging and inspire curiosity in relation to: 

  • Teaching delivery (teaching methods) 
  • Assessment types 
  • Providing feedback 

We gave them 10 minutes to discuss then opened up the floor for team leaders to summarise their proposed changes.  

How it went

We got close to 70 responses to the student survey by the time of the away day (and have had more since!). We think it helped that we wrote the email ourselves and insisted the poll sat at the top of the message (‘please do this poll – it will take 2 minutes’), followed by the explanation.  

On the day the session went well. We played to our strengths (Liv is used to being on stage so took the lead) but it was really good to be doing it together. We were concerned about balancing being fun and respectful while also challenging our lecturers. Happily, the audience was positive, and the active approach to the session made it easier for us overall. However, it also meant we had to deal with unexpected outcomes and be confident in encouraging responses from the tables. Also, while there were a few surprises in the outcomes of the student survey (including for us), it was great to see that there was also a lot of overlap and common ground.  

Liv concluded by urging lecturers to show their own love for their subjects and, for both of us, it was a rare opportunity to be able to say something we feel deeply about to lecturers.  

After the session we received lots of positive comments and had some great conversations, including with one lecturer who spoke with us for a long time asking about making his lectures more engaging. Also, we got a free lunch!  

Top Tips 

Our tips for other students are: 

  • It is definitely worth doing. Although it was a commitment at a busy time (we were studying for exams while developing the session), we felt the session had an impact and it made us feel like proper student representatives, particularly as, being first years, we hadn’t had many rep meetings by that point. 
  • You don’t have to start from scratch! We’re really happy for others to use and build on our approach and to chat with students and lecturers from other schools (see details of the activities from the session below and how to contact us).  

Comments and feedback 

“I was incredibly impressed by Ismah and Liv’s contribution to the content and delivery of this session and have been busy encouraging Directors for Teaching and Learning from the other Sciences Schools I support to follow suit with their own students. My only regret is that Ismah and Liv’s session didn’t kick off the Teaching and Learning away day because it was a brilliant example of an engaging and active learning session which brought real energy to the day while providing that all-important student perspective.” (Dr Sam Hemsley, Academic Developer) 


Pass the parcel questions

Student survey questions


Please direct all queries to Luis Ponce Cuspinera.

Posted in Case Studies

Behold the Seminars: Reflections on Student Feedback

Maria Hadjimarkou is a Lecturer in Biological Psychology at the University of Sussex School of Psychology.  She is a Fellow of the Higher Education Academy and a member of the SEDA Community of Practice for Transitions. She has several years of experience in Higher Education in the UK and abroad. At Sussex, Maria is involved in activities that promote public awareness of the role of sleep in health and wellbeing and encourages her students to get involved in scholarship activities such as co-authoring articles on sleep and wellbeing for young readers.  

It is challenging for students to feel part of a community when they find themselves in a large amphitheatre among hundreds of other students. Convenors of large modules, like me, acknowledge this as the downside of large cohorts. But here come the seminars! Based on student feedback, seminars can become instrumental in shifting things around. 

A feedback survey was launched during the second and third week of the Spring term in a large second-year core module. The module consisted of weekly lectures and bi-weekly seminars that focused on features of the material delivered in a lecture the week before. For these seminars, the students were split into groups of a minimum of 20 and a maximum of 50 students. The survey included module-specific questions but also tapped into various aspects of student experience. 

Based on students’ responses it became clear that we should be investing more in our seminars, as they have the potential to be transformative. The overwhelming majority (84%) of the students who took part reported that the seminars were a positive experience, and 74% reported that the seminars offered the best opportunity for them to interact with their peers and to develop a sense of community. As they pointed out, interacting with each other may not come naturally to some, but if the conditions are right, they will discuss ideas and experiences and feel connected. 

In addition to encouraging peer interaction and a sense of community, seminars were identified by students as helpful in understanding the lecture material and gaining a broader understanding of the concepts covered in the lecture. They commented on the seminars saying that they were rewarding and interesting and used adjectives such as ‘great’, ‘fun’, ‘thought-provoking’, ‘inclusive’ and even ‘excellent’. 

Of course, not all seminars are created equal, and this is something that also came through in the survey, as students made references to other seminar experiences that were not useful or fun. So, it is up to us, the convenors, to structure seminars in a way that will foster interaction and inclusion. Good seminars have the potential to engage students and enhance their understanding of the lecture material as well as the broader context to which the material relates, such as how it may apply to society or a particular field. Moreover, seminars allow students to express themselves and interact with their peers in a relaxed environment. It has been argued that seminars may help to ‘level the playing field’ in the sense that they help eliminate disparities for students who face disadvantages (Betton & Branston, 2022). In addition, attending seminars has been linked to better student performance (Betton & Branston, 2022; Marburger, 2001; Stanca, 2006). 

Based on student feedback from this survey, successful seminars need to have a few key ingredients:

  1. Appropriate readings in terms of both quality and quantity: too much material or readings that are too difficult are likely to demotivate students and result in a negative experience or disengagement with the material altogether.
  2. Appropriate activities which students will find fun and at the same time interesting, as they get to explore material that is relevant to the lecture, and beyond.
  3. Approachable tutors. Their approach, energy and demeanour are crucial, and they can greatly influence the climate in the room and the degree to which students feel comfortable to participate or not.
  4. All the seminar components (i.e. structure, activities, etc.) should allow space and time for interaction in a relaxed environment, which is what ultimately makes seminars fundamentally different from lectures.

So, it seems that the humble 50-minute seminar may hold the key to a lot of the ‘plagues’ we have been facing in Higher Education, especially following the Covid-19 pandemic and the general drop in student engagement. It is worth our time to plan and structure seminars carefully, as they may be the unsung hero of large cohorts such as mine. Moreover, feedback surveys are vital in helping us understand how students perceive our teaching approaches, so that we can adjust and steer our efforts towards more effective learning and a better teaching experience.


Posted in Blog

Assessment in a world of Generative AI: What might we lose?

Dr Verona Ní Drisceoil, Reader in Legal Education, Sussex Law School


For the most part, assessment in higher education is viewed in the negative as opposed to the positive. It is something to be endured, worked through, marked and managed. Assessment causes significant anxiety and stress for students and staff alike. Amid dealing with a cost-of-living crisis and increased mental health challenges, students are under significant pressure to achieve a ‘good degree’ to be able to progress to further study or work. For teachers and professional service staff, the pressure to mark and process hundreds of submissions in short timeframes makes assessment an incredibly challenging period. In addition, most higher education institutions in the United Kingdom score poorly on assessment and feedback in the National Student Survey (NSS), making assessment a major headache for university management teams (see also Harkin et al, 2022). And on top of all of that, we now have generative AI to contend with and the challenges that brings for assessment.

In this short article, I want to offer some reflections, and provocations, on the framing of assessment in higher education. Specifically, in thinking about what might be lost in the new reality of generative AI, I propose that we should think about, and indeed frame, assessment from the perspective of empowerment. In this respect, I advocate for assessment as, and for, empowerment. In an ideal world our assessments, notwithstanding the pressures mentioned above, should empower students, or at least have the potential to empower. They should encourage our students to be authentic, to show agency, to grow in confidence, and develop a range of transferable skills including, in particular, evaluative judgement (See case study assessment example). In this regard, assessment as empowerment as a conceptual frame builds on “assessment for learning” (Sambell, McDowell & Montgomery, 2013) and “assessment for social justice” (McArthur, 2023). Could assessment designed for empowerment, I ask, make life better for everyone; for teachers, for students and even our NSS scores?

The article will begin by reflecting on ‘assessment’ and ‘empowerment’ in a world with generative AI before then offering some initial thoughts on what assessment as, and for, empowerment might look like. I will conclude with some takeaway questions that might help us to think about assessment more meaningfully as we navigate the impact of generative AI on education and on life more generally. In contrast to the dominant narratives circulating on generative AI (on productivity, on efficiency, on saving time and making money), I question what might be lost in all of this in terms of humanity (see further Chamorro-Premuzic, 2023) and strongly encourage a deeper questioning and critique of generative AI and what it means for future generations. This does not mean I am a Luddite and afraid to engage with AI (yes, I know AI is here to stay!) but rather that I want to maintain a critical standpoint. I want to focus on value – and what matters in life and in education. How is AI changing our lives, values, and fundamental ways of being? Drawing on the work of Bearman et al. (2024:1), I too argue that in an educational context we have a collective responsibility to ensure that humans (our students) do not relinquish their roles as arbiters of quality and truth.

Reflecting on the value of assessment

Without question, the ever-expanding presence of generative AI has challenged us as teachers and educators. It has made us uncomfortable. We no longer, to quote Dave Cormier, have the same power in assessment. This is unsettling. The presence of generative AI forces us as teachers to reflect on our roles, on how we design teaching and how we design assessment. It forces us, if we take the opportunity, to self-assess and ask: what is the value of assessment? (See further McArthur, 2023; Cormier, 2023.) What do we value as teachers and what skills do we want our students to achieve through assessment? How can we empower our students through assessment? This questioning, I argue, should not start with how to design an assessment that will beat generative AI. For me, that is not a pedagogically sound starting point. Moreover, it is completely pointless, as “the frontier moves so fast” (Dawson, 2024). I would encourage all teachers to stop for a moment and ask: what do you really want students in your module/discipline to leave university with? This is a great opportunity to really reflect on that question and to go back to basics, as it were. Not every module should, I argue, be trying to teach everything to, and with, generative AI in mind. That, for me, is a dangerous path to take. As academics and critical scholars – in universities – let us remain critical. Ask questions about the impact of using these models, the impact on humanity and the impact on learning through process and doing. Challenge the status quo. Who is driving the AI narrative and why? Why should we care as educators?

Empowerment in Assessment

According to the Oxford English Dictionary, empowerment speaks to agency, autonomy, and confidence. It notes that empowerment is “the fact or action of acquiring more control over one’s life or circumstances.” To empower someone is “to give (a person) the means, ability, or strength to do something”; to enable them. For me, assessment should be about offering a space for growth, to develop skills through process, to feel empowered through doing. This concept of assessment for empowerment can be seen to build on the concept of assessment for learning mentioned earlier. For Sambell et al. (2013), assessment for learning is focused on the promotion of effective learning. They note that, “what we assess and how we assess indicates what we value in learning – the subject content, skills, qualities and so on.” (Sambell et al. 2013:8). Assessment then should promote positive, and empowering messages about the type of learning we require (Sambell et al. 2013:11). It is focused on process and not just product.

As I write this piece, I am wondering whether one should even mention empowerment in the same sentence as generative AI. Perhaps one should. Some argue that using generative AI is empowering. Many have noted that generative AI helps you get started, builds confidence, and so on. Quite literally, these large language models put words on the page – and very quickly at that. It is hard not to be tempted by these tools. We have also heard that many students (36% in a survey of 1200 students) use generative AI as a personal tutor (HEPI, 2023). One might argue that there is agency and growth here. Perhaps. But is it on a superficial, artificial level? Is it a matter of degree? Does it matter? I think it does and should.

Yes, generative AI may save a great deal of time on a task: it produces text, it puts words on the page, and it speeds up the process. But then where is the learning by doing? Is the purpose of assessment, and of higher education, now to support students to use a large language model with no authenticity and questionable accuracy? Generative AI platforms are not truth machines; they hallucinate. If we adopt a view and approach that positively embraces generative AI (allowing its use to produce an output), we are valuing the product and not the process. Adopting such an approach does not guarantee that students will develop and enhance the higher order thinking skills we should value so much in higher education. Chamorro-Premuzic (2023: 4) reminds us that generative AI could dramatically diminish our intellectual and social curiosity and discourage us from asking questions. We still need to teach our students key processes and knowledge and equip them with the ability and skills to critique, evaluate and judge outputs. As Bearman et al. (2024:1) note, “university graduates should be able to effectively deploy the disciplinary knowledge gained within their degrees to distinguish trustworthy insights from those that are ‘hallucinatory’ and incorrect”.

But generative AI levels the playing field…

I have some sympathy for the argument that generative AI helps “level the playing field” in higher education. Specifically, that it helps students whose first language is not English to be able to access and understand teaching materials and assessments. However, I am still unconvinced this is a sufficient rationale to positively embrace generative AI tools in teaching and assessment without deeper critique and questioning of what may be lost pedagogically. If anything, this framing highlights how poor we are at supporting students whose first language is not English in (UK) universities. It could be argued that embracing generative AI to the extent being advocated allows universities to shirk certain responsibilities. Does a positive embracing of generative AI allow us to gloss over certain areas that we have failed at, as a sector e.g. supporting language proficiency in a meaningful way?

Assessment as Empowerment in a world with generative AI – some thoughts

So, what does assessment as empowerment look like in a world with generative AI? Is it even possible? I hope so. The following offers a few thoughts on where we might focus the discussion to achieve assessments that empower our students to continue to be arbiters of truth while developing a range of transferable and empowering skills:

  1. Value + programme level assessment: I suggest we need to start this conversation from the perspective of value – what is the value of assessment? – and then have joint, but critical, conversations at programme level. I am concerned about siloed responses to generative AI without wider department and school level conversations. For some departments and schools it might be essential to embrace generative AI tools across teaching and in assessment (e.g. in computer science), but for others (e.g. law, my discipline) it is, I argue, essential that we now have programme level conversations about the value of assessment and what skills, literacies and practices we want our students to take away from their degree experience. Should our students decide to go into legal practice, what skills would society wish our future lawyers to have? They will still need to be able to write well, formulate an argument, show attention to detail, detect accuracy and truth in text sources, orally convince, persuade, and advocate. If you have not formulated your own argument, it will never be as persuasive as one self-generated. Generative AI, by removing the process of formulating an argument, robs one of the opportunity to truly develop persuasive advocacy skills. I do not believe that wider society would wish legal advice to be generated by AI. And even if it were, they would want someone with the skills and evaluative judgement to know what is correct and not simply a hallucination. There is, to quote Bearman et al. (2024), an urgent need, in this new reality, to develop students’ evaluative judgement. Students need to be able to judge the quality of their own work and that of others.
  2. Oracy/Oral based assessments/mini-Vivas: One type of assessment I would like to see much more engagement with is oral based assessment. Perhaps, now is the time to embrace that. Voice work, and the ability to present confidently is such an important skill and one that I think we should place a much higher value on. We neglect this form of assessment in higher education in favour of written based assessments. This, I argue, is also problematic in terms of helping all students to build social capital. Within my own law module, my move from a traditional ‘write a case-note law essay’ to an oral case briefing assignment (with a ‘you are a trainee solicitor’ positioning) was based on a desire to ensure all students in my module had the opportunity to develop voice work skills within the module and not just in extra-curricular activities as can often be the case in law schools. I argue that maintaining these activities as ‘extra’ results in only those in more privileged positions being able to take part. Students who work, commute, or care for others are often excluded from these activities. That needs to change. Now might be the chance to do so. For those that argue that this is not possible at scale, I disagree. I have been able to implement the development of voice work skills into a core module of 350+. Yes, it is a challenge but with good module design and a good teaching team it can be achieved. 
  3. In person open book exams: Whilst I appreciate there are valid arguments about the problematic nature of in person exams in terms of stress and anxiety, and that they do not reflect real work practices, do in person open book exams offer a compromise and a way forward? Academic integrity scholars (see Philip Dawson, Cath Ellis) certainly argue that in person exams should be in the mix of programme level assessment. I wonder, too, if part of the resistance to going back to in person assessment is more about cost than an absolute commitment to accessibility and inclusion. Reasonable adjustments can still apply to in person assessments. They did before the pandemic and can again.
  4. Use more bespoke rubrics: Another response in the short term might be to add more bespoke marking rubrics for assessments. This might mean not adding any weighting for structure, grammar and syntax but adding a much higher weighting for other elements to reward process and engagement with materials, accuracy (on the law for example) and sources that are not merely artificial. I have used a bespoke weighted rubric in my module for the last two years and it works very well. Linked to points made above, there is also a section for evaluative judgement within that rubric. As part of the case briefing assessment, students are asked to provide an evaluative judgement post presentation as they would to their ‘supervising solicitor’ in a legal practice setting.
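A weighted rubric of this kind reduces to a simple weighted sum in which some criteria can be deliberately zero-weighted. The sketch below is a hypothetical illustration only: the criteria names, weights and marks are invented and are not the rubric described above.

```python
# criterion: (weight, mark out of 100) -- all values hypothetical
rubric = {
    "accuracy_of_the_law":      (0.40, 72),
    "engagement_with_sources":  (0.30, 65),
    "evaluative_judgement":     (0.30, 58),
    "structure_grammar_syntax": (0.00, 80),  # deliberately carries no weight
}

# Weights should sum to 1 so the overall mark stays out of 100.
assert abs(sum(w for w, _ in rubric.values()) - 1.0) < 1e-9

overall = sum(w * m for w, m in rubric.values())
print(f"Overall mark: {overall:.1f}")  # 65.7 with these illustrative numbers
```

Shifting weight between criteria in this way is how the rubric rewards process and engagement over surface polish: here the polished structure mark (80) contributes nothing, while evaluative judgement pulls the overall mark down.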

Takeaway questions

  1. What is the value of learning?
  2. What is the value of assessment?
  3. What skills, literacies and practices do you want your students to take away from your module and through your assessment?
  4. How can you design your assessment to empower your students; to help your students to be authentic, to grow and to develop confidence through doing and being?
  5. Think about process not just product.
  6. What role can evaluative judgment play in your teaching and assessment design?


For me, it is essential that universities and educators continue to approach the question and impact of generative AI in education from a critical standpoint. Let us, I advocate, remain critical. Ask questions. Challenge. What might be lost? Who is driving the AI narrative, and why? Why should we care as educators? What are we complicit in? As I have stated above, always consider the structures within which these shifts are happening. The dominant narrative and retort of “AI is here to stay so get over it” is not enough of a reason not to ask questions of what might be lost here. It is appreciated that universities are businesses too, and within a business model it might seem neglectful not to approach this debate from the perspective of ‘let us not be left behind’, but we have a collective responsibility to be more critical, to question and to challenge the narrative. Given the rate of change in this area, it is incumbent on us as educators to ask what might be lost in terms of truth, justice, humanity, and real connection.

As Esther Perel reminds us, the AI we should be most concerned with is artificial intimacy – the lack of real connection and authenticity in how we show up in the world because of the negative impacts of technology. In a world of hyper-connectivity, we are often not connected at all. I can, as I am sure others can, attest to the negative impact technology has had on my life in terms of being present and meaningfully connected. At a time in higher education when mental health issues are at an all-time high, and confidence, community and belonging are at a low point post pandemic, is a world with generative AI, I ask, going to have a positive or negative impact on connections, relationships, authenticity, truth, and humanity? We are, as human beings, wired for real connection – and hopefully authenticity as well, with all the flaws and vulnerability that come with that. We are not robots. 
If we embrace generative AI to the extent that is being encouraged by corporations, influencers and even universities, what might we lose?


Ajjawi, R., et al. (2023). From authentic assessment to authenticity in assessment: Broadening perspectives. Assessment & Evaluation in Higher Education, 1–12. [Online].

Arnold, L. (n.d.). Retrieved from

BBC News. (2023, May 27). ChatGPT: US lawyer admits using AI for case research. Retrieved from

Bearman, M., Tai, J., Dawson, P., Boud, D., & Ajjawi, R. (2024). Developing evaluative judgement for a time of generative artificial intelligence. Assessment & Evaluation in Higher Education, 1–13. Retrieved from

Chamorro-Premuzic, T. (2023). I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique. Harvard Business Review Press.

Compton, M. (2024, April 16). Nuancing the discussions around GenAI in HE. Retrieved from

Cormier, D. (2023). 10 Minute Chat on Generative AI. Retrieved from [See series with Tim Fawns.]

Dawson, P. (n.d.). How to stop cheating. Retrieved from

Dawson, P. (2021). Defending Assessment Security in a Digital World. Routledge.

Freeman, J. (2024, February 1). Provide or punish? Students' views on generative AI in higher education (HEPI Policy Note 51).

Harkin, et al. (2022). Student experiences of assessment and feedback in the National Student Survey: An analysis of student written responses with pedagogical implications. International Journal of Management and Applied Research, 9(2). Retrieved from

McArthur, J. (2018). Assessment for Social Justice: Perspectives and Practices Within Higher Education. Bloomsbury Publishing Plc.

McArthur, J. (2023). Rethinking authentic assessment: Work, well-being, and society. Higher Education, 85(1), 85.

Tai, J., et al. (2023). Assessment for inclusion: Rethinking contemporary strategies in assessment design. Higher Education Research and Development, 42, 483.

Posted in Articles

The Evidence-Informed-Teaching Infographics Project: a novel and engaging way to communicate scholarship

Sue Robbins is Senior Lecturer in English Language and Director of Continuing Professional Development in the School of Media, Arts and Humanities.

Evidence-informed Teaching

Whereas research-informed teaching refers to the different ways in which students are exposed to research content and activity during their time at university, evidence-informed teaching refers to the teaching practices that research has shown will have the greatest impact on student learning.

Evidence-based practice is an approach that focuses practitioner attention on the use of empirical evidence in professional decision-making and action. As teaching practitioners, we draw on a range of sources of teaching knowledge, amassed over time. Evidence-informed teaching involves bringing together research from the Scholarship of Teaching and Learning (SoTL) with context and experience to see what works for us and for our learners.

The way evidence works to inform teaching and learning may not always be straightforward, perhaps because learning is the result of such a huge number of interactions, and research findings may sometimes be difficult to implement because it is unclear how to transfer the skills and expertise of teaching. But familiarity with relevant research evidence can help us think about the methodology we select to underpin the design of a module and help students achieve the learning outcomes, including theories of learning, our choice of materials, and classroom procedures. There's useful information for Sussex colleagues on Educational Enhancement's SoTL webpage.

It is also the case that claims for the efficacy of a particular practice can sometimes be made without a broad enough evidence base, leading to the overgeneralising of concepts and ideas. The 'Flipped learning remains under-theorised' infographic synthesises the findings of a scoping review, which found that despite the rapid growth in the number of articles about flipped learning, most failed to elaborate theoretical perspectives, making an analysis of its efficacy difficult.

The Evidence-informed-teaching Infographics Project

The Evidence-Informed-Teaching Infographics Project is a collaborative scholarship project designed to create a set of infographics which synthesise SoTL research in an attempt to bridge the research–practitioner divide. Sussex colleagues can join this cumulative knowledge-building project, and add to the shared evidence base that can positively impact student learning, by contributing an infographic summary of a journal article that relates to their own interests or teaching practice.

The project began in the School of Media Arts and Humanities and is now expanding to all Schools. The infographics completed so far have been published on the MAH Scholarship blog, and other publishing opportunities will be available as the project expands, offering you an audience for your scholarship.

Reasons to join in

Evidence-informed faculty can make significant contributions to learning, teaching, assessment, and scholarship in their Schools and institutions. A recent post on the WONKHE blog makes the point that ‘robustly evidence-informed education is fundamental to supporting the development of ethical, sustainable and inclusive pedagogies to support learners.’

The Evidence-Informed-Teaching Infographics Project can play to your interests, wherever they lie. You might take a key journal article to summarise because you’ve noticed something in your own practice or teaching context that you’d like to know a bit more about; or you’ve been doing something for a while and want to see what current research says about it. Or it could be that you begin with the literature and identify something that you’d like to try out in your teaching or use it to adjust something you have been doing to better reflect research findings.

Summarising the research on an aspect of teaching and learning can help you distil your thinking, and sharing the infographic with colleagues to inform their practice is a generous way of passing on knowledge. The activity is manageable in size and can be a good way to find out more about teaching-related research, both through creating your own infographic and by reading those created by colleagues.

Keogh et al. (2024), writing about their own project designed to share health research with the public, comment on the huge potential of infographics to communicate SoTL to various stakeholders as summarised in this infographic:

10 Ways infographics can support scholarship of teaching and learning

1: Visualizing Data
Present data from studies, surveys, or assessments to visually represent trends, patterns, and statistics.

2: Summarizing Research
Display concise and engaging research summaries to highlight key points and takeaways.

3: Explaining Theories
Break down complex pedagogical theories & concepts into visually appealing, understandable elements.

4: Sharing Best Practices
Provide practical tips and best practices based on research findings.

5: Comparing Teaching & Learning
Compare different teaching and learning approaches, outlining the pros and cons of each.

6: Promoting Reflection
Present data on student outcomes or feedback to help instructors assess their teaching and make data-driven improvements.

7: Communicating Professional Development
Provide teachers with concise and memorable takeaways from professional development activities.

8: Disseminating Scholarship
Share on social media platforms and websites to reach broader audiences.

9: Supporting Research Proposals
Use in grant proposals to enhance the readability and visual appeal of the project and expected outcomes.

10: Engaging Students
Integrate into classroom instruction to engage students visually and enhance their understanding of complex topics.

Creating and sharing your infographic

To create the infographics, colleagues in MAH have used the design tool Canva, which offers free access to educators. Canva offers a huge range of editable templates, and once you have synthesised the key points of your article and can see how many sections you need, you can pick an appropriate template and copy your content into it. It's worth noting that not all of the Canva templates meet the expected accessibility requirements, but the Educational Enhancement team are happy to advise.

Get in touch

It's a great project to be involved with. If you are interested, get in touch with Sue Robbins, Senior Lecturer, Department of Languages, MAH, or with Sarah Watson, Academic Developer. All welcome!


Black, K. (2024). Doing academic careers differently. Retrieved from

Educational Enhancement, University of Sussex (n.d.). Scholarship of Teaching and Learning. Retrieved from

Keogh, B., Nowell, L., Laios, E., McKendrick-Calder, L., Lucas Molitor, W., & Wilbur, K. (2024). Using infographics to go public with SoTL. Teaching and Learning Inquiry, 12. Retrieved from

Robbins, S. (2023). Flipped learning remains under-theorised. Retrieved from

School of Media, Arts and Humanities, University of Sussex (n.d.) Scholarship in Media Arts & Humanities. Retrieved from

Posted in Blog, Uncategorised