Episode 1: Scholarship leave

The Learning Matters Podcast captures insights into, experiences of, and conversations around education at the University of Sussex. The podcast is hosted by Prof Wendy Garnham and Dr Heather Taylor. It runs monthly, and each month is centred around a particular theme. The theme of our first episode is ‘scholarship leave’, and we will hear from Sue Robbins (Senior Lecturer in English Language) and Dr René Moolenaar (Senior Lecturer in Strategy) as they discuss the experiences and outputs of their recent scholarship leave.

Sue Robbins  

Sue Robbins is Senior Lecturer in English Language and Director of Continuing Professional Development in the School of Media, Arts and Humanities.  


René Moolenaar 

René Moolenaar is Senior Lecturer in Strategy at the University of Sussex Business School and Adjunct Associate Professor at the University of Queensland.

Recording

Listen to the recording of Episode 1.

Transcript

Wendy Garnham 

Welcome to the Learning Matters podcast from the University of Sussex, where we capture insights, experiences and conversations around education at our institution and beyond. Our theme for this episode is scholarship leave, and our guests are René Moolenaar, Senior Lecturer in Strategy, and Sue Robbins, Senior Lecturer in English Language. My name is Wendy Garnham. I’m Professor of Psychology and Director of Student Experience for the Central Foundation Year Programmes, and I’m your presenter today. Welcome, everyone.

Okay, so my first question, which I’m going to direct to you, Sue, is what is scholarship leave and what prompted you to apply?  

Sue Robbins 

Thanks, Wendy, and thanks for inviting me. So at Sussex, colleagues on the Education and Scholarship track who are undertaking scholarship can request a semester of scholarship leave every three years. I was awarded a period of leave from August last year to January this year, and I used it to complete the manuscript of an e-textbook and to simultaneously prepare it for publication. Over the last couple of years I’ve been using some of the time allocated to me for scholarship (we have a 20% allowance built into our contracts) to write a textbook for learners of English as an international language, and I used the period of leave to complete the manuscript. It was published in January this year, and it’s called Develop Your English with the United Nations Sustainable Development Goals, or Develop Your English for short.

So it’s an e-textbook for upper intermediate to advanced level English language learners, pitched at approximately IELTS 6 to 7.5, if that’s something you’re familiar with, so it’s appropriate for undergraduate and postgraduate students. And I published it with Open Press at University of Sussex, so it’s published under a Creative Commons licence and is free to access and use. The book integrates scholarship and professional practice in that it combines the application of second language acquisition theory to practice with the affordances offered by digital technology. I could say loads more about it but I probably won’t.

René Moolenaar 

Thank you, and thank you for inviting me. So for me, what prompted me to apply was that I’d been with the School [University of Sussex Business School] for about 11 years at the time; I did my scholarship leave in the autumn of 2022. I’d also just finished the role of Director of Student Experience, which is a rather challenging role, and I’d just finished my doctorate in business administration, and I thought: this is the time to apply for scholarship leave and to continue my scholarship, based on the thesis that I wrote for my doctorate.

I thought it would be nice to do the scholarship leave at another institution, to get experience of another university. Of course I could have stayed in England with the University, but I decided to go to the other side of the world and managed to get a place at the Australian Centre for Water and Environmental Biotechnology at the University of Queensland, which is a bit of a mouthful, and a strange home for somebody who has worked in the Business School since 2011. So yeah, this was kind of interesting.

Wendy Garnham 

Really quite diverse experiences, I think, during the period of scholarship leave. Could you give us an idea of what you feel you achieved during that period?

René Moolenaar 

Yeah, so I took the topic of my thesis, which is all about how placements can support the development of a broader university-industry relationship, which is also how I pitched it to the Centre at UQ. For them that particular topic was really important; they do a lot of work together with industry. I applied the framework I had developed to the local situation, and then presented my original findings, and the findings from the study that I did there, to the Centre, which they very much appreciated. They liked it a lot, given their interest in this particular field, and they are actually in the process of applying the framework that I developed.

Wendy Garnham 

It sounds as though it was a very beneficial period of leave, but to both of you I just want to pose the question: were there any unexpected outcomes from your period of scholarship leave?

Sue Robbins 

Unlike René, I wasn’t on the other side of the world; I was sitting at home in my second bedroom working away at my computer. There were many, many different aspects to the management of this project, but what was really great was to have a single focus, because in a lot of our working lives we manage so many different things on a daily basis that being allowed to just think about one thing for six months was a huge benefit, and I think it would have been difficult to complete the project without the scholarship leave. An unexpected outcome is that I made an early decision to turn all of the written tasks I’d made into interactive ones, because I was working on a digital platform that allowed for interactivity. And not exactly inadvertently, because it had been at the back of my mind, but I did give myself a substantial amount of extra work by making that decision.

Wendy Garnham 

It’s definitely a gold star for my active learning interest.  

Sue Robbins 

Yeah. The book is fully interactive. I used the content creation tool H5P, so all of the tasks are interactive and all of them have instant feedback so that students can self-assess at every stage how well they’ve done with every task. So it was definitely worth doing, but it hadn’t formed part of the original plan.  

Wendy Garnham 

How easy was it to use H5P? 

Sue Robbins  

It’s fairly straightforward. I’ve used it before, so I was very familiar with it. I’d created a short online course for the Department of Language Studies a couple of years ago in which I’d used it. I think of all the content creation tools, H5P is the easiest to get to grips with. There is a learning curve, but it doesn’t stress you too much to understand how the tool works.

The only thing really is that the back end of the tool looks completely different to the front end, so you have to get used to that. But I’ve tried a range of content authoring tools and for me this one is the easiest to get to grips with.

Wendy Garnham 

René. How about you? Any unexpected outcomes?  

René Moolenaar 

Yeah. The very pleasant and unexpected outcome was that I was offered an adjunct associate professorship at the University of Queensland (UQ), which was very nice. I had clearly not anticipated that, but on the penultimate day of my stay there I had a meeting with the director of the Centre and they suggested that I should become an adjunct. After some discussion, and trying to understand a bit more what that would actually mean and how much work I would need to do, we agreed that I should do it; I went through the formal process and was offered the position, I think, a couple of months later.

Wendy Garnham 

Thinking ahead in terms of the impact of your scholarship leave, what impact has your scholarship leave had on the academic community or what impact do you hope it will have?  

Sue Robbins 

Yes, so publishing with Open Press here at University of Sussex, rather than with one of the big English language teaching publishers, which I’ve done in the past, gave me much more freedom in the design of the material. And Develop Your English is innovative in the field of English language teaching in that the content incorporates global perspectives into the language learning process, because it focuses on international themes which I’ve organised around each of the United Nations Sustainable Development Goals.

It gave me an opportunity to think about how language education can play an essential role in sustainability literacy, because the potential market for the book is enormous and sustainability literacy can be developed through the use of the textbook. So that’s something that could potentially shape the field in a way it hasn’t been shaped before. And in terms of international impact, it’s also the case that the cost of English language textbooks is prohibitive in many low-income parts of the world, which leaves very many learners without access to good quality material.

So using this digital platform offered via Open Press, there’s the potential of reaching those learners that isn’t there currently. And to make that more possible, the book can now be offered in a range of formats. It is essentially an e-textbook, but it can also be accessed in EPUB format, which means it can be used offline, and this is important because many learners in many parts of the world, if we think about digital poverty, don’t have access either to a good internet connection or to hardware. So it can be used as an e-textbook, it can be used offline as an EPUB, and it can also be downloaded as a PDF, which helps in places where people really don’t have good access to the things they need: assuming one-time access to a printer, it can be downloaded, printed out and used as a hard copy. Obviously that loses the interactivity, but there’s a full answer key in that version. So thinking about sustainability literacy, which is important, and thinking about access and equity, it has the potential to address both of those things. It ticks so many boxes that are real hot topics within scholarship at the moment, so I’m sure it’s going to have a really big impact on the community.

The difficulty will be promoting it. Open Press is a small press and it’s very new, so how we get people to know that the book is there will be the next challenge to overcome.

Wendy Garnham 

Looking at impact, René, how about you? What impact do you hope your leave will have?

René Moolenaar 

Yeah, it’s a good question. So the original framework I developed was based on the study I did at Sussex. And whereas I thought it was already fairly complete, going over to a different university on a different continent, in a different part of the world, it was interesting, through the data collection process and through reflecting on and challenging my own framework, to find out that it wasn’t. Although good parts of it survived, clearly there were elements that needed updating, partly specific to the local environment but also clearly with a much broader application as well.

So I ended up with an improved version of the framework, which is now being applied, which is kind of interesting. It’s interesting to pick something up that you developed in one situation and then take it to another. In this case we went from a Business School to, in fact, a centre housed in the Information Technology and Chemistry department: an entirely different situation. And then to see how well your framework can be applied to a very different situation. So that was an interesting process, and it led to an improved version of the framework.

Wendy Garnham 

It sounds from both of your perspectives, as diverse as your experiences were, that you’ve really gained a lot from it academically and professionally. How about personally? What was the experience of being on scholarship leave like?

Sue Robbins 

It’s difficult to separate the professional from the personal, to be honest, because they interact so strongly, don’t they, in terms of scholarship. But it was really rewarding to have that time to, as I think I said early on, just focus on one thing, which isn’t something we normally have space to do. So I was able to complete something that I’d been working on for several years already, and I think without the leave it would have taken me several more, to be honest. So it was rewarding, and it was really interesting. And also having access to Open Press and to the Open Press team was a joy, because they share a lot of my, I’m going to say passion, it’s an awful word isn’t it, but they share a lot of my passion for open practices generally, and so it was really nice to spend time talking through what we were doing with people who had a similar outlook.

Wendy Garnham 

Did you find it difficult managing your time?  

Sue Robbins 

No. Well, I say no in the sense that I tended to overwork during that period, because I knew it was a finite amount of time and I knew I wanted to get the project done, so I did go at it. And so managing it in the sense of, you know, don’t wear yourself out, was perhaps the challenge.

Wendy Garnham

How did you find that, René? 

René Moolenaar 

I agree with Sue that the two interact very much, the personal and the professional, if you like. But from a time perspective, given that I was at UQ for eleven weeks, and that early on they had already said, ‘René, you’re going to present in the penultimate week’, I had a real deadline by which I needed to have completed all the additional research and reflection, evaluated the impact on my framework, and then of course prepared to present. So there was not really an option to do it a week later or something; this had to be done. It’s also important to say that I’m part time here at the University, so I’m still in industry.

And given that I went from industry into academia fairly late compared to younger colleagues, my experience of other universities was limited. So to spend eleven weeks properly embedded in another university is just very interesting: you learn how they work, the challenges they have compared to the challenges that we have, the successes, and so on. Of course you’re also extending your network of contacts, which is amazing. But I think almost above all, being awarded scholarship leave is special, because applying for it is not easy; there are limits, of course, on the number of people in a department who can go on scholarship leave.

And it gave me a feeling of reward: I felt appreciated by being awarded the scholarship leave. Because at the end of the day, you know, we’re able to focus on a particular scholarship topic with continuing pay, and that’s clearly very nice. So yeah, it was kind of amazing; it lifted my spirits, and I came back almost rejuvenated, with renewed energy to take on the role that we have, which is a challenging one: combining an often significant teaching workload with a scholarship workload.

Often the scholarship piece gets compromised because of the teaching element that we need to do, and to be given time to focus on the scholarship element is, I think, fantastic. I think it’s needed; it’s absolutely needed and fundamental to our own development and the development of our colleagues, because we’re clearly spreading the word, if you like, about what we are doing. But this idea that there’s an element of reward here is also very nice.

Wendy Garnham 

I guess that sort of links into our next question, which is about advice that you would give to anybody contemplating taking a period of scholarship leave.  

Sue Robbins 

I think at any given moment most of us could come up with a range of scholarship projects that we’re either already tackling, or are really interested in, or would like to pursue. And given that, as René said, scholarship leave is a gift, and one that is precious and might not come round very often, it’s worth thinking about whether there is something you really need that time to complete, or would really like to complete or carry out in that time, that you might not be able to do without it. Because we do have, and it isn’t enough, but we do have that twenty per cent built into our contracts where we can keep chugging along with lots of stuff. So is there something that you really want and really need that time for, and can you really do it in that amount of time? How much can you achieve in six months is a big question to ask yourself. But do it, because undertaking and sharing scholarship is really important work.

René Moolenaar 

Definitely adding to what Sue was saying, for me the piece of advice would be to prepare for it. It took me something like a year from thinking ‘I’m going to apply’ to actually going on scholarship leave. Perhaps in my case it was slightly different because I wanted to go to another university; I was absolutely set on that, and that’s not easy. I thought it would be easy: I’d just talk to some colleagues from around the world and I was sure they could find me a desk somewhere. Absolutely not, even through colleagues who have very good relationships with other universities. I think it would have been different if I were a professor with a long list of publications under my belt and a name in a certain field, but that was not me. So to find an open door somewhere was hard; it took me quite a few months. So yeah, definitely prepare, prepare, prepare. And you may not even get it with your first application, because only a limited number of colleagues can go on scholarship leave in a particular term or year, so you may need to apply twice to get it.

Wendy Garnham 

As we all know, the importance of feed-forward is forever at the front of our minds as good scholarship practitioners. So I guess one thing it would be good to leave our listeners with is a suggestion for further reading or a resource that might be of use. If you could name one resource, article or piece of further study, what would you suggest?

Sue Robbins 

So in terms of education for sustainable development, or sustainability literacy if you like, Christiane Lütge has written a lot in this area in relation to language teaching, and she’s really worth following up and having a look at. But I can’t not say: please do have a look at my book, Develop Your English with the United Nations Sustainable Development Goals. You might want to share it with your international students. You might want to see how the goals can be integrated into the learning process, not necessarily just in language but in any discipline. You might be interested to see how H5P works in practice, or you might just be interested in having a look at our own Open Press site, which includes lots of other books, including some edited by Wendy. So I’ll just leave it there.

René Moolenaar  

I think on the topic of scholarship leave, or study leave, or sabbatical, there are a number of journal articles published. I even found one that went into the history of it, going back to explain the name ‘sabbatical’ and how Harvard, in I think 1880 or thereabouts, started this: after six years of work you could then have one year of sabbatical. Anyway, there are a number of articles on the topic. I also found two books on the topic; I haven’t read either of them, but one struck me with an interesting title: The Academic Sabbatical: A Voyage of Discovery.

It was published in 2022, and I thought: it is very much a voyage of discovery. It gives you an opportunity to go on a journey and to discover, whether that’s continuing with scholarship, discovering another university, or discovering interests that you may want to develop going forward in the field of scholarship. So I think that’s an interesting book that I might well buy, or suggest the library acquire; it might be an interesting read.

Wendy Garnham 

All of these, including, I believe, a link to Sue’s book, will be in the episode description for anybody who would like to follow up on them. That brings us to the end of our podcast on scholarship leave. So I would like to thank our guests, René Moolenaar, Senior Lecturer in Strategy. Thank you. And Sue Robbins, Senior Lecturer in English Language. Thank you.

And thank you for listening. This has been the Learning Matters podcast from the University of Sussex, created by Sarah Watson and Wendy Garnham and produced by Simon Overton. For more episodes, as well as articles, blogs, case studies and infographics, please visit Learning Matters.

References

Sue Robbins 

Robbins, S. (2024) Develop Your English: with the United Nations Sustainable Development Goals. Open Press at University of Sussex. Available at: https://openpress.sussex.ac.uk/developyourenglish/   

Lütge, C., Merse, T., and Rauschert, P. (Eds.) (2023) Global Citizenship in Foreign Language Education: Concepts, Practices, Connections. Routledge. Available at: https://www.taylorfrancis.com/books/oa-edit/10.4324/9781003183839/global-citizenship-foreign-language-education-christiane-l%C3%BCtge-thorsten-merse-petra-rauschert  

René Moolenaar 

Gardner, S.K. (2021) ‘Faculty learning and professional growth in the sabbatical leave’, Innovative Higher Education, 47(3), pp. 435–451. doi:10.1007/s10755-021-09584-4.   

Macfarlane, B. (2022) ‘The academic sabbatical as a symbol of change in higher education: From rest and recuperation to hyper-performativity’, Journal of Higher Education Policy and Management, 45(3), pp. 335–348. doi:10.1080/1360080x.2022.2140888.

Sibbald, T. and Handford, V. (2022) The academic sabbatical: A voyage of discovery. Ottawa, Ontario: University of Ottawa Press.  

Zahorski, K.J. (1994) The sabbatical mentor: A practical guide to successful sabbaticals. Bolton, MA: Anker Pub. Co. 


Using video feedback to engage students with marking criteria

Clare Harris, Senior Teaching Fellow in Creativity and Design, and Alexandre Rodrigues, Lecturer in Product Design, explain how they implemented the use of screen recordings to enhance their feedback.

Clare has a background in design and extensive experience working in the creative industries. Her teaching primarily focuses on creative thinking, processes, and practices. Since 2016, she has been a part of the Product Design Team at the University of Sussex, where she teaches Drawing for Design, Experience Prototyping, Interaction Methods, and Toy and Game Design. Clare is also the module convenor for the Final Year Projects. Additionally, she has taught Design at the University of Southampton (Winchester School of Art), Brighton University, and the Open University. Clare lives in Hastings with her partner, her cat, Pywacket, and her dog, Moss. Her hobbies include pottery and border Morris dancing, and she is proud to be a member of the Hastings Punk Choir.

Alexandre is a Lecturer in Product Design in the School of Engineering and Informatics, Department of Engineering and Design. He received his PhD from Nottingham Trent University in 2019. He is a Sussex Education Award winner. His research thesis in sustainable production and consumption contributes to understanding how the social facet of socio-technical transitions can help replace the car culture status quo and provide opportunities for nudge policy action using Cultural Theory and the Theory of Interpersonal Behaviour. His current educational interests are in using Virtual and Augmented Reality as tools for teaching and learning in Product Design. Alexandre is a SCITECH C-REC committee member.

What we did

We started to use video feedback for Final Year Project supervision with our Product Design students. When marking, we create a screen recording as we go through the student’s Canvas submission, giving comments verbally and directing students to different areas of their work on the screen. Within Canvas you can click on the attached marking rubric, which appears next to the assignment. You can then use this to structure your feedback, taking it section by section as you mark, so the students essentially see what you see when you’re marking.

We really talk them through the process as we mark, in terms of what they did well and where they didn’t do quite so well. As you are screen recording you can also go back through the module’s Canvas site to remind them of, for example, one of the Padlet exercises they uploaded in week four. It just means that the feedback you’re giving becomes a lot clearer a lot quicker.

When compared to written feedback, video feedback is more nuanced to the individual submission and that student’s needs. It also allows for a greater degree of personalisation.

Why we did it

Initially, about three years ago, we shared a final-year student who was neurodiverse, and we had to be really explicit and clear about what we expected of them, while obviously keeping a friendly tone. So we decided to experiment with video feedback for that student and realised the benefits quite quickly.

It felt like you could say an awful lot more and were able to say it in a very nice, encouraging way. We could actually pinpoint bits of the submission that needed improvement, conveying a lot more information succinctly.

Moreover, if English is not your first language it can sometimes be a struggle to write the right feedback with the correct tone. It can be worrying thinking that your writing might be misinterpreted and perceived as being more negative than was intended. There is great benefit to being able to hear an encouraging or more positive tone.

Challenges

Time is needed to have a look around and experiment with different tools. You want something that is going to be easy to use and to edit if needed. We use ScreenFlow and PowerPoint, but there are so many different tools available. Your School’s Learning Technologist will be able to help.

One key element to have in place is a detailed rubric/marking scheme beforehand. This will allow you to stay focused while you record and helps to maintain consistency across your cohort. Your School’s Academic Developer will be able to support you in creating or updating your marking schemes.

Impact and student feedback

We’ve had some really positive feedback from students. One thing that has happened is that students have responded to our feedback; they’ve actually said thank you for it. Whereas normally your feedback goes out there and that’s kind of it, it’s very much one way. Now there’s a dialogue between us and the students: they’re giving us feedback on feedback!

We’ve also noticed there’s less confusion around why students got a particular grade, or what they haven’t quite got right. As we are presenting their feedback alongside the criteria, students seem to have a better understanding of it. Now students are reading and engaging with the marking criteria ahead of their assessments.

Future plans

We intend to continue using video feedback across the different modules that we teach and to extend this to additional modules and assessment modes.

Top tips

  • Make sure that you have a rubric/marking scheme in place because that’s the thing that’s going to keep everything consistent, focused and fair.
  • Personalise your feedback: address the student by name and reference specific aspects of their work to show that you’ve engaged with their submission.
  • Use whatever software works best for you.
  • Keep it manageable, around 5-7 minutes, to maintain the student’s attention.
  • Speak calmly and keep it positive, the tone is really important.


Sussex Education Festival 2024 Programme

Please join us for the second Sussex Education Festival, an event for anyone involved in delivering education at Sussex. The event will be held over two days: you can attend the Festival in person (9.30am-4pm on 10 July) in the Woodland Rooms at the Student Centre and/or online (10am-3pm on 11 July). Please see the programme for further information.

The Festival will consist of a number of different session types, including panel discussions and interactive workshops, focused on themes such as alternative assessments, student engagement and wellbeing, generative AI and environmental sustainability. We look forward to seeing you there to celebrate all the amazing work that goes into teaching, learning and assessment here at Sussex! Please sign up via our registration form.


Guessing and Gender Bias in Multiple-Choice Quizzes with Negative Marking

Dr Matteo Madotto, Lecturer in Economics, University of Sussex Business School.[1]

Introduction

When designing multiple-choice quizzes (MCQs), an important decision to make is whether or not to apply negative marking to incorrect answers. The main rationale for penalizing wrong answers is to discourage “guessing”, i.e. situations where students are very uncertain about the correct alternative and decide to answer more or less at random in the hope of getting it right by chance. Indeed, without negative marking rational students would have an incentive to attempt all questions, even those where they have absolutely no clue about the correct answer, since they would always get a positive score in expectation (e.g. Budescu and Bar-Hillel, 1993; Prieto and Delgado, 1999; Bereby-Meyer et al., 2002; Betts et al., 2009; Lesage et al., 2013; Akyol et al., 2016). On the other hand, one of the main concerns with negative marking is that it might end up being discriminatory against female students. Indeed, evidence suggests that females tend to be more risk averse than males, which, for an equivalent level of knowledge, may lead them to answer fewer questions and be unfairly disadvantaged when negative marking is applied (e.g. Burton, 2005; Espinosa and Gardeazabal, 2010; Lesage et al., 2013; Akyol et al., 2016).

In this short article, I present the results of five MCQs with negative marking taken by 900 undergraduate students at the University of Sussex Business School between 2021 and 2023, and analyze how these tests performed along the two main dimensions highlighted above, i.e. guessing and gender bias.

All quizzes contained 20 open-book questions, each of which had 4 possible alternatives with only one correct answer. The order of both questions and answers was randomized to reduce collusion among students. Each correct answer was worth 5 marks, each unanswered question was worth 0 marks, while each incorrect answer was worth -2 marks. The overall score was computed as the sum of the marks, with a minimum floor of 0. Students were made aware beforehand of this marking scheme. They, however, were given no strategic advice on when they should attempt a question, so as not to bias their choices in either direction. The negative marking of -2 ensured that a student with absolutely no clue about the correct answer to a question, i.e. a student who assigned an equal probability to each of the alternatives, would get an expected mark of approximately 0 (specifically -0.25) by answering the question, as is typically considered appropriate in the case of MCQs with negative marking (e.g. Budescu and Bar-Hillel, 1993; Prieto and Delgado, 1999; Bereby-Meyer et al., 2002; Lesage et al., 2013).
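As a quick check of that figure: with four alternatives, a blind guess picks the correct answer with probability 1/4 (earning 5 marks) and an incorrect one with probability 3/4 (losing 2 marks), giving

$$\mathbb{E}[\text{mark}] = \tfrac{1}{4}(5) + \tfrac{3}{4}(-2) = 1.25 - 1.50 = -0.25.$$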

Guessing and gender bias

To determine whether or not random guessing remains an issue even when negative marking is in place, we look at the average ratio between the percentages of students who selected the most popular and the second most popular incorrect alternative per question, and that between the most and the least popular incorrect alternative per question. If students assigned an equal probability to all alternatives and answered completely at random, then both these ratios would be approximately equal to 1. As can be seen in Table 1, however, this does not seem to be the case for either males or females, regardless of the level of difficulty of the test.[2] On the contrary, in most questions there are both a popular incorrect alternative, which appears plausible to a relatively large number of students, and a very unpopular one, which is chosen by few of them. Specifically, from Table 1 we see that in four of the five tests the most popular incorrect alternative per question is chosen by a percentage of students which on average is about 4 to 10 times larger than that of the second most popular one, and 6 to 16 times larger than that of the least popular incorrect alternative.[3] Only in one quiz are the two ratios substantially lower (see more on this below). Of course, here it is not possible to determine how much of this is due to the presence of the negative marking itself; however, it appears that one of the main apprehensions surrounding MCQs, i.e. guessing by students, is rather limited when such a marking scheme is implemented. Those students who decide to answer and choose the wrong alternative seem to do so out of incorrect knowledge rather than no knowledge at all.
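To make the metric concrete, here is a minimal sketch (ours, not the study’s actual code) of how the per-question ratios summarised in Table 1 can be computed from raw response counts; the question data below are invented for illustration.

```python
# Minimal sketch (not the study's code): per-question ratios between the most
# popular, second most popular and least popular incorrect alternatives.

def incorrect_answer_ratios(counts, correct):
    """counts: {alternative: number of students choosing it}; correct: key of the right answer."""
    wrong = sorted((n for alt, n in counts.items() if alt != correct), reverse=True)
    # Questions where an incorrect alternative was chosen by no student would be
    # excluded, as in note [3], to avoid a zero denominator.
    return wrong[0] / wrong[1], wrong[0] / wrong[-1]

# Invented question: B is correct; A is a plausible distractor, D an implausible one.
ratios = incorrect_answer_ratios({"A": 40, "B": 55, "C": 8, "D": 5}, correct="B")
print(ratios)  # (5.0, 8.0): the top wrong answer draws 5x the second and 8x the least popular.
```

Under pure random guessing both ratios would hover near 1; values well above 1, as in Table 1, suggest wrong answers reflect systematically mistaken beliefs rather than coin-flipping.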

| Test | Males | Females | Average score | Score SD | Most/2nd popular incorrect, M | Most/2nd popular incorrect, F | Most/least popular incorrect, M | Most/least popular incorrect, F |
|------|-------|---------|---------------|----------|-------------------------------|-------------------------------|---------------------------------|---------------------------------|
| 1 | 189 | 93 | 63 | 22.5 | 6.3 | 7.3 | 16.4 | 14.5 |
| 2 | 179 | 77 | 39 | 23.8 | 4.4 | 4.4 | 7.5 | 10.2 |
| 3 | 75 | 27 | 56 | 18.2 | 10.4 | 4.8 | 10.4 | 8.6 |
| 4 | 98 | 35 | 53 | 23.2 | 5.5 | 3.7 | 8.2 | 5.7 |
| 5 | 93 | 34 | 56 | 21.3 | 2.4 | 1.9 | 6.0 | 3.2 |

Table 1. “Most/2nd popular incorrect” is the average ratio between the percentages of students selecting the most popular and the second most popular incorrect answer per question; “Most/least popular incorrect” is the corresponding ratio for the most and least popular incorrect answers.

Turning to the second main question of the article, we analyze whether MCQs with negative marking are discriminatory against females. Data on the gender of individual students were not available; we therefore used students’ names as a proxy for their gender. Summary statistics and two-tailed t-tests for total scores are shown in Table 2, while those for the number of unanswered questions are in Table 3. In four of the five quizzes, both the scores and the unanswered questions of females were not significantly different from those of males at any conventional significance level. In one quiz, however, females performed worse than males at a 1% significance level and left a larger number of questions unanswered at a 10% significance level.

| Test | Male average score | Female average score | Male score SD | Female score SD | t | p-value |
|------|--------------------|----------------------|---------------|-----------------|-------|-------|
| 1 | 62.7 | 62.7 | 21.0 | 25.5 | 0.004 | 0.997 |
| 2 | 38.4 | 41.0 | 24.4 | 21.8 | -0.825 | 0.411 |
| 3 | 54.9 | 58.5 | 17.2 | 21.2 | -0.782 | 0.439 |
| 4 | 53.6 | 52.7 | 22.2 | 26.4 | 0.180 | 0.858 |
| 5 | 59.5 | 47.5 | 20.0 | 22.5 | 2.734 | 0.009 |

Table 2
| Test | Male average non-responses | Female average non-responses | Male SD of non-responses | Female SD of non-responses | t | p-value |
|------|----------------------------|------------------------------|--------------------------|----------------------------|-------|-------|
| 1 | 0.8 | 1.1 | 1.6 | 2.3 | -1.023 | 0.308 |
| 2 | 1.9 | 2.2 | 2.9 | 2.3 | -0.702 | 0.483 |
| 3 | 0.4 | 0.6 | 1.0 | 1.1 | -0.854 | 0.398 |
| 4 | 1.1 | 1.5 | 1.7 | 1.8 | -1.058 | 0.295 |
| 5 | 1.5 | 3.0 | 2.8 | 4.2 | -1.895 | 0.065 |

Table 3
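For readers who want to run the same comparison on their own quiz data, the following is a minimal sketch of a two-tailed independent-samples t-test of the kind reported in Tables 2 and 3, using SciPy. The score arrays are placeholders, and the choice of Welch’s test (unequal variances) is our assumption, since the article does not state which variant was used.

```python
# Sketch (assumed, not the author's code): two-tailed t-test comparing
# male and female scores, as in Table 2; the data below are placeholders.
from scipy import stats

male_scores = [62, 71, 55, 80, 45, 68]
female_scores = [58, 69, 60, 74, 49, 66]

# ttest_ind is two-tailed by default; equal_var=False selects Welch's test,
# which does not assume the two groups have equal variances.
t_stat, p_value = stats.ttest_ind(male_scores, female_scores, equal_var=False)
print(f"t = {t_stat:.3f}, p-value = {p_value:.3f}")
```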

A possible trade-off

As Tables 2 and 3 show, one of the most common concerns about MCQs with negative marking, i.e. that they may discriminate against female students, does not appear substantiated in most of our cases. However, comparing the results in these two tables with those in Table 1, we see that the only quiz in which females performed significantly worse than males and left a larger number of questions unanswered (namely test 5) is exactly the one in which students seemed most uncertain about the correct answers, as measured by the relatively low values of the two ratios in Table 1. It may therefore be the case that gender bias occurs precisely in those situations where random guessing is more likely and hence negative marking would be more useful. This may be because differences in students’ risk attitudes start playing a role exactly when students are sufficiently uncertain about the correct answer, i.e. when they assign similar probabilities to all alternatives.

To avoid this trade-off, it may be sensible to design questions such that at least one of the alternatives appears highly unlikely to those students who possess a minimum level of knowledge, allowing them to assign higher probabilities to the remaining options. In this way, negative marking would discourage random guessing by students with a very low level of knowledge, without excessively reducing the more knowledgeable students’ incentives to answer, regardless of their risk attitude.

References

Akyol, S. P., Key, J. and Krishna, K. (2016) “Hit or miss? Test taking behavior in multiple choice exams” NBER Working Paper 22401.

Bereby-Meyer, Y., Meyer, J. and Flascher, O. M. (2002) “Prospect theory analysis of guessing in multiple choice tests” Journal of Behavioral Decision Making, 15(4), 313-327.

Betts, L. R., Elder, T. J., Hartley, J. and Trueman, M. (2009) “Does correction for guessing reduce students’ performance on multiple-choice examinations? Yes? No? Sometimes?” Assessment & Evaluation in Higher Education, 34(1), 1-15.

Budescu, D. and Bar-Hillel, M. (1993) “To guess or not to guess: a decision-theoretic view of formula scoring” Journal of Educational Measurement, 30(4), 277-291.

Burton, R. F. (2005) “Multiple-choice and true/false tests: myths and misapprehensions” Assessment & Evaluation in Higher Education, 30(1), 65-72.

Espinosa, M. P. and Gardeazabal, J. (2010) “Optimal correction for guessing in multiple-choice tests” Journal of Mathematical Psychology, 54(5), 415-425.

Lesage, E., Valcke, M. and Sabbe, E. (2013), “Scoring methods for multiple choice assessment in higher education – Is it still a matter of number right scoring or negative marking?” Studies in Educational Evaluation, 39(3), 188-193.

Prieto, G. and Delgado, A. R. (1999) “The effect of instructions on multiple-choice test scores” European Journal of Psychological Assessment, 15(2), 143-150.


[1] I would like to thank Ana Carolina Tereza Ramos de Oliveira dos Santos for her excellent work as research assistant.

[2] It is often hard to properly calibrate the level of difficulty of an MCQ, especially when it is administered for the first time, and indeed one of the tests turned out to be very difficult for students. Of course, similar issues can occur with or without negative marking. The presence of the latter, however, tends to amplify the impact of miscalibration to a certain extent.

[3] The average ratios in Table 1 can actually be thought of as lower bounds, since they are computed excluding those questions for which the denominator of the ratio would have involved an answer not chosen by any student.


Measuring educational gain through Assurance of Learning (AoL)

Farai is Associate Dean (Education & Students) in the University of Sussex Business School and Professor in the Department of Economics. She has several years’ higher education teaching experience in statistics, development economics and other applied economics topics. She has also worked for several international development agencies in the past.

There is increased emphasis in the UK higher education sector on measuring the impact of university education on students’ acquisition of the knowledge, skills, and other competencies outlined in degree programmes. In 2023 the Office for Students asked education providers to present what ‘educational gains’ they intend their students to achieve, what support they offer students to achieve them, and what evidence they have that students are succeeding in achieving these. While the Teaching Excellence Framework (TEF) focuses on measures of continuation, completion and progression, educational gain also encompasses areas such as knowledge, skills, personal development and work readiness. However, the definition of educational gain is quite open-ended and leaves room for providers to conceptualize and articulate their interpretation of it in practice.

Accreditation is an important process for Business Schools globally. As part of AACSB (Association to Advance Collegiate Schools of Business) accreditation, Business Schools must demonstrate they have a systematic process of Assurance of Learning (AoL). AoL is about demonstrating, through assessment processes, that students achieve the learning expectations of the programmes in which they participate. It involves the use of robust, systematic and sustainable assessment processes designed to improve student learning. It is about process improvement, and it can also be a key driver of curriculum change.

Curriculum alignment

One of the early steps in the Assurance of Learning (AoL) process is curriculum alignment, where school learning/competency goals and course[1] learning objectives/outcomes are mapped onto the curriculum. The focus here is on the common learning experience of students enrolled on the course. Curriculum alignment is important because the mission of the school (that is, what the school does) must align with the education the school offers. Hence, it is crucial to ensure that the school’s competency goals (e.g., sustainability, responsible leadership, collaboration) are reflected in the curriculum in a manner that allows students to develop that skill, knowledge, or attitude.

Step 1: Conceptual

An important initial step of the AoL process is articulating the overall competency goals of the school. For example, the University of Sussex Business School (USBS) has five competency goals which stem from its mission statement, namely:

  1. Demonstrate appropriate discipline-specific knowledge using relevant methods and technologies
  2. Work effectively in a team
  3. Be responsible students and citizens
  4. Communicate effectively with different audiences
  5. Demonstrate the ability to work independently and apply critical thinking skills to develop innovative solutions

We expect the education we offer our students to enable them to acquire the above competencies by the time they graduate.

Step 2: Operational statements

The USBS offers many courses (i.e., degree programmes), and each course has its own learning outcomes/objectives. As part of AoL, course learning objectives are mapped to the school competencies outlined in step 1, making the curriculum alignment explicit, that is, the relationship between the Business School’s overall goals and objectives and what is offered to students at the course level. A course typically has additional bespoke learning objectives that do not necessarily map one-to-one to school competencies; these serve to differentiate courses from one another.
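To make the mapping concrete, here is a small hypothetical sketch of a curriculum-alignment map expressed as data; the outcome texts are invented, and real alignment exercises typically live in curriculum-mapping spreadsheets or software rather than code.

```python
# Hypothetical curriculum alignment map: each course learning outcome is
# mapped to the USBS competency goals (numbered 1-5 in step 1) it supports.
# Outcome texts are invented; a bespoke outcome may map to no school goal.
alignment = {
    "Apply discipline-specific methods to real-world data": [1, 5],
    "Deliver a collaborative consultancy project": [2, 4],
    "Evaluate the ethical impact of business decisions": [3],
    "Critique sector-specific regulation": [],  # bespoke, differentiates the course
}

for outcome, goals in alignment.items():
    print(f"{outcome} -> school goals {goals or '(course-specific only)'}")
```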

Step 3: 1st Measurement (opening the loop)

The next step of AoL is measuring whether we have delivered on our course learning objectives, that is, have our students acquired the course/degree level competencies we promised to deliver? Thus, an important aspect of AoL is to have a benchmark of what constitutes a course cohort having satisfactorily met the learning outcomes. For example, 80% of students achieving a pass mark on a course learning outcome at first attempt can be considered satisfactory performance. ‘Exceeding’ and ‘meeting’ the learning outcome are also differentiated. The former refers to a distinction mark while the latter refers to any pass mark below distinction. The benchmark for satisfactory performance does not have to be generic across courses. What matters is the rationale behind the benchmark.
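As a toy illustration of the benchmark logic just described (our sketch, not USBS tooling), the check for one learning outcome might look like this; the pass and distinction thresholds of 40 and 70 are assumptions and will vary by level and institution.

```python
# Toy sketch of the step 3 benchmark check: did at least 80% of the cohort
# pass the assessment mapped to a learning outcome at first attempt, and how
# does the cohort split between 'exceeding' (distinction) and 'meeting' (any
# other pass mark)? Thresholds are assumed, not taken from USBS policy.
PASS, DISTINCTION, BENCHMARK = 40, 70, 0.80

def outcome_performance(marks):
    n = len(marks)
    return {
        "met_benchmark": sum(m >= PASS for m in marks) / n >= BENCHMARK,
        "exceeding": sum(m >= DISTINCTION for m in marks) / n,
        "meeting": sum(PASS <= m < DISTINCTION for m in marks) / n,
    }

cohort_marks = [72, 65, 38, 55, 81, 47, 59, 44, 90, 35]  # placeholder marks
print(outcome_performance(cohort_marks))
# 8/10 passed -> met_benchmark True; 0.3 exceeding, 0.5 meeting.
```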

Direct measures

Assessments are a key indicator of the extent to which students have learned what we taught them. To determine course cohort performance on a course learning outcome, selected core module assessments are mapped to each course learning objective. In some cases, there may be capstone assessments offered at the course level which align with specific course learning outcomes. In addition, either formative or summative assessments, or a combination of the two, can be used to measure learning outcome performance. Where there is no suitable core module, optional modules that are representative of the course cohort can be used instead.

In practice a whole assessment component can be a suitable direct measure. For example, an essay assessment can be a suitable measure for the learning outcome “Demonstrate an advanced understanding of management information systems using a range of concepts, theories and technologies”. However, in other cases, only certain aspects of the essay may be relevant to ascribe to a particular learning outcome. In this case, the marker must distinguish within the marking what parts of the assessment relate to the course learning outcome. For example, a dissertation may be a suitable assessment to measure the following learning outcome: “Understand the role of ethics in the evolution of innovation, change and contemporary issues in management”. However, it may not be appropriate to ascribe the whole dissertation mark to this learning outcome. Rather, only a component of it (e.g., 10 marks out of 100) may be related to that learning outcome. This implies the marking rubric must capture that component. Moreover, a student failing the overall dissertation may not have failed the course learning outcome, as they may have achieved a pass mark for the component associated with the course learning outcome, and vice versa. It is important to understand that the performance here is in relation to the course learning outcome, which may be attached to a whole module, an assessment component, or an assessment sub-component. Also, a big part of using assessments is aggregating the data at course level for assessments shared across different courses. This is to ensure cohort performance is captured separately for each course.

Indirect measures

The extent to which course learning outcomes have been met can also be measured using qualitative metrics (e.g., employer surveys, NSS scores, module evaluation questionnaires, alumni surveys, advisory board focus group discussions, etc.), referred to as indirect measures. Indirect measures can be a useful complement to direct measures and if used appropriately can provide valuable explanations to the findings from the quantitative results. Caution would need to be taken to ensure any sampling procedure for collecting qualitative data yields a representative sample.

This round of data collection is referred to as “opening the loop” in AoL language.

Step 4: Using the data to improve student learning

Where the target has not been met, it can be investigated why this is the case and what the right ‘intervention’ to achieve improvement is. There are two types of improvement: (i) process improvement (e.g., improving how to teach/assess, when to teach/assess; ‘systems’, etc.), and (ii) curriculum improvement (i.e., improving the syllabus, content, skills, knowledge, and competencies taught).

AoL is about being improvement-oriented rather than compliance-oriented. It is not a data collection project but a data usage project: how can the data we have collected be used to help improve students’ learning? How can the data be acted on?

If students have not developed the competencies faculty thought they would have from the curriculum taught, faculty can reflect and develop learning experiences that can be used to improve student performance on the learning goals. It may take some trial and error, but over time students can improve their skills in specific competencies because of thoughtful, data-driven curriculum development and management. The objective of AoL is to assure student learning.

Step 5: 2nd Measurement: closing the loop

This step involves a second data collection exercise akin to that described in step 3, e.g., one year later. From this second round of data collection, by looking at the outcomes it can be determined whether the interventions in step 4 achieved the desired effect. That way the loop will have been closed by having two data points (measurement before and after intervention). “Closing the loop” does not imply improvement in performance has been achieved. It is simply about having two data points to compare.

In the case where no intervention was required when the loop was opened, we can assess whether this is still the case at the point of closing the loop. This process repeats over time in that closing the loop is akin to opening the loop for the next period (see illustration below).

Assurance of Learning (AoL) process

Continuous improvement process

Using the approach discussed above, we can continuously monitor our courses, the extent to which course learning outcomes are being met, and how effective interventions are. We can also learn what we are doing ‘well’ and find out how and why this is. The data can also help us identify whether there is a need to review learning outcomes to make them more ‘challenging’. For example, if we are continuously exceeding targets on the same learning outcome, we may need to adjust the benchmark. Or we may decide to revise the learning outcome to offer a new competency to our students, given we now have consistent satisfactory performance on the existing competency.

The AoL process is a continuous improvement process. The gap between opening a loop and closing it can be anything up to six years. In the USBS we have adopted a one-year gap for now, until we gain enough traction to widen it. Eventually this becomes a ‘self-driving’ process, enabling us to manage our curriculum effectively. The objective is to have a culture built around what our students are learning, how we improve that learning, and how we work together to make that happen.

Effective AoL should lead to an improvement in student learning and raise the quality of graduates. AoL can also go a long way in addressing the intensifying pressure to develop data-driven responses to public demands for justification of investment in higher education.

Lastly, it is important to note that most educators already undertake the AoL exercise as part of their responsibilities in teaching and assessing students, making improvements based on past performance, and reflecting on current practice to inform future teaching and assessment. The AoL framework enables these processes to be captured in a more systematic, robust and sustainable manner. It also provides a holistic view at the course level and facilitates continuous curriculum and process management improvement.


[1] In this article, course refers to a degree programme.


A student led session on reviving curiosity and student engagement

Ismah Irsalina Binti Irwandy (left) and Liv Camacho Wejbrandt (right)

First year student reps, Ismah Irsalina Binti Irwandy and Liv Camacho Wejbrandt, co-developed and delivered a workshop for lecturers at the Engineering and Informatics Teaching and Learning Away Day. Here they explain the part they played in developing the session, insights from their survey of students and staff on engaging teaching, and what they learned from delivering the session.  

Ismah and Liv are happy to share the resources used in the session and to advise staff and students from other schools on developing their own activities. 

What we did

In November 2023 we responded to a call from our school’s Director for Teaching and Learning (DTL), Dr Luis Ponce Cuspinera, asking for student representatives to develop and deliver a session on reviving curiosity and student engagement for the Engineering and Informatics School teaching and learning away day in January 2024.

How we did it

Luis started by sharing the aims of the 50-minute session, which were to help lecturers understand the kinds of approaches to teaching and learning Engineering and Informatics students found most engaging and to encourage them to think about how they might better encourage their students’ curiosity and provide even more engaging teaching sessions. Ismah, who was the first to sign up, primarily worked with Luis on developing the questions and activities for the session. Liv joined a little later and led more on developing the presentation and delivery of the session. Planning meetings with Luis ran for between 15 and 30 minutes each week, over around five weeks.

The first step was to develop a survey for Engineering and Informatics students to find out the kinds of teaching they find most engaging. Ismah brainstormed a long list of questions and, with Luis’ help, whittled them down to five, which were then put onto Poll Everywhere and sent to all students via the School Office.  (The questions are provided below). 

Our approach to designing the session was to make it interactive and engaging and to demonstrate how we like to be taught! The final session comprised three sections: 

(1) The ice breaker ‘reflective activity’: 

We developed a ‘pass the parcel’ style game, which was designed to get everyone energized and in the mood. Each table was given a bowl of folded paper slips, each printed with a prompt. Some were really simple, like ‘Describe a teacher that inspired you’ or ‘Share an “aha” moment you’ve had while teaching’. Others were a bit more challenging, e.g. ‘How would you re-design your module to make it more engaging?’ or ‘Describe one of your modules as if to a 12-year-old’.

On the day, we played music as the bowl was passed around the table, and whoever was left holding it had a minute to pick out a slip and share their answer. At the end of the section, we asked someone from each table to volunteer to share with the room their response to one of the prompts.

(2) The ‘How well do you know your students’ activity: 

We used Poll Everywhere to ask each of the five student survey questions to the room. After each question we reviewed the lecturer responses then shared the results from the student survey and briefly picked out where there were similarities and differences.  

(3) The ‘Embedding curiosity and engaging students’ activity 

We wanted to ensure lecturers were given a chance to apply insights from the first two activities so we then asked each table (team) to work together to embed curiosity and student engagement into a module.  One volunteer (the leader) from each table was to describe in brief (3 minutes) one of their modules, the teaching methods, type of assessment, and how feedback is provided. The team then had to come up with ideas/suggestions of how the module can be changed in order to make it more engaging and inspire curiosity in relation to: 

  • Teaching delivery (teaching methods) 
  • Assessment types 
  • Providing feedback 

We gave them 10 minutes to discuss then opened up the floor for team leaders to summarise their proposed changes.  

How it went

We got close to 70 responses to the student survey by the time of the away day (and have had more since!). We think it helped that we wrote the email and insisted the poll was at the top of the message (please do this poll – it will take 2 minutes) followed by the explanation.  

On the day the session went well. We played to our strengths (Liv is used to being on stage so took the lead) but it was really good to be doing it together. We were concerned about balancing being fun and respectful while also challenging our lecturers. Happily, the audience were positive, and the active approach to the session made it easier for us overall. However, it also meant we had to deal with unexpected outcomes and be confident in encouraging responses from the tables. Also, while there were a few surprises in the outcomes of the student survey (including for us), it was great to see that there was also a lot of overlap and common ground.

Liv concluded by impressing on lecturers the importance of showing their own love for their subjects and, for both of us, it was a rare opportunity to say something we feel deeply about to lecturers.

After the session we received lots of positive comments and had some great conversations, including with one lecturer who spoke with us for a long time about how to make his lectures more engaging. Also, we got a free lunch! 

Top Tips 

Our tips for other students are: 

  • It is definitely worth doing. Although it was a commitment at a busy time (we were studying for exams while developing the session), we felt the session had an impact and it made us feel like proper student representatives, particularly as, being first years, we hadn’t had many rep meetings by that point. 
  • You don’t have to start from scratch! We’re really happy for others to use and build on our approach and to chat with students and lecturers from other schools (see details of the activities from the session below and how to contact us).  

Comments and feedback 

“I was incredibly impressed by Ismah and Liv’s contribution to the content and delivery of this session and have been busy encouraging Directors for Teaching and Learning from the other Sciences Schools I support to follow suit with their own students. My only regret is that Ismah and Liv’s session didn’t kick off the Teaching and Learning away day, because it was a brilliant example of an engaging and active learning session which brought real energy to the day while providing that all-important student perspective.” (Dr Sam Hemsley, Academic Developer) 

Resources

Pass the parcel questions

Student survey questions

Contact

Please direct all queries to Luis Ponce Cuspinera.


Behold the Seminars: Reflections on Student Feedback

Maria Hadjimarkou is a Lecturer in Biological Psychology at the University of Sussex School of Psychology.  She is a Fellow of the Higher Education Academy and a member of the SEDA Community of Practice for Transitions. She has several years of experience in Higher Education in the UK and abroad. At Sussex, Maria is involved in activities that promote public awareness of the role of sleep in health and wellbeing and encourages her students to get involved in scholarship activities such as co-authoring articles on sleep and wellbeing for young readers.  

It is challenging for students to feel part of a community when they find themselves in a large amphitheatre among hundreds of other students. Convenors of large modules like me acknowledge this as the downside of large cohorts. But here come the seminars! Based on student feedback, seminars can be instrumental in shifting things around. 

A feedback survey was launched during the second and third weeks of the Spring term in a large second-year core module. The module consisted of weekly lectures and bi-weekly seminars that focused on features of the material delivered in the lecture the week before. For these seminars, the students were split into groups of between 20 and 50. The survey included module-specific questions but also tapped into various aspects of the student experience. 

Based on students’ responses it became clear that we should be investing more in our seminars, as they have the potential to be transformative. The overwhelming majority (84%) of the students who took part reported that the seminars were a positive experience, and 74% reported that the seminars offered the best opportunity for them to interact with their peers and to develop a sense of community. As they pointed out, interacting with each other may not come naturally to some, but if the conditions are right, they will discuss ideas and experiences and feel connected. 

In addition to encouraging peer interaction and a sense of community, seminars were identified by students as helpful in understanding the lecture material and gaining a broader understanding of the concepts covered in the lecture. They described the seminars as rewarding and interesting, using adjectives such as ‘great’, ‘fun’, ‘thought-provoking’, ‘inclusive’ and even ‘excellent’. 

Of course, not all seminars are created equal, and this also came through in the survey, as students made references to other seminar experiences that were not useful or fun. So it is up to us, the convenors, to structure seminars in a way that will foster interaction and inclusion. Good seminars have the potential to engage students and enhance their understanding of the lecture material as well as the wider context to which the material relates, such as how it may apply to society or a particular field. Moreover, seminars allow students to express themselves and interact with their peers in a relaxed environment. It has been argued that seminars may help to ‘level the playing field’ in the sense that they help reduce disparities for students who face disadvantages (Betton & Branston, 2022). In addition, attending seminars has been linked to better student performance (Betton & Branston, 2022; Marburger, 2001; Stanca, 2006). 

Based on student feedback from this survey, successful seminars need to have a few key ingredients:

  1. Appropriate readings in terms of both quality and quantity: too much material or readings that are too difficult are likely to demotivate students and result in a negative experience or disengagement with the material altogether.
  2. Appropriate activities which students will find fun and at the same time interesting, as they get to explore material that is relevant to the lecture, and beyond.
  3. Approachable tutors: their approach, energy and demeanour are crucial, and they can greatly influence the climate in the room and the degree to which students feel comfortable participating.
  4. All the seminar components (i.e. structure, activities, etc.) should allow space and time for interaction in a relaxed environment, which is what ultimately makes seminars fundamentally different from lectures.

So, it seems that the humble 50-minute seminar may hold the key to many of the ‘plagues’ we have been facing in Higher Education, especially following the Covid-19 pandemic and the general drop in student engagement. It is worth our time to plan and structure seminars carefully, as they may be the unsung hero of large cohorts such as mine. Moreover, feedback surveys are vital in helping us understand how students perceive our teaching approaches, so that we can adjust and steer our efforts towards more effective learning and a better teaching experience.



Assessment in a world of Generative AI: What might we lose?

Dr Verona Ní Drisceoil, Reader in Legal Education, Sussex Law School

Introduction

For the most part, assessment in higher education is viewed in the negative as opposed to the positive. It is something to be endured, worked through, marked and managed. Assessment causes significant anxiety and stress for students and staff alike. Amid dealing with a cost-of-living crisis and increased mental health challenges, students are under significant pressure to achieve a ‘good degree’ to be able to progress to further study or work. For teachers and professional service staff, the pressure to mark and process hundreds of submissions in short timeframes makes assessment an incredibly challenging period. In addition, most higher education institutions in the United Kingdom score poorly on assessment and feedback in the National Student Survey (NSS), making assessment a major headache for university management teams (see also Harkin et al, 2022). And on top of all of that, we now have generative AI to contend with and the challenges that brings for assessment.

In this short article, I want to offer some reflections, and provocations, on the framing of assessment in higher education. Specifically, in thinking about what might be lost in the new reality of generative AI, I propose that we should think about, and indeed frame, assessment from the perspective of empowerment. In this respect, I advocate for assessment as, and for, empowerment. In an ideal world our assessments, notwithstanding the pressures mentioned above, should empower students, or at least have the potential to empower. They should encourage our students to be authentic, to show agency, to grow in confidence, and to develop a range of transferable skills including, in particular, evaluative judgement (see case study assessment example). In this regard, assessment as empowerment as a conceptual frame builds on “assessment for learning” (Sambell, McDowell & Montgomery, 2013) and “assessment for social justice” (McArthur, 2023). Could assessment designed for empowerment, I ask, make life better for everyone: for teachers, for students, and even our NSS scores?

The article will begin by reflecting on ‘assessment’ and ‘empowerment’ in a world with generative AI before offering some initial thoughts on what assessment as, and for, empowerment might look like. I will conclude with some takeaway questions that might help us to think about assessment more meaningfully as we navigate the impact of generative AI on education and on life more generally. In contrast to the dominant narratives circulating on generative AI (on productivity, on efficiency, on saving time and making money), I question what might be lost in all of this in terms of humanity (see further Chamorro-Premuzic, 2023) and strongly encourage a deeper questioning and critique of generative AI and what it means for future generations. This does not mean I am a Luddite, afraid to engage with AI (yes, I know AI is here to stay!), but rather that I want to maintain a critical standpoint. I want to focus on value – on what matters in life and in education. How is AI changing our lives, values, and fundamental ways of being? Drawing on the work of Bearman et al. (2024:1), I too argue that in an educational context we have a collective responsibility to ensure that humans (our students) do not relinquish their roles as arbiters of quality and truth.

Reflecting on the value of assessment

Without question, the ever-expanding presence of generative AI has challenged us as teachers and educators. It has made us uncomfortable. We no longer, to quote Dave Cormier, have the same power in assessment. This is unsettling. The presence of generative AI forces us as teachers to reflect on our roles, on how we design teaching and how we design assessment. It forces us, if we take the opportunity, to self-assess and ask: what is the value of assessment? (See further McArthur, 2023; Cormier, 2023.) What do we value as teachers, and what skills do we want our students to achieve through assessment? How can we empower our students through assessment? This questioning, I argue, should not start with how to design an assessment that will beat generative AI. For me, that is not a pedagogically sound starting point. Moreover, it is completely pointless, as “the frontier moves so fast” (Dawson, 2024). I would encourage all teachers to stop for a moment and think about what they really want students in their module or discipline to leave university with. This is a great opportunity to reflect on that question and to go back to basics, as it were. Not every module should, I argue, be trying to teach everything to, and with, generative AI in mind. That, for me, is a dangerous path to take. As academics and critical scholars – in universities – let us remain critical. Ask questions about the impact of using these models, about the impact on humanity and the impact on learning through process and doing. Challenge the status quo. Who is driving the AI narrative and why? Why should we care as educators?

Empowerment in Assessment

According to the Oxford English Dictionary, empowerment speaks to agency, autonomy, and confidence. It notes that empowerment is “the fact or action of acquiring more control over one’s life or circumstances.” To empower someone is “to give (a person) the means, ability, or strength to do something”; to enable them. For me, assessment should be about offering a space for growth, to develop skills through process, to feel empowered through doing. This concept of assessment for empowerment builds on the concept of assessment for learning mentioned earlier. For Sambell et al. (2013), assessment for learning is focused on the promotion of effective learning. They note that “what we assess and how we assess indicates what we value in learning – the subject content, skills, qualities and so on” (Sambell et al. 2013:8). Assessment, then, should promote positive and empowering messages about the type of learning we require (Sambell et al. 2013:11). It is focused on process and not just product.

As I write this piece, I am wondering whether one should even mention empowerment in the same sentence as generative AI. Perhaps one should. Some argue that using generative AI is empowering. Many have noted that generative AI helps you get started, builds confidence, and so on. Quite literally, these large language models put words on the page – and very quickly at that. It is hard not to be tempted by these tools. We have also heard that many students (36% in a survey of 1,200 students) use generative AI as a personal tutor (HEPI, 2023). One might argue that there is agency and growth here. Perhaps. But is it on a superficial, artificial level? Is it a matter of degree? Does it matter? I think it does and should.

Yes, generative AI may save a great deal of time on a task: it produces text, it puts words on the page, it speeds up the process. But then where is the learning by doing? Is the purpose of assessment, of higher education, now to support students to use a generative language model with no authenticity and questionable accuracy? Generative AI platforms are not truth machines; they hallucinate. If we adopt a view and approach that positively embraces generative AI (allows use of generative AI to produce an output), we are valuing the product and not the process. Adopting such an approach does not guarantee that students will develop and enhance the higher-order thinking skills we should value so much in higher education. Chamorro-Premuzic (2023:4) reminds us that generative AI could dramatically diminish our intellectual and social curiosity and discourage us from asking questions. We still need to teach our students key processes and knowledge and equip them with the ability and skills to critique, evaluate and judge outputs. As Bearman et al. (2024:1) note, “university graduates should be able to effectively deploy the disciplinary knowledge gained within their degrees to distinguish trustworthy insights from those that are ‘hallucinatory’ and incorrect”.

But generative AI levels the playing field…

I have some sympathy for the argument that generative AI helps “level the playing field” in higher education – specifically, that it helps students whose first language is not English to access and understand teaching materials and assessments. However, I am still unconvinced this is a sufficient rationale for positively embracing generative AI tools in teaching and assessment without deeper critique and questioning of what may be lost pedagogically. If anything, this framing highlights how poor we are at supporting students whose first language is not English in (UK) universities. It could be argued that embracing generative AI to the extent being advocated allows universities to shirk certain responsibilities. Does a positive embracing of generative AI allow us to gloss over areas where we have failed as a sector, e.g. supporting language proficiency in a meaningful way?

Assessment as Empowerment in a world with generative AI – some thoughts

So, what does assessment as empowerment look like in a world with generative AI? Is it even possible? I hope so. The following offers a few thoughts on where we might focus the discussion to achieve assessments that empower our students to continue to be arbiters of truth whilst developing a range of transferable and empowering skills:

  1. Value + programme-level assessment: I suggest we need to start this conversation from the perspective of value – what is the value of assessment? – and then have joint, but critical, conversations at programme level. I am concerned about siloed responses to generative AI without wider department- and school-level conversations. For some departments and schools it might be essential to embrace generative AI tools across teaching and in assessment (e.g. in computer science), but for others (e.g. law, my discipline) it is, I argue, essential that we now have programme-level conversations about the value of assessment and what skills, literacies and practices we want our students to take away from their degree experience. Should our students decide to go into legal practice, what skills would society wish our future lawyers to have? They will still need to be able to write well, formulate an argument, show attention to detail, detect accuracy and truth in text sources, orally convince, persuade, and advocate. If you have not formulated your own argument, it will never be as persuasive as one self-generated. Generative AI, by removing the process of formulating an argument, robs one of the opportunity to truly develop persuasive advocacy skills. I do not believe that wider society would wish legal advice to be generated by AI. And even if it were, they would want someone with the skills and evaluative judgement to know what is correct and not simply a hallucination. There is, to quote Bearman et al. (2024), an urgent need, in this new reality, to develop students’ evaluative judgement. Students need to be able to judge the quality of their own work and that of others. 
  2. Oracy/oral-based assessments/mini-vivas: One type of assessment I would like to see much more engagement with is oral-based assessment. Perhaps now is the time to embrace that. Voice work and the ability to present confidently are important skills, and ones I think we should place a much higher value on. We neglect this form of assessment in higher education in favour of written-based assessments. This, I argue, is also problematic in terms of helping all students to build social capital. Within my own law module, my move from a traditional ‘write a case-note law essay’ to an oral case briefing assignment (with a ‘you are a trainee solicitor’ positioning) was based on a desire to ensure all students in my module had the opportunity to develop voice work skills within the module, and not just in extra-curricular activities as can often be the case in law schools. I argue that keeping these activities as ‘extra’ results in only those in more privileged positions being able to take part. Students who work, commute, or care for others are often excluded from these activities. That needs to change. Now might be the chance to do so. For those who argue that this is not possible at scale, I disagree: I have been able to embed the development of voice work skills in a core module of 350+ students. Yes, it is a challenge, but with good module design and a good teaching team it can be achieved. 
  3. In-person open book exams: Whilst I appreciate there are valid arguments about the problematic nature of in-person exams in terms of stress and anxiety, and that they do not reflect real work practices, do in-person open book exams offer a compromise and a way forward? Academic integrity scholars (see Philip Dawson, Cath Ellis) certainly argue that in-person exams should be in the mix of programme-level assessment. I wonder, too, if part of the resistance to going back to in-person assessment is more about cost than an absolute commitment to accessibility and inclusion. Reasonable adjustments can still apply to in-person assessments; they did before the pandemic and can again. 
  4. Use more bespoke rubrics: Another response in the short term might be to add more bespoke marking rubrics for assessments. This might mean removing any weighting for structure, grammar and syntax, and adding a much higher weighting for elements that reward process and engagement with materials, accuracy (on the law, for example) and sources that are not merely artificial. I have used a bespoke weighted rubric in my module for the last two years and it works very well. Linked to the points made above, there is also a section for evaluative judgement within that rubric. As part of the case briefing assessment, students are asked to provide an evaluative judgement post-presentation, as they would to their ‘supervising solicitor’ in a legal practice setting. 

Takeaway questions

  1. What is the value of learning?
  2. What is the value of assessment?
  3. What skills, literacies and practices do you want your students to take away from your module and through your assessment?
  4. How can you design your assessment to empower your students; to help your students to be authentic, to grow and to develop confidence through doing and being?
  5. Think about process not just product.
  6. What role can evaluative judgment play in your teaching and assessment design?

Conclusion

For me, it is essential that universities and educators continue to approach the question and impact of generative AI in education from a critical standpoint. Let us, I advocate, remain critical. Ask questions. Challenge. What might be lost? Who is driving the AI narrative, and why? Why should we care as educators? What are we complicit in? As I have stated above, always consider the structures within which these shifts are happening. The dominant narrative and retort of “AI is here to stay so get over it” is not enough of a reason not to ask what might be lost here. It is appreciated that universities are businesses too, and it might seem neglectful, within a business model, not to approach this debate from the perspective of ‘let us not be left behind’, but we have a collective responsibility to be more critical, to question and to challenge the narrative. Given the rate of change in this area, it is incumbent on us as educators to ask what might be lost in terms of truth, justice, humanity, and real connection. As Esther Perel reminds us, the AI we should be most concerned with is artificial intimacy – the lack of real connection and authenticity in how we show up in the world because of the negative impacts of technology. In a world of hyper-connectivity, we are often not connected at all. I can, as I am sure others can, attest to the negative impact technology has had on my life in terms of being present and meaningfully connected. At a time in higher education when mental health issues are at an all-time high, and confidence, community and belonging are at a low point post-pandemic, is a world with generative AI, I ask, going to have a positive or negative impact on connections, relationships, authenticity, truth, and humanity? We are, as human beings, wired for real connection – and hopefully authenticity as well, with all the flaws and vulnerability that come with that. We are not robots. If we embrace generative AI to the extent that is being encouraged by corporations, influencers and even universities, what might we lose?

Resources

Ajjawi, R., et al. (2023). From authentic assessment to authenticity in assessment: Broadening perspectives. Assessment and Evaluation in Higher Education, 1-12. [Online].

Arnold, L. (n.d.). Retrieved from https://lydia-arnold.com/

BBC News, (2023, 27 May) ChatGPT: US lawyer admits using AI for case research. Retrieved from https://www.bbc.co.uk/news/world-us-canada-65735769#

Bearman, M., Tai, J., Dawson, P., Boud, D., & Ajjawi, R. (2024). Developing evaluative judgement for a time of generative artificial intelligence. Assessment & Evaluation in Higher Education, 1–13. Retrieved from https://doi.org/10.1080/02602938.2024.2335321

Chamorro-Premuzic, T., (2023) I, Human: AI, Automation, and the Quest to Reclaim What Makes us Unique. Harvard Business Review Press.

Compton, M. (2024, 16 April) Nuancing the discussions around GenAI in HE. Retrieved from https://mcompton.uk/ 

Cormier, D. (2023) ‘10 Minute Chat on Generative AI’. Retrieved from https://vimeo.com/866563584 [See series with Tim Fawns: https://www.monash.edu/learning-teaching/TeachHQ/Teaching-practices/artificial-intelligence/10-minute-chats-on-generative-ai]

Dawson, P. (n.d.). How to Stop Cheating. Retrieved from https://youtu.be/LNcuAmDP2cQ

Dawson, P. (2021) Defending Assessment Security in a Digital World. Routledge.

Freeman, J. (2024, February 1). Provide or punish? Students’ views on generative AI in higher education (HEPI Policy Note 51).

Harkin, et al. (2022). Student experiences of assessment and feedback in the National Student Survey: An analysis of student written responses with pedagogical implications. International Journal of Management and Applied Research, 9(2). Retrieved from https://ijmar.org/v9n2/22-006.pdf

McArthur, J. (2018). Assessment for Social Justice: Perspectives and Practices Within Higher Education. Bloomsbury Publishing Plc.

McArthur, J. (2023). Rethinking authentic assessment: Work, well-being, and society. Higher Education, 85(1), 85.

Sambell, K., McDowell, L., & Montgomery, C. (2013). Assessment for Learning in Higher Education. Routledge.

Tai, J., et al. (2023). Assessment for inclusion: Rethinking contemporary strategies in assessment design. Higher Education Research and Development, 42, 483.


The Evidence-Informed-Teaching Infographics Project: a novel and engaging way to communicate scholarship

Sue Robbins is Senior Lecturer in English Language and Director of Continuing Professional Development in the School of Media, Arts and Humanities.

Evidence-informed Teaching

Whereas research-informed teaching refers to the different ways in which students are exposed to research content and activity during their time at university, evidence-informed teaching refers to the teaching practices that research has shown will have the greatest impact on student learning.

Evidence-based practice is an approach that focuses practitioner attention on the use of empirical evidence in professional decision-making and action. As teaching practitioners, we draw on a range of sources of teaching knowledge, amassed over time. Evidence-informed teaching involves bringing together research from the Scholarship of Teaching and Learning (SoTL) with context and experience to see what works for us and for our learners.

The way evidence works to inform teaching and learning may not always be straightforward, perhaps because learning is the result of such a huge number of interactions, and research findings may sometimes be difficult to implement because it is unclear how to transfer the skills and expertise of teaching. But being familiar with relevant research evidence can help us think about the methodology we select to underpin the design of a module and help students achieve the learning outcomes: the theories of learning we draw on, our choice of materials, and our classroom procedures. There’s useful information for Sussex colleagues on Educational Enhancement’s SoTL webpage.

It is also the case that claims for the efficacy of a particular practice can sometimes be made without a broad enough evidence base, leading to the overgeneralising of concepts and ideas. The ‘flipped learning remains under-theorised’ infographic synthesises the findings from a scoping review which found that, despite the rapid growth in the number of articles about flipped learning, most failed to elaborate theoretical perspectives, making an analysis of its efficacy difficult. 

The Evidence-Informed-Teaching Infographics Project

The Evidence-Informed-Teaching Infographics Project is a collaborative scholarship project designed to create a set of infographics which synthesise SoTL research in an attempt to bridge the research–practitioner divide. Sussex colleagues can be part of this cumulative knowledge-building project, and contribute to the shared knowledge base of evidence that can positively impact student learning, by contributing an infographic summary of a journal article that relates to their own interests or teaching practice.

The project began in the School of Media Arts and Humanities and is now expanding to all Schools. The infographics completed so far have been published on the MAH Scholarship blog, and other publishing opportunities will be available as the project expands, offering you an audience for your scholarship.

Reasons to join in

Evidence-informed faculty can make significant contributions to learning, teaching, assessment, and scholarship in their Schools and institutions. A recent post on the WONKHE blog makes the point that ‘robustly evidence-informed education is fundamental to supporting the development of ethical, sustainable and inclusive pedagogies to support learners.’

The Evidence-Informed-Teaching Infographics Project can play to your interests, wherever they lie. You might take a key journal article to summarise because you’ve noticed something in your own practice or teaching context that you’d like to know a bit more about; or you’ve been doing something for a while and want to see what current research says about it. Or it could be that you begin with the literature and identify something that you’d like to try out in your teaching or use it to adjust something you have been doing to better reflect research findings.

Summarising the research on an aspect of teaching and learning can help you distil your thinking, and sharing the infographic with colleagues to inform their practice is a generous way of passing on knowledge. The activity is manageable in size and can be a good way to find out more about teaching-related research, both through creating your own infographic and by reading those created by colleagues.

Keogh et al. (2024), writing about their own project designed to share health research with the public, comment on the huge potential of infographics to communicate SoTL to various stakeholders as summarised in this infographic:

10 Ways infographics can support scholarship of teaching and learning

1: Visualizing Data
Present data from studies, surveys, or assessments to visually represent trends, patterns, and statistics.

2: Summarizing Research
Display concise and engaging research summaries to highlight key points and takeaways.

3: Explaining Theories
Break down complex pedagogical theories & concepts into visually appealing, understandable elements.

4: Sharing Best Practices
Provide practical tips and best practices based on research findings.

5: Comparing Teaching & Learning
Compare different teaching and learning approaches, outlining the pros and cons of each.

6: Promoting Reflection
Present data on student outcomes or feedback to help instructors assess their teaching and make data-driven improvements.

7: Communicating Professional Development
Provide teachers with concise and memorable takeaways from professional development activities.

8: Disseminating Scholarship
Share on social media platforms and websites to reach broader audiences.

9: Supporting Research Proposals
Use in grant proposals to enhance the readability and visual appeal of the project and expected outcomes.

10: Engaging Students
Integrate into classroom instruction to engage students visually and enhance their understanding of complex topics.

Creating and sharing your infographic

To create the infographics, colleagues in MAH have used the design tool Canva, which offers free access to educators. Canva offers a huge range of editable templates and when you have synthesised the key points of your article and can see how many sections you need, you can pick an appropriate one and copy/paste content into it. It’s worth noting that not all of the Canva templates meet the expected accessibility requirements, but the Educational Enhancement team are happy to advise.

Get in touch

It’s a great project to be involved with. Get in touch with Sue Robbins, Senior Lecturer, Department of Languages, MAH, if you are interested – S.Robbins@sussex.ac.uk – or with Sarah Watson, Academic Developer – Sarah.Watson@sussex.ac.uk. All welcome!

References

Black, K. (2024). Doing academic careers differently. Retrieved from https://wonkhe.com/blogs/doing-academic-careers-differently/

Educational Enhancement, University of Sussex (n.d.). Scholarship of Teaching and Learning. Retrieved from https://staff.sussex.ac.uk/teaching/scholarship-of-teaching

Keogh, B., Nowell, L., Laios, E., McKendrick-Calder, L., Lucas Molitor, W., and Wilbur, K. (2024) ‘Using Infographics to Go Public With SoTL’. Teaching and Learning Inquiry 12 (March). Retrieved from https://journalhosting.ucalgary.ca/index.php/TLI/article/view/78078

Robbins, S. (2023). Flipped Learning Remains Under-theorised. Retrieved from https://blogs.sussex.ac.uk/mah/2023/04/24/flipped-learning-remains-under-theorised/

School of Media, Arts and Humanities, University of Sussex (n.d.) Scholarship in Media Arts & Humanities. Retrieved from https://blogs.sussex.ac.uk/mah


Encouraging attendance and engagement through portfolio assessment

Lynne Murphy is Professor of Linguistics at the University of Sussex. Her research and teaching concern lexicology, lexical semantics, pragmatics, and transatlantic Englishes. Twice a National Endowment for the Humanities Public Scholar, she is currently writing a book about small words. 

A classroom filled with prepared students, interested in the subject and eager to talk with each other about the subject. This is the goal. This is a classroom in which people learn. This is a classroom in which teachers enjoy teaching.

But these days, it feels like the world is conspiring against such classrooms. Standing in the way are the cost-of-living crisis, the mental-health crisis, the perennial academic-timetabling crisis, not to mention the after-effects of pandemic lockdowns. Students, for the most part, want to participate in their learning—but it’s easy for reading, attendance, and course enrichment activities to fall down the list of priorities behind pulling a work shift, saving a bus fare, or staying under a warm duvet.

We tell students that active engagement in their course will be worth their while, but the evidence we give for that claim is fuzzy. The rewards of engagement are not necessarily immediate or immediately perceptible. And it’s easy to understand how the intention to participate falls down. I know that physical exercise will be worth my while, but it’s a slog. It’s also hard to know where to start (weights? cardio? flexibility?). So, it goes lower and lower on my to-do list while I do the easier things first.

My solution for the exercise problem is to make myself accountable to others: to book a spot on a scheduled class or arrange a walk with a friend. The exercise gets crossed off the list. And that’s what I try to do for students: to make the individual rewards of course engagement more concrete, so it goes up the to-do list. Then the whole class benefits from an engaged studentship. Portfolio assessment makes this very doable.

What is a portfolio?

A portfolio is a collection of work (i.e. more than one piece), related to a theme (i.e. the module topic), produced over a period of time (i.e. the semester) (University Modes of Assessment).

The key here is that the contents of a portfolio are not prescribed. The types of work involved can vary across modules. A portfolio for one module might involve learning journals and a podcast. For another it might be multiple drafts of an essay or two. How portfolios are assessed can vary too.

Incorporating engagement into the portfolio

Of course, the main part of a portfolio must be academic work that tests the learning outcomes of the module. Engagement activities should relate to these learning outcomes as well, but should focus more on taking part than on mastering academic skills/content. In my modules, these activities are called participation (and so, from here, I use participation as a synonym for engagement). In my first-year modules, participation is 20% of the portfolio mark, in order to instil good engagement practices from the start. From second year, it goes down to 10%. Appendix 1 below this post gives first- and final-year examples.

For the assessment-period portfolio submission, students submit a ‘participation record’ that indicates which activities they did during the term (first- and final-year examples in Appendix 2 below this post). (Not shown, but available on request: the Canvas information pages that make clear what each of the participation activities involves.)

Portfolio-friendly engagement activities

Engagement activities in the portfolio should:

  1.  Set clear expectations.
    Students should know what counts as participation and when their deadlines for it are.
  2.  Have a virtual paper trail.
    Anything on the participation record should be independently verifiable through Canvas or Sussex Direct. I.e. either the student should be submitting something to Canvas or the tutor should be counting something on one of those platforms.
  3.  Avoid any potential for bias.
    In particular, staff should not be grading students on the frequency or quality of contributions to seminar discussions, as our perceptions of who’s said what/how much are unreliable (and there is no paper trail).
  4.  Offer choice / be inclusive.
    Not all students can or will participate in the same ways. It should be possible to get a very good participation score without attending extra events or speaking in front of class.

And, of course, the module convenor should consider the workload their activities create for themselves—e.g. what expectations to set about feedback on these activities.

Potential engagement activities include:

  • doing assigned formative work for feedback
  • participating in activities in the classroom
    • e.g. quizzes on the week’s reading, unassessed presentations, writing up ‘minutes’ of seminars for posting on Canvas
  • reflecting on the teaching material or the process of learning
    • e.g. learning journals
  • engaging with tutors or peers outside the classroom
    • e.g. attending student hours, forming/attending study groups, contributing to Canvas discussions
  • doing extension activities beyond the classroom
    • e.g. attending research seminars, Skills Hub events
  • doing extra module work
    • e.g. taking online quizzes, doing supplementary assignments
  • attendance at teaching sessions.

No portfolio should try to include all of these! The nature of the subject, the level, the tutor’s workload, and the module learning outcomes (see below) should come into consideration.

Some of these are more about engaging with the subject or learning processes individually; others are about building community among the cohort—including the tutors. Some are controversial—many believe the last one in particular is undoable. So that one gets two further sections:

Assessing attendance: directly

I asked Sussex Academic Developer Sarah Watson to review the ‘legality’ of how I treat attendance in portfolio assessment. She wrote:

Currently, there is no University policy to say that we can’t grade attendance, though it is of course a contested issue due to cost of living, caring responsibilities, etc. With this in mind, it is recommended that students should not be penalised for not attending their lectures and seminars. However, you offset this by:

1. having attendance as only one aspect of the participation mark

2. allowing students to get the grade if they have informed you that they cannot attend

In other words, it’s OK to consider attendance as part of an engagement/participation mark because (1) students who fail to attend can ‘make up’ for poor attendance by doing more of other activities, and (2) ‘notified’ absences don’t count against anyone. Take the example of a student who attended 11/22 sessions (lecture and seminar) but emailed the tutor about each of the absences when they happened; that student would have a 100% attendance record (22/22). If the same student had not emailed the tutor, then they would have achieved 50% attendance. Emailing is certainly not the same as attending, but keeping in touch with the tutor at least shows continued engagement in the module while acknowledging that perfect attendance is often not possible.

In recent years, I have treated attendance as worth up to 10 or 20 participation marks (see appendices), relying on those percentages. In a class where attendance is worth 10, then, the 50% attender gets 5 points toward participation. Another way to do it is categorical marking: a set number of points for hitting a certain attendance threshold. Those who don’t meet that threshold will know they should make it up with other kinds of participation.
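
To make the arithmetic concrete, here is a minimal sketch of how an attendance score along these lines might be computed. It is illustrative only, not the marking code I use: the function names, the 10-point weighting and the 75% threshold in the categorical variant are assumptions for the example. The rules follow the description above: notified absences count as attended, and categorical marking awards the points only at or above a threshold.

```python
def attendance_points(attended, notified_absences, total_sessions, max_points=10):
    """Percentage-style marking: notified absences count as attended."""
    effective = min(attended + notified_absences, total_sessions)
    return max_points * effective / total_sessions

def attendance_points_categorical(attended, notified_absences, total_sessions,
                                  threshold=0.75, max_points=10):
    """Categorical marking (threshold is an assumed example value):
    full points at or above the threshold, otherwise none."""
    rate = min(attended + notified_absences, total_sessions) / total_sessions
    return max_points if rate >= threshold else 0

# The worked example above: 11 of 22 sessions attended, every absence
# notified, so the record counts as 22/22 and earns full points.
print(attendance_points(11, 11, 22))  # 10.0
# The same attendance without notifying the tutor earns 50% of the points.
print(attendance_points(11, 0, 22))   # 5.0
```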

Assessing attendance: indirectly

Another way to ensure attendance is to have participation activities that happen during class time. Our first-year modules have reading quizzes at each session, ‘played’ like pub quizzes in teams. Those quiz scores contribute to the portfolio mark. Zero scores resulting from notified absences are removed from the quiz average. (For what it’s worth, these quizzes are very popular; they are often requested in student evaluations of other modules.)

The maths

The total participation ‘points’ available should add up to at least 100, so they resemble a percentage mark that can easily be figured into the portfolio. (In some of my modules, students can get more than 100 participation points, and so some students’ marks are lifted considerably by participation.) 

Students are told to strive to do at least slightly better on their participation mark than they expect to do on the rest of the portfolio, so that the participation helps their grade.
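
As an illustration of how these numbers might fold together, the sketch below treats participation as a weighted component of the portfolio, using the 20% first-year weighting mentioned earlier. The weighted-average formula and the sample marks are assumptions for the example, not the module’s actual calculation; note how a participation total above 100 lifts the overall mark, as described above.

```python
def portfolio_mark(academic_mark, participation_points, participation_weight=0.20):
    """Combine the academic mark with participation points (which resemble a
    percentage). No cap is applied, so totals above 100 lift the overall mark."""
    return ((1 - participation_weight) * academic_mark
            + participation_weight * participation_points)

# A hypothetical first-year student: 62 on the academic work,
# 85 participation points -> participation pulls the mark up.
print(round(portfolio_mark(62, 85), 1))   # 66.6
# An especially engaged student with 110 points gets a bigger lift.
print(round(portfolio_mark(62, 110), 1))  # 71.6
```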

There is an aspect of ‘the rich get richer/the poor get poorer’. Students who are already well-organised and keen are the most likely to do the most participation work. Students who don’t engage enough to even know about the participation opportunities are likely to have their mark taken down further by lack of participation. But in the middle, I see students who might be struggling (whether with the material or with the social aspects of learning) putting themselves into a place where learning is more active and possible.

Incidentally, having a participation element in the module does not seem to result in rampant grade inflation. Average marks on my portfolio-assessed modules are in the low-mid 60s, like the marks for other modules I’ve taught.

Learning Outcomes/Resits

The portfolio as a whole must assess whether the student has achieved the module’s learning outcomes (LOs). But because portfolios submitted in the re-sit period generally cannot involve participation activities, no learning outcomes can explicitly demand engagement/participation.

So, I treat the participation element of the portfolio as another means of engaging with the content/skills LOs. That may be direct engagement (as when students submit the assigned formative work), supplemental (as when they go to events related to the module content or skills development), or indirect (as through attendance, where they get opportunities to develop and show learning).

Give it a try!

I am an evangelist for portfolio development, and I’d be happy to talk with any Sussex colleagues about their portfolio ideas. Contact me at m.l.murphy@sussex.ac.uk.

Appendices

Appendix 1

Appendix 2
