
Behold the Seminars: Reflections on Student Feedback

Maria Hadjimarkou is a Lecturer in Biological Psychology at the University of Sussex School of Psychology.  She is a Fellow of the Higher Education Academy and a member of the SEDA Community of Practice for Transitions. She has several years of experience in Higher Education in the UK and abroad. At Sussex, Maria is involved in activities that promote public awareness of the role of sleep in health and wellbeing and encourages her students to get involved in scholarship activities such as co-authoring articles on sleep and wellbeing for young readers.  

It is challenging for students to feel part of a community when they find themselves in a large amphitheatre among hundreds of other students. Convenors of large modules like mine acknowledge this as the downside of large cohorts. But here come the seminars! Based on student feedback, seminars can be instrumental in shifting things around.

A feedback survey was launched during the second and third weeks of the Spring term in a large second-year core module. The module consisted of weekly lectures and bi-weekly seminars that focused on features of the material delivered in a lecture the week before. For these seminars, the students were split into groups of between 20 and 50 students. The survey included module-specific questions but also tapped into various aspects of the student experience.

Based on students’ responses, it became clear that we should be investing more in our seminars as they have the potential to be transformative. The overwhelming majority (84%) of the students who took part reported that the seminars were a positive experience, and 74% reported that the seminars offered the best opportunity for them to interact with their peers and to develop a sense of community. As they pointed out, interacting with each other may not come naturally to some, but if the conditions are right, they will discuss ideas and experiences and feel connected.

In addition to encouraging peer interaction and a sense of community, seminars were identified by students as helpful in understanding the lecture material and gaining a broader understanding of the concepts covered in the lecture. They described the seminars as rewarding and interesting, using adjectives such as ‘great’, ‘fun’, ‘thought-provoking’, ‘inclusive’ and even ‘excellent’.

Of course, not all seminars are created equal, and this also came through in the survey, as students made references to other seminar experiences that were not useful or fun. So, it is up to us, the convenors, to structure seminars in a way that will foster interaction and inclusion. Good seminars have the potential to engage students and enhance their understanding of the lecture material as well as the wider context to which the material relates, such as how it may apply to society or a particular field. Moreover, seminars allow students to express themselves and interact with their peers in a relaxed environment. It has been argued that seminars may help to ‘level the playing field’ in the sense that they help reduce disparities for students who face disadvantages (Betton & Branston, 2022). In addition, attending seminars has been linked to better student performance (Betton & Branston, 2022; Marburger, 2001; Stanca, 2006).

Based on student feedback from this survey, successful seminars need to have a few key ingredients:

  1. Appropriate readings in terms of both quality and quantity: too much material or readings that are too difficult are likely to demotivate students and result in a negative experience or disengagement with the material altogether.
  2. Appropriate activities which students will find fun and at the same time interesting, as they get to explore material that is relevant to the lecture, and beyond.
  3. Approachable tutors. Their approach, energy and demeanour are crucial, and they can greatly influence the climate in the room and the degree to which students feel comfortable to participate or not.
  4. All the seminar components (i.e. structure, activities, etc.) should allow space and time for interaction in a relaxed environment, which is what ultimately makes seminars fundamentally different from lectures.

So, it seems that the humble 50-minute seminar may hold the key to many of the ‘plagues’ that we have been facing in Higher Education, especially following the Covid-19 pandemic and the general drop in student engagement. It is worth our time to plan and structure seminars carefully, as they may be the unsung hero of large cohorts such as mine. Moreover, feedback surveys are vital in helping us understand how students perceive our teaching approaches, so that we can adjust and steer our efforts towards more effective learning and a better teaching experience.


Assessment in a world of Generative AI: What might we lose?

Dr Verona Ní Drisceoil, Reader in Legal Education, Sussex Law School

Introduction

For the most part, assessment in higher education is viewed negatively rather than positively. It is something to be endured, worked through, marked and managed. Assessment causes significant anxiety and stress for students and staff alike. Amid dealing with a cost-of-living crisis and increased mental health challenges, students are under significant pressure to achieve a ‘good degree’ to be able to progress to further study or work. For teachers and professional service staff, the pressure to mark and process hundreds of submissions in short timeframes makes assessment an incredibly challenging period. In addition, most higher education institutions in the United Kingdom score poorly on assessment and feedback in the National Student Survey (NSS), making assessment a major headache for university management teams (see also Harkin et al, 2022). And on top of all of that, we now have generative AI to contend with and the challenges it brings for assessment.

In this short article, I want to offer some reflections, and provocations, on the framing of assessment in higher education. Specifically, in thinking about what might be lost in the new reality of generative AI, I propose that we should think about, and indeed frame, assessment from the perspective of empowerment. In this respect, I advocate for assessment as, and for, empowerment. In an ideal world our assessments, notwithstanding the pressures mentioned above, should empower students, or at least have the potential to empower. They should encourage our students to be authentic, to show agency, to grow in confidence, and to develop a range of transferable skills including, in particular, evaluative judgement (see case study assessment example). In this regard, assessment as empowerment as a conceptual frame builds on “assessment for learning” (Sambell, McDowell & Montgomery, 2013) and “assessment for social justice” (McArthur, 2023). Could assessment designed for empowerment, I ask, make life better for everyone: for teachers, for students and even our NSS scores?

The article will begin by reflecting on ‘assessment’ and ‘empowerment’ in a world with generative AI before then offering some initial thoughts on what assessment as, and for, empowerment might look like. I will conclude with some takeaway questions that might help us to think about assessment more meaningfully as we navigate the impact of generative AI on education and on life more generally. In contrast to the dominant narratives circulating on generative AI (on productivity, on efficiency, on saving time and making money), I question what might be lost in all of this in terms of humanity (see further Chamorro-Premuzic, 2023) and strongly encourage a deeper questioning and critique of generative AI and what it means for future generations. This does not mean I am a Luddite and afraid to engage with AI (yes, I know AI is here to stay!) but rather that I want to maintain a critical standpoint. I want to focus on value – and what matters in life and in education. How is AI changing our lives, values, and fundamental ways of being? Drawing on the work of Bearman et al. (2024:1), I too argue that in an educational context we have a collective responsibility to ensure that humans (our students) do not relinquish their roles as arbiters of quality and truth.

Reflecting on the value of assessment

Without question, the ever-expanding presence of generative AI has challenged us as teachers and educators. It has made us uncomfortable. We no longer, to quote Dave Cormier, have the same power in assessment. This is unsettling. The presence of generative AI forces us as teachers to reflect on our roles, on how we design teaching and how we design assessment. It forces us, if we take the opportunity, to self-assess and ask: what is the value of assessment? (See further McArthur, 2023; Cormier, 2023.) What do we value as teachers, and what skills do we want our students to achieve through assessment? How can we empower our students through assessment? This questioning, I argue, should not start with how to design an assessment that will beat generative AI. For me, that is not a pedagogically sound starting point. Moreover, it is completely pointless, as “the frontier moves so fast” (Dawson, 2024). I would encourage all teachers to stop for a moment and think about what they really want students in their module/discipline to leave university with. This is a great opportunity to really reflect on that question and to go back to basics, as it were. Not every module should, I argue, be trying to teach everything to, and with, generative AI in mind. That, for me, is a dangerous path to take. As academics and critical scholars – in universities – let us remain critical. Ask questions of the impact of using these models, of the impact on humanity and the impact on learning through process and doing. Challenge the status quo. Who is driving the AI narrative and why? Why should we care as educators?

Empowerment in Assessment

According to the Oxford English Dictionary, empowerment speaks to agency, autonomy, and confidence. It notes that empowerment is “the fact or action of acquiring more control over one’s life or circumstances.” To empower someone is “to give (a person) the means, ability, or strength to do something”; to enable them. For me, assessment should be about offering a space for growth, to develop skills through process, to feel empowered through doing. This concept of assessment for empowerment builds on the concept of assessment for learning mentioned earlier. For Sambell et al. (2013), assessment for learning is focused on the promotion of effective learning. They note that “what we assess and how we assess indicates what we value in learning – the subject content, skills, qualities and so on” (Sambell et al. 2013:8). Assessment, then, should promote positive and empowering messages about the type of learning we require (Sambell et al. 2013:11). It is focused on process and not just product.

As I write this piece, I am wondering whether one should even mention empowerment in the same sentence as generative AI. Perhaps one should. Some argue that using generative AI is empowering. Many have noted that generative AI helps you get started, builds confidence, and so on. Quite literally, these large language models put words on the page – and very quickly at that. It is hard not to be tempted by these tools. We have also heard that many students (36% in a survey of 1200 students) use generative AI as a personal tutor (Freeman, 2024). One might argue that there is agency and growth here. Perhaps. But is it on a superficial, artificial level? Is it a matter of degree? Does it matter? I think it does and should.

Yes, generative AI may save a great deal of time on a task: it produces text, putting words on the page in a way that speeds up the process. But then where is the learning by doing? Is the purpose of assessment, of higher education, now to support students to be able to use a generative language model with no authenticity and questionable accuracy? Generative AI platforms are not truth machines; they hallucinate. If we adopt a view and approach that positively embraces generative AI (allows use of generative AI to produce an output), we are valuing the product and not the process. Adopting such an approach does not guarantee that students will develop and enhance the higher-order thinking skills we should value so much in higher education. Chamorro-Premuzic (2023: 4) reminds us that generative AI could dramatically diminish our intellectual and social curiosity and discourage us from asking questions. We still need to teach our students key processes and knowledge and equip them with the ability and skills to critique, evaluate and judge outputs. As Bearman et al. (2024:1) note, “university graduates should be able to effectively deploy the disciplinary knowledge gained within their degrees to distinguish trustworthy insights from those that are ‘hallucinatory’ and incorrect”.

But generative AI levels the playing field…

I have some sympathy for the argument that generative AI helps “level the playing field” in higher education – specifically, that it helps students whose first language is not English to access and understand teaching materials and assessments. However, I am still unconvinced this is a sufficient rationale to positively embrace generative AI tools in teaching and assessment without deeper critique and questioning of what may be lost pedagogically. If anything, this framing highlights how poor we are at supporting students whose first language is not English in (UK) universities. It could be argued that embracing generative AI to the extent being advocated allows universities to shirk certain responsibilities. Does a positive embracing of generative AI allow us to gloss over areas where we have failed as a sector, e.g. supporting language proficiency in a meaningful way?

Assessment as Empowerment in a world with generative AI – some thoughts

So, what does assessment as empowerment look like in a world with generative AI? Is it even possible? I hope so. The following offers a few thoughts on where we might focus the discussion to achieve assessments that empower our students to continue to be arbiters of truth whilst developing a range of transferable and empowering skills:

  1. Value + programme-level assessment: I suggest we need to start this conversation from the perspective of value – what is the value of assessment – and then have joint, but critical, conversations at programme level. I am concerned about siloed responses to generative AI without wider department- and school-level conversations. For some departments and schools, it might be essential to embrace generative AI tools across teaching and in assessment (e.g. in computer science) but for others (e.g. law, my discipline), it is, I argue, essential that we now have programme-level conversations about the value of assessment and what skills, literacies and practices we want our students to take away from their degree experience. Should our students decide to go into legal practice, what skills would society wish our future lawyers to have? They will still need to be able to write well, formulate an argument, show attention to detail, detect accuracy and truth in text sources, orally convince, persuade, and advocate. If you have not formulated your own argument, it will never be as persuasive as one that is self-generated. Generative AI, by removing the process of formulating an argument, robs one of the opportunity to truly develop persuasive advocacy skills. I do not believe that wider society would wish legal advice to be generated by AI. And even if it were, they would want someone with the skills and evaluative judgement to know what is correct and not simply a hallucination. There is, to quote Bearman et al. (2024), an urgent need, in this new reality, to develop students’ evaluative judgement. Students need to be able to judge the quality of work of self and others.
  2. Oracy/oral-based assessments/mini-vivas: One type of assessment I would like to see much more engagement with is oral-based assessment. Perhaps now is the time to embrace that. Voice work, and the ability to present confidently, is such an important skill and one that I think we should place a much higher value on. We neglect this form of assessment in higher education in favour of written assessments. This, I argue, is also problematic in terms of helping all students to build social capital. Within my own law module, my move from a traditional ‘write a case-note law essay’ to an oral case briefing assignment (with a ‘you are a trainee solicitor’ positioning) was based on a desire to ensure all students in my module had the opportunity to develop voice work skills within the module and not just in extra-curricular activities, as can often be the case in law schools. I argue that keeping these activities as ‘extra’ results in only those in more privileged positions being able to take part. Students who work, commute, or care for others are often excluded from these activities. That needs to change. Now might be the chance to do so. For those who argue that this is not possible at scale, I disagree. I have been able to implement the development of voice work skills into a core module of 350+. Yes, it is a challenge, but with good module design and a good teaching team it can be achieved.
  3. In-person open-book exams: Whilst I appreciate there are valid arguments about the problematic nature of in-person exams in terms of stress and anxiety, and that they do not reflect real work practices, do in-person open-book exams offer a compromise and a way forward? Academic integrity scholars (see Philip Dawson, Cath Ellis) certainly argue that in-person exams should be in the mix of programme-level assessment. I wonder too if part of the resistance to going back to in-person assessment is more about cost than an absolute commitment to accessibility and inclusion. Reasonable adjustments can still apply to in-person assessments. They did before the pandemic and can again.
  4. Use more bespoke rubrics: Another response in the short term might be to add more bespoke marking rubrics for assessments. This might mean not adding any weighting for structure, grammar and syntax but adding a much higher weighting for other elements to reward process and engagement with materials, accuracy (on the law for example) and sources that are not merely artificial. I have used a bespoke weighted rubric in my module for the last two years and it works very well. Linked to points made above, there is also a section for evaluative judgement within that rubric. As part of the case briefing assessment, students are asked to provide an evaluative judgement post presentation as they would to their ‘supervising solicitor’ in a legal practice setting.

Takeaway questions

  1. What is the value of learning?
  2. What is the value of assessment?
  3. What skills, literacies and practices do you want your students to take away from your module and through your assessment?
  4. How can you design your assessment to empower your students; to help your students to be authentic, to grow and to develop confidence through doing and being?
  5. Think about process not just product.
  6. What role can evaluative judgment play in your teaching and assessment design?

Conclusion

For me, it is essential that universities and educators continue to approach the question and impact of generative AI in education from a critical standpoint. Let us, I advocate, remain critical. Ask questions. Challenge. What might be lost? Who is driving the AI narrative, and why? Why should we care as educators? What are we complicit in? As I have stated above, always consider the structures within which these shifts are happening. The dominant narrative and retort of “AI is here to stay so get over it” is not enough of a reason not to ask questions about what might be lost here. It is appreciated that universities are businesses too, and it might seem neglectful, within a business model, not to approach this debate from the perspective of ‘let us not be left behind’, but we have a collective responsibility to be more critical, to question and to challenge the narrative. Given the rate of change in this area, it is incumbent on us as educators to ask what might be lost in terms of truth, justice, humanity, and real connection. As Esther Perel reminds us, the AI we should be most concerned with is artificial intimacy – the lack of real connection and authenticity in how we show up in the world because of the negative impacts of technology. In a world of hyper-connectivity, we are often not connected at all. I can, as I am sure others can, attest to the negative impact technology has had on my life in terms of being present and meaningfully connected. At a time in higher education when mental health issues are at an all-time high, and confidence, community and belonging are at a low point post-pandemic, is a world with generative AI, I ask, going to have a positive or negative impact on connections, relationships, authenticity, truth, and humanity? We are, as human beings, wired for real connection – and hopefully authenticity as well, with all the flaws and vulnerability that come with that. We are not robots. If we embrace generative AI to the extent that is being encouraged by corporations, influencers and even universities, what might we lose?

Resources

Ajjawi, R., et al. (2023). From authentic assessment to authenticity in assessment: Broadening perspectives. Assessment and Evaluation in Higher Education, 1-12. [Online].

Arnold, L. (n.d.). Retrieved from https://lydia-arnold.com/

BBC News (2023, May 27). ChatGPT: US lawyer admits using AI for case research. Retrieved from https://www.bbc.co.uk/news/world-us-canada-65735769#

Bearman, M., Tai, J., Dawson, P., Boud, D., & Ajjawi, R. (2024). Developing evaluative judgement for a time of generative artificial intelligence. Assessment & Evaluation in Higher Education, 1–13. Retrieved from https://doi.org/10.1080/02602938.2024.2335321

Chamorro-Premuzic, T. (2023). I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique. Harvard Business Review Press.

Compton, M. (2024, 16 April) Nuancing the discussions around GenAI in HE. Retrieved from https://mcompton.uk/ 

Cormier, D. (2023) ‘10 Minute Chat on Generative AI’. Retrieved from https://vimeo.com/866563584 [See series with Tim Fawns: https://www.monash.edu/learning-teaching/TeachHQ/Teaching-practices/artificial-intelligence/10-minute-chats-on-generative-ai]

Dawson, P. (n.d.). How to Stop Cheating. Retrieved from https://youtu.be/LNcuAmDP2cQ

Dawson, P. (2021) Defending Assessment Security in a Digital World. Routledge.

Freeman, J. (2024, February 1). Provide or punish? Students’ views on generative AI in higher education (HEPI Policy Note 51).

Harkin et al. (2022). Student experiences of assessment and feedback in the National Student Survey: An analysis of student written responses with pedagogical implications. International Journal of Management and Applied Research, 9(2). Retrieved from https://ijmar.org/v9n2/22-006.pdf

McArthur, J. (2018). Assessment for Social Justice: Perspectives and Practices Within Higher Education. Bloomsbury Publishing Plc.

McArthur, J. (2023). Rethinking authentic assessment: Work, well-being, and society. Higher Education, 85(1), 85.

Tai, J., et al. (2023). Assessment for inclusion: Rethinking contemporary strategies in assessment design. Higher Education Research and Development, 42, 483.


The Evidence-Informed-Teaching Infographics Project: a novel and engaging way to communicate scholarship

Sue Robbins is Senior Lecturer in English Language and Director of Continuing Professional Development in the School of Media, Arts and Humanities.

Evidence-informed Teaching

Whereas research-informed teaching refers to the different ways in which students are exposed to research content and activity during their time at university, evidence-informed teaching refers to the teaching practices that research has shown will have the greatest impact on student learning.

Evidence-based practice is an approach that focuses practitioner attention on the use of empirical evidence in professional decision-making and action. As teaching practitioners, we draw on a range of sources of teaching knowledge, amassed over time. Evidence-informed teaching involves bringing together research from the Scholarship of Teaching and Learning (SoTL) with context and experience to see what works for us and for our learners.

The way evidence works to inform teaching and learning may not always be straightforward, perhaps because learning is the result of such a huge number of interactions, and research findings may sometimes be difficult to implement because it is unclear how to transfer the skills and expertise of teaching. But being familiar with relevant research evidence can help us think about the methodology we select to underpin the design of a module and help students achieve the learning outcomes, including theories of learning, our choice of materials, and classroom procedures. There is useful information for Sussex colleagues on Educational Enhancement’s SoTL webpage.

It is also the case that claims for the efficacy of a particular practice can sometimes be made without a broad enough evidence base, leading to the overgeneralising of concepts and ideas. The ‘Flipped learning remains under-theorised’ infographic synthesises the findings from a scoping review which found that, despite the rapid growth in the number of articles about flipped learning, most of them failed to elaborate theoretical perspectives, making an analysis of its efficacy difficult.

The Evidence-informed-teaching Infographics Project

The Evidence-Informed-Teaching Infographics Project is a collaborative scholarship project designed to create a set of infographics which synthesise SoTL research in an attempt to bridge the research-practitioner divide. Sussex colleagues can be part of this cumulative knowledge-building project – and add to the shared base of evidence that can positively impact student learning – by contributing an infographic summary of a journal article that relates to their own interests or teaching practice.

The project began in the School of Media Arts and Humanities and is now expanding to all Schools. The infographics completed so far have been published on the MAH Scholarship blog, and other publishing opportunities will be available as the project expands, offering you an audience for your scholarship.

Reasons to join in

Evidence-informed faculty can make significant contributions to learning, teaching, assessment, and scholarship in their Schools and institutions. A recent post on the WONKHE blog makes the point that ‘robustly evidence-informed education is fundamental to supporting the development of ethical, sustainable and inclusive pedagogies to support learners.’

The Evidence-Informed-Teaching Infographics Project can play to your interests, wherever they lie. You might take a key journal article to summarise because you’ve noticed something in your own practice or teaching context that you’d like to know a bit more about; or you’ve been doing something for a while and want to see what current research says about it. Or it could be that you begin with the literature and identify something that you’d like to try out in your teaching or use it to adjust something you have been doing to better reflect research findings.

Summarising the research on an aspect of teaching and learning can help you distil your thinking, and sharing the infographic with colleagues to inform their practice is a generous way of passing on knowledge. The activity is manageable in size and can be a good way to find out more about teaching-related research, both through creating your own infographic and by reading those created by colleagues.

Keogh et al. (2024), writing about their own project designed to share health research with the public, comment on the huge potential of infographics to communicate SoTL to various stakeholders as summarised in this infographic:

10 Ways infographics can support scholarship of teaching and learning

  1. Visualizing Data: Present data from studies, surveys, or assessments to visually represent trends, patterns, and statistics.
  2. Summarizing Research: Display concise and engaging research summaries to highlight key points and takeaways.
  3. Explaining Theories: Break down complex pedagogical theories and concepts into visually appealing, understandable elements.
  4. Sharing Best Practices: Provide practical tips and best practices based on research findings.
  5. Comparing Teaching & Learning: Compare different teaching and learning approaches, outlining the pros and cons of each.
  6. Promoting Reflection: Present data on student outcomes or feedback to help instructors assess their teaching and make data-driven improvements.
  7. Communicating Professional Development: Provide teachers with concise and memorable takeaways from professional development activities.
  8. Disseminating Scholarship: Share on social media platforms and websites to reach broader audiences.
  9. Supporting Research Proposals: Use in grant proposals to enhance the readability and visual appeal of the project and expected outcomes.
  10. Engaging Students: Integrate into classroom instruction to engage students visually and enhance their understanding of complex topics.

Creating and sharing your infographic

To create the infographics, colleagues in MAH have used the design tool Canva, which offers free access to educators. Canva offers a huge range of editable templates and when you have synthesised the key points of your article and can see how many sections you need, you can pick an appropriate one and copy/paste content into it. It’s worth noting that not all of the Canva templates meet the expected accessibility requirements, but the Educational Enhancement team are happy to advise.

Get in touch

It’s a great project to be involved with. Get in touch with Sue Robbins, Senior Lecturer, Department of Languages, MAH, if you are interested – S.Robbins@sussex.ac.uk – or with Sarah Watson, Academic Developer – Sarah.Watson@sussex.ac.uk. All welcome!

References

Black, K. (2024). Doing academic careers differently. Retrieved from https://wonkhe.com/blogs/doing-academic-careers-differently/

Educational Enhancement, University of Sussex (n.d.). Scholarship of Teaching and Learning. Retrieved from https://staff.sussex.ac.uk/teaching/scholarship-of-teaching

Keogh, B., Nowell, L., Laios, E., McKendrick-Calder, L., Lucas Molitor, W., and Wilbur, K. (2024) ‘Using Infographics to Go Public With SoTL’. Teaching and Learning Inquiry, 12 (March). Retrieved from https://journalhosting.ucalgary.ca/index.php/TLI/article/view/78078

Robbins, S. (2023). Flipped Learning Remains Under-theorised. Retrieved from https://blogs.sussex.ac.uk/mah/2023/04/24/flipped-learning-remains-under-theorised/

School of Media, Arts and Humanities, University of Sussex (n.d.) Scholarship in Media Arts & Humanities. Retrieved from https://blogs.sussex.ac.uk/mah


Encouraging attendance and engagement through portfolio assessment

Lynne Murphy is Professor of Linguistics at the University of Sussex. Her research and teaching concerns lexicology, lexical semantics, and pragmatics, and transatlantic Englishes. Twice a National Endowment for the Humanities Public Scholar, she is currently writing a book about small words.

A classroom filled with prepared students, interested in the subject and eager to talk with each other about the subject. This is the goal. This is a classroom in which people learn. This is a classroom in which teachers enjoy teaching.

But these days, it feels like the world is conspiring against such classrooms. Standing in the way are the cost-of-living crisis, the mental-health crisis, the perennial academic-timetabling crisis, not to mention the after-effects of pandemic lockdowns. Students, for the most part, want to participate in their learning—but it’s easy for reading, attendance, and course enrichment activities to fall down the list of priorities behind pulling a work shift, saving a bus fare, or staying under a warm duvet.

We tell students that active engagement in their course will be worth their while, but the evidence we give for that claim is fuzzy. The rewards of engagement are not necessarily immediate or immediately perceptible. And it’s easy to understand how the intention to participate falls down. I know that physical exercise will be worth my while, but it’s a slog. It’s also hard to know where to start (weights? cardio? flexibility?). So, it goes lower and lower on my to-do list while I do the easier things first.

My solution for the exercise problem is to make myself accountable to others: to book a spot on a scheduled class or arrange a walk with a friend. The exercise gets crossed off the list. And that’s what I try to do for students: to make the individual rewards of course engagement more concrete, so it goes up the to-do list. Then the whole class benefits from an engaged studentship. Portfolio assessment makes this very doable.

What is a portfolio?

A portfolio is a collection of work (i.e. more than one piece), related to a theme (i.e. the module topic), produced over a period of time (i.e. the semester) (University Modes of Assessment).

The key here is that the contents of a portfolio are not prescribed. The types of work involved can vary across modules. A portfolio for one module might involve learning journals and a podcast. For another it might be multiple drafts of an essay or two. How portfolios are assessed can vary too.

Incorporating engagement into the portfolio

Of course, the main part of a portfolio must be academic work that tests the learning outcomes of the module. Engagement activities should relate to these learning outcomes as well, but should focus more on taking part than on mastering academic skills/content. In my modules, these activities are called participation (and so, from here on, I use participation as a synonym for engagement). In my first-year modules, participation is 20% of the portfolio mark, in order to instil good engagement practices from the start. From second year, it goes down to 10%. Appendix 1 below this post gives first- and final-year examples.

For the assessment-period portfolio submission, students submit a ‘participation record’ that indicates which activities they did during the term (first- and final-year examples in Appendix 2 below this post). (Not shown, but available on request: the Canvas information pages that make clear what each of the participation activities involves.)

Portfolio-friendly engagement activities

Engagement activities in the portfolio should:

  1.  Set clear expectations.
    Students should know what counts as participation and when their deadlines for it are.
  2.  Have a virtual paper trail.
    Anything on the participation record should be independently verifiable through Canvas or Sussex Direct. That is, either the student should be submitting something to Canvas or the tutor should be counting something on one of those platforms.
  3.  Avoid any potential for bias.
    In particular, staff should not be grading students on the frequency or quality of contributions to seminar discussions, as our perceptions of who’s said what/how much are unreliable (and there is no paper trail).
  4.  Offer choice / be inclusive.
    Not all students can or will participate in the same ways. It should be possible to get a very good participation score without attending extra events or speaking in front of class.

And, of course, the module convenor should consider the workload their activities create for themselves—e.g. what expectations to set about feedback on these activities.

Potential engagement activities include:

  • doing assigned formative work for feedback
  • participating in activities in the classroom
    • e.g. quizzes on the week’s reading, unassessed presentations, writing up ‘minutes’ of seminars for posting on Canvas
  • reflecting on the teaching material or the process of learning
    • e.g. learning journals
  • engaging with tutors or peers outside the classroom
    • e.g. attending student hours, forming/attending study groups, contributing to Canvas discussions
  • doing extension activities beyond the classroom
    • e.g. attending research seminars, Skills Hub events
  • doing extra module work
    • e.g. taking online quizzes, doing supplementary assignments
  • attendance at teaching sessions.

No portfolio should try to include all of these! The nature of the subject, the level, the tutor’s workload, and the module learning outcomes (see below) should come into consideration.

Some of these are more about engaging with the subject or learning processes individually; others are about building community among the cohort—including the tutors. Some are controversial—many believe the last one in particular is undoable. So that one gets two further sections:

Assessing attendance: directly

I asked Sussex Academic Developer Sarah Watson to review the ‘legality’ of how I treat attendance in portfolio assessment. She wrote:

Currently, there is no University policy to say that we can’t grade attendance, though it is of course a contested issue due to cost of living, caring responsibilities etc. With this in mind, it is recommended that students should not be penalised for not attending their lectures and seminars. However, you offset this by:

1. having attendance as only one aspect of the participation mark

2. allowing students to get the grade if they have informed you that they cannot attend

In other words, it’s OK to consider attendance as part of an engagement/participation mark because (1) students who fail to attend can ‘make up’ for poor attendance by doing more of other activities, and (2) ‘notified’ absences don’t count against anyone. Take the example of a student who attended 11/22 sessions (lecture and seminar) but emailed the tutor about each of the absences when they happened; that student would have a 100% attendance record (22/22). If the same student had not emailed the tutor, then they would have achieved 50% attendance. Emailing is certainly not the same as attending, but keeping in touch with the tutor at least shows continued engagement in the module while acknowledging that perfect attendance is often not possible.

In recent years, I have treated attendance as up to 10 or 20 participation marks (see appendices), relying on those percentages. In a class where it’s worth 10, then, the 50% attender gets 5 points toward participation. Another way to do it is to do categorical marking: with a certain number of points for hitting a certain attendance threshold. Those who don’t meet that threshold will know they should make it up with other kinds of participation.
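To make the arithmetic concrete, here is a minimal sketch of the two approaches just described. The function names and the 75% threshold in the categorical version are my own illustrative choices, not figures from my modules:

```python
# Sketch of the two attendance-marking approaches described above.
# Notified absences count as attendance, in line with the policy quoted earlier.

def proportional_points(attended, notified, total, max_points=10):
    """Attendance worth up to max_points, scaled by the attendance record."""
    return max_points * (attended + notified) / total

def categorical_points(attended, notified, total, threshold=0.75, points=10):
    """All-or-nothing: full points for meeting the threshold, otherwise zero."""
    return points if (attended + notified) / total >= threshold else 0

print(proportional_points(11, 0, 22))   # 5.0  - the 50% attender gets 5 of 10 points
print(proportional_points(11, 11, 22))  # 10.0 - notified absences restore a 100% record
print(categorical_points(11, 0, 22))    # 0    - below the (illustrative) 75% threshold
```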

Assessing attendance: indirectly

Another way to ensure attendance is to have participation activities that happen during class time. Our first-year modules have reading quizzes at each session, ‘played’ like pub quizzes in teams. Those quiz scores contribute to the portfolio mark. Zero scores resulting from notified absences are removed from the quiz average. (For what it’s worth, these quizzes are very popular; they are often requested in student evaluations of other modules.)
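As a sketch of that adjustment (with invented scores), dropping notified-absence zeros before averaging might look like this:

```python
# Six reading-quiz scores across the term; zeros are missed quizzes.
scores = [8, 0, 7, 9, 0, 6]
notified = {1}  # only the absence at index 1 was notified to the tutor

# Remove zeros caused by notified absences; other zeros still count.
counted = [s for i, s in enumerate(scores) if not (s == 0 and i in notified)]
print(sum(counted) / len(counted))  # 6.0, rather than 5.0 over all six quizzes
```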

The maths

The total participation ‘points’ available should add up to at least 100, so they resemble a percentage mark that can easily be figured into the portfolio. (In some of my modules, students can get more than 100 participation points, and so some students’ marks are lifted considerably by participation.) 

Students are told to strive to do at least slightly better on their participation mark than they expect to do the rest of the portfolio, so that the participation helps their grade.
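A minimal sketch of how these numbers might combine, assuming a first-year module where participation is worth 20% of the portfolio (the marks here are invented):

```python
def portfolio_mark(participation_points, academic_mark, participation_pct=20):
    """Weighted portfolio mark; participation points resemble a percentage."""
    return (participation_pct * participation_points
            + (100 - participation_pct) * academic_mark) / 100

# A student with 85 participation points and 62 on the academic work:
print(portfolio_mark(85, 62))   # 66.6 - participation pulls the mark up
# Where more than 100 points are available, participation lifts marks further:
print(portfolio_mark(110, 62))  # 71.6
```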

There is an aspect of ‘the rich get richer/the poor get poorer’. Students who are already well-organised and keen are the most likely to do the most participation work. Students who don’t engage enough to even know about the participation opportunities are likely to have their mark taken down further by lack of participation. But in the middle, I see students who might be struggling (whether with the material or with the social aspects of learning) putting themselves into a place where learning is more active and possible.

Incidentally, having a participation element in the module does not seem to result in rampant grade inflation. Average marks on my portfolio-assessed modules are in the low-mid 60s, like the marks for other modules I’ve taught.

Learning Outcomes/Resits

The portfolio as a whole must assess whether the student has achieved the module’s learning outcomes (LOs). But because portfolios submitted in the re-sit period generally cannot involve participation activities, no learning outcomes can explicitly demand engagement/participation.

So, I treat the participation element of the portfolio as being other means of engaging with the content/skills LOs. That may be direct engagement with it (as when students submit the assigned formative work), supplemental (as when they go to events related to the module content or skills development), or indirect (as through attendance, where they get opportunities to develop and show learning).

Give it a try!

I am an evangelist for portfolio development, and I’d be happy to talk with any Sussex colleagues about their portfolio ideas. Contact me at m.l.murphy@sussex.ac.uk.

Appendices

Appendix 1

Appendix 2

Tagged with: , ,
Posted in Blog

Developing a feedback policy for Life Science

Dr Joanna Richardson

Dr Joanna Richardson, Senior Lecturer in Biochemistry, explains how the School of Life Sciences developed and implemented its feedback policy.  

What we did: 

Over the summer of 2021 I led a working group in Life Sciences tasked with developing a new School assessment and feedback policy. This was in response to student feedback, via module evaluations and NSS responses, that assessment and feedback needed improvement in the School. While the University of Sussex has a marking, moderation and feedback policy, the principles set out within it are necessarily broad. Having a school policy would, we hoped, provide a guide for staff on how to effectively translate policy into action by setting out more detailed principles for staff in Life Sciences and, therefore, help to improve feedback clarity and consistency for students and staff.

How we did it: 

The School didn’t have its own assessment and feedback policy, so I had to start from scratch. Having brought together a working group,* I convened a few initial meetings to discuss and agree the purpose of the policy, i.e. the problems the policy sought to address, and what should be included within it.

I also looked at examples of practice in other schools and universities and teased out common themes. I also dipped into the pedagogic literature, notably the HEA feedback toolkit, and the National Forum for the Enhancement of Teaching and Learning in Higher Education. I took a lot of inspiration from the University of Sussex Business School because they had worked with student Connectors to develop clear assessment criteria. 

I then drafted a policy for discussion and further refinement with the working group, boards of study (department) meetings, the School education committee and, importantly, some of our students. Happily, the proposals were, bar a few suggestions for refinement, well received and the resulting policy articulates and supports three key principles: 

  1. Transparency: Clear communication of practices to staff and students
  2. Consistency: Practices are applied consistently and fairly across the School
  3. Relevance: Students can use assessment and feedback for effective learning

The policy also clarifies the expectation that feedback should refer to the qualitative adjectives used in the assessment criteria (e.g. ‘Good’), and that it should be constructive. It also provides an indication of reasonable length for written feedback (100-200 words).

Finally, to help the translation of policy into action, the key points of the policy (including a link to the whole document) are articulated on our staff-facing web page. We also created student guidance, which is linked to from all Canvas sites via our School template.

Impact and feedback 

One of the key action points for staff was to use a 3-point “feedback template” when writing feedback for students, to standardise and simplify feedback. In response to staff comments, this was subsequently reduced to a 2-point template:

  • Areas in which you did well in this assessment
  • Areas in which you did less well in this assessment, and guidance for improvement

The external examiners have commented very favourably on the use of the template as an example of good practice, and our NSS scores in this area have improved.

I think it’s important that policies like this one are responsive and can change. Of course, ensuring consistency in the application of any policy can be a challenge. Nevertheless, staff have appreciated clear guidance on the content and quantity of feedback, including the realization that it isn’t necessary to write reams of text to provide effective feedback. This has saved some markers time and is especially useful guidance for new members of faculty and our doctoral tutors.   

Top Tips  

  • Make sure your policy isn’t left on the digital shelf – it has to be visible, findable, publicised, accessible and responsive to feedback.
  • When developing such a policy, convene a group of experts (people with significant teaching and assessment experience) and look at what other institutions are doing (your Academic Developer can also help with this!) 
  • Consult students at every stage. 

* This working group included Joanna Richardson (Biochemistry, lead), Dr Jenna Macciochi (Biochemistry/BMS), Dr Zahid Pranjol (BMS/DoSE), Dr Louise Newnham (Genome), Dr Camilla Tornoe (Neuroscience), Dr Valentina Scarponi (EBE), Dr Shane Lo Fan Hin (Chemistry), Dr Claire May (Pharmacy), Dr Lorraine Smith (Foundation), Dr Christina Magkoufopoulou (TEL), and Ms Amy Horwood (Deputy School Administrator) 



Sussex Education Festival, 10-11 July 2024 – Call for Participation

Sarah Watson, Mellow Sadik and Kelly Coate at last year’s Education Festival

Following the success of last year’s inaugural Education Festival, we’re excited to announce a Call for Participation for the Sussex Education Festival 2024. Hosted over two days (10th July in person and 11th July online), the festival will provide a space for colleagues from across the University to share their experiences, insights and innovation in teaching, learning and assessment.  

The festival will have the three drivers of change from the upcoming Sussex 2035 strategy as its core themes: Human Flourishing, Environmental Sustainability and Digital and Data Futures. We have a variety of presentation and discussion formats to choose from, and we particularly encourage presentations co-delivered with students. We have some student participation vouchers we can offer – please get in touch with EE for more information.

The three core themes of the festival can be interpreted as broadly as possible. Some suggestions for potential topics could be: 

Human Flourishing:  

  • Building student belonging 
  • Supporting inclusive learning communities 
  • Social justice pedagogies, decolonising the curriculum 
  • Student creativity and self-expression 
  • Student resilience and wellbeing 

Sustainability: 

  • Education for Sustainable Development 
  • Authentic assessment and feedback literacy 
  • Community engagement and co-creation 
  • Pedagogies of hope 
  • Learning through the landscape

Digital and Data Futures: 

  • Generative AI in teaching and assessment 
  • Digital innovations in teaching and learning 
  • Embedding learning technologies  
  • Accessible and inclusive online teaching 

We’re excited to celebrate and reflect on all the amazing work that goes into teaching, learning and assessment here at Sussex. We hope the festival will appeal to colleagues who would like to share their experiences and reflections at any stage of their projects. To reflect that aim, we’re asking for contributions in a variety of formats. 

Choose your format: 

Work-in-progress lightning talks will last 7 minutes, providing short reflections on current practice, or a pedagogic development you would like to make.  

The 30-minute interactive sessions can be run in any way you’d like; they could be used to demonstrate a new tool or teaching technique, or workshop an idea or challenge with fellow colleagues interested in teaching and learning. 

The 60-minute facilitation slots are open for colleagues to suggest longer workshops and discussions on a dedicated topic. Do you have a ‘wicked problem’ within teaching and learning you’d like to dissect with colleagues in a solution room, a provocation to push our boundaries and thinking on current practice, or a wider theme you would like to explore through a global café? Please let us know on the CFP form – the Academic Developers would be very happy to help you plan and facilitate a session.

Any of these sessions can be presented in person or online. Please note that the majority of the in-person content will take place at the Student Centre on Wednesday 10th July, and the online content will be hosted on Zoom on Thursday 11th July.  

Please submit your ideas through the Call for Participation Form by Friday 19th April. If you would like a document version of the CFP, or if you have any questions, please contact the team.


Closing the ‘feedback gap’ for international postgraduate students: An embedded writing approach

Dr Martin Brown


Dr Brown began his teaching career in outdoor education in the 1980s before moving on to teach Geography at secondary school level. In 2004, he left the classroom to become an Educational Advisor for Learning and Teaching Scotland (now Education Scotland), collaborating with various national stakeholders. From 2004 to 2010, he served as a National Assessor for both the General Teaching Council for Scotland and the Chartered Teacher Programme. Since 2020, Dr Brown has been teaching international students in ESW.

Since 2020, I have developed two courses of Academic Skills Support for postgraduate international students. In the Autumn Term, I teach essay writing, and in the Spring Term, I teach dissertation writing. My aim was to close an identified attainment gap in three MA programmes: International Education and Development (MAIED), Education (MAED) and Childhood and Youth (MACY). These courses are ‘embedded’ in so far as they focus on Social Science research, theories, and concepts, and aim to demystify essay writing for assessment by aligning seminar teaching with national learning outcomes, local assessment criteria and tutor feedback. I model different types of language and student voice in essays, portfolio submissions, and dissertations. Over ten weeks, I endeavour to build relationships through conversations about writing. I focus my efforts on developing conversations from free writing activities, and I offer technical solutions for improving academic writing in a Social Sciences genre. In this space, I aim to move beyond teaching with slides; instead, I focus on peer conversation, student writing, tutor observation, and tutor feedback.

In my writing workshops, I’m constantly aware of a sort of chronic anxiety among international students. Occasionally I think this type of stress is culture shock, but more often it is, in my view, due to “a lack of predictive information” (Sapolsky, 2004). There is an assumption that postgraduate students have fewer problems in negotiating the learning environment in a university, but this is not true for many international students or older students (like me) who return to study after many years of absence (Brown, 2014). Postgraduate students may find difficulty where there is a lack of clarity or consistency in assessment guidelines, or a lack of alignment between learning outcomes, assessment rubrics, and assignment titles (Evans, 2013). My workshops are built on conversations with international students, and we talk about their experience of writing for assessment. Critically, my feedback to them includes my reflections on marking student submissions and the most common problems I see. These feedback exchanges influence my seminar planning and allow me to improve formative writing activities that are designed to address common problems.

One of these common problems is accidental plagiarism, otherwise described as weak paraphrasing and poor citation, although often referred to by markers as a lack of author voice. For me, this problem often demonstrates weak English language skills, but it can also reflect a lack of self-confidence to offer personal opinions. Accidental plagiarism is most often rooted in excessive description and a failure to move from description to discussion, and unfortunately occurs where students are completely unaware that they have adopted another author’s voice. I offer a space to discuss this problem and to strengthen the international student’s voice through reflective writing. I focus on improving student self-confidence by creating a space to discuss ideas of self and positionality. In other words, I address a known problem: that most postgraduate students do not know how to describe themselves or their opinions in an essay. Moreover, they do not know how to reflect on their identity, or justify their beliefs and opinions (Holmes, 2020). To help students to think, talk, and write about themselves, I encourage regular free writing because it removes barriers and fosters self-expression and discovery.

These embedded writing courses give me the opportunity to clarify or demystify assignment guidance for students and narrow “the feedback gap” (Evans, 2013). This is the gap between what tutors say or understand about the feedback they offer and what students say or understand about the feedback they have been given. I do this by describing specific assessment criteria and by interpreting the advice tutors offer about their assignment titles. Once essay assignments have been marked by tutors, I interpret the summative feedback students are given by their assessors. Additionally, I advise students on how to edit their essays for resubmission. These courses also give me the space to offer empathetic and directive solutions to a variety of technical problems, such as: how to write a meaningful essay title, how to structure an essay, how to signpost an argument or citation, how to separate the broad context of an essay from the field of research in the literature review (a problem of breadth), how to demonstrate critical and reflective thinking in a paragraph, how to speak to the marker and when to use ‘I’, and how to edit an essay for better internal alignment and flow.

Good teaching is labour intensive in so far as it is about collecting and interpreting evidence to inform teaching and feedback exchanges. However, it is also important to create activities that challenge students. To that end, in my writing courses, I offer both cognitive and socio-constructive approaches, with clear instructions, creative tasks, and contextual feedback (Evans, 2013; Brown, 2014). This feedback should be about four things (but not all at the same time): explaining how to begin and complete a task, activity, or assignment; describing how the student can look forward to the next task and proceed confidently; giving technical advice about ‘self-regulation’ or metacognitive skills; and describing aspects of ‘self’ in terms of the attributes or capacities of the individual student in relation to the task, activity or assignment (Evans, 2013, p. 72). In this way students can become less stressed about assessment and more confident in their ability to reflect on their learning. In other words, tutor and peer feedback is primarily “a crucial way to facilitate students’ development as independent learners who are able to monitor and regulate their own learning” (Ferguson, 2011, in Evans, 2013, p. 72). Overall, an embedded writing approach reduces student academic workload, lowers their stress levels and strengthens their self-efficacy.

References:

Black, P. and Wiliam, D. (1998) ‘Assessment and Classroom Learning’, Assessment in Education: Principles, Policy & Practice, vol 5 no 1, 7-74.

Brown, S. (2014) ‘What are the perceived differences between assessing at master’s level and undergraduate level assessment? Some findings from an NTFS-funded project’, Innovations in Education and Teaching International, vol 51 no 3, 265-276. https://doi.org/10.1080/14703297.2013.796713

Evans, C. (2013) ‘Making sense of feedback in Higher Education’, Review of Educational Research, vol 83 no 1, 70-120. https://www.jstor.org/stable/41812119

Holmes, A. (2020) ‘Researcher Positionality: A Consideration of Its Influence and Place in Qualitative Research: A New Researcher Guide’, International Journal of Education, vol 8 no 4, 1-10. https://doi.org/10.34293/

Sapolsky, R. M. (2004) Why Zebras Don’t Get Ulcers, 3rd edition, New York: St Martin’s Press.


Strategies for making quantitative assessments more AI-resilient

Dr Myrna Hennequin

In this case study, Dr Myrna Hennequin, Lecturer in Economics, shares her strategies for making online quantitative assessments more AI-resilient.

What I did  

I redesigned the online assessments for the module Quantitative Methods for Business and Economics: Maths (QMBE A). This introductory maths module for Foundation Year students is assessed by several Canvas quizzes. My goal was to create computer-markable exams that are varied, balanced and more resilient against academic misconduct through AI or collusion. 

Why I did it   

Creating effective online assessments for quantitative modules poses several challenges. With the rapid rise of AI tools such as ChatGPT, I was concerned about cheating through the use of AI. At the same time, timetabling issues meant that the window during which the in-semester tests were available had to be extended from one hour to four hours, which increased the opportunity for collusion.

In response to these issues, I aimed to make the quizzes more resilient. As the current free version of ChatGPT (GPT-3.5) does not allow for uploading images, I ensured that each quiz included one or more questions involving graphs.
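
As a concrete illustration of the graph strategy, here is a minimal sketch in Python (an invented example, not the module’s actual materials): it generates a randomised straight line whose slope and intercept the student must read off the image, which a text-only AI tool cannot see.

    # Minimal sketch (invented example): generate a randomised graph question.
    import random
    import numpy as np
    import matplotlib.pyplot as plt

    slope = random.choice([-3, -2, -1, 1, 2, 3])
    intercept = random.randint(1, 10)

    x = np.linspace(0, 10, 100)
    plt.plot(x, slope * x + intercept)
    plt.xlabel("x")
    plt.ylabel("y")
    plt.title("What are the slope and intercept of this line?")
    plt.savefig("quiz_graph.png")  # image to embed in the quiz question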

I also took advantage of Canvas’s built-in maths editor, which puts formulas in LaTeX format. It is currently not possible to directly copy these formulas into ChatGPT, which deters students from quickly copying an entire question including formulas.
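
As a hypothetical illustration (the function below is invented), a formula entered through the maths editor is held as LaTeX along these lines, but rendered in the quiz in a form that cannot simply be highlighted and pasted elsewhere:

    % Hypothetical quiz formula as entered in the maths editor:
    % "Differentiate the following function with respect to x."
    f(x) = 4x^{3} - \frac{2}{x^{2}} + \sqrt{5x}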

Another strategy I use is randomisation. Canvas allows for randomising questions as a whole by using question groups or setting up numerical questions with random numbers using formula questions. To prevent collusion, I ensured that nearly all quiz questions were randomised in some way.
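
The sketch below shows the logic of such a randomised numerical question in plain Python (an invented example, not Canvas’s own implementation): every student sees different numbers drawn from the same template, and the correct answer is computed from those numbers, so sharing answers is pointless.

    import random

    def make_variant():
        """Generate one randomised instance of a percentage-change question."""
        old_price = random.randint(20, 80)
        new_price = old_price + random.randint(5, 20)
        question = (f"A price rises from {old_price} to {new_price}. "
                    "What is the percentage increase (to 2 decimal places)?")
        answer = round((new_price - old_price) / old_price * 100, 2)
        return question, answer

    # Each student receives a different variant of the same question.
    question, answer = make_variant()
    print(question, "->", answer)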

Lastly, I created a varied and balanced exam by using different question types. Even within the MCQ assessment mode, there are many ways to ask questions beyond the standard multiple-choice question consisting of a prompt and a set of answer options. I make use of a range of question types available in Canvas quizzes: 

  • ‘Formula questions’ are numerical questions with randomised numbers. 
  • ‘Fill in multiple blanks’ can be used to let students construct a step-by-step solution to a problem (sketched below). 
  • ‘Multiple answers’ are multiple-choice questions where there might be more than one correct answer. 
  • ‘Multiple dropdowns’ are essentially fill-in-the-blank questions with predetermined answer options. 
  • ‘Matching’ questions ask students to match a series of statements to a fixed set of answer options. 

Combining various question types makes it possible to test different aspects of learning and get students to demonstrate depth of knowledge in a computer-markable format. 
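
For instance, a ‘fill in multiple blanks’ question can award credit for each correct intermediate step. The sketch below (an invented example in plain Python, not the Canvas marking engine) shows the idea behind the step-by-step format mentioned above:

    # Each blank corresponds to one step of the worked solution, so partial
    # understanding earns partial credit.
    question = ("Solve 2x + 6 = 14.\n"
                "Step 1: subtract 6 from both sides: 2x = [blank1]\n"
                "Step 2: divide both sides by 2: x = [blank2]")
    expected = {"blank1": "8", "blank2": "4"}

    def mark(responses):
        # One mark per blank that matches its expected value.
        return sum(responses.get(k) == v for k, v in expected.items())

    print(mark({"blank1": "8", "blank2": "4"}))  # prints 2 (out of 2)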

Challenges 

Designing robust quizzes and AI-proofing questions requires time and creativity. It can be time-consuming to plan new approaches and to test how well ChatGPT responds to each type of question. However, the Canvas instructor guide proved very useful, as it introduced me to the different possibilities within Canvas quizzes.

Impact and student feedback 

I always provide practice quizzes containing the same type of questions as the official assessments, so that students know what to expect and can prepare in advance. Students are happy with this combination of summative and formative assessments: on this year’s mid-module evaluation, 21 students (out of 62 respondents) mentioned the practice quizzes and/or the official online tests as a positive point in their written comments. 

In terms of results, the distribution of marks for the assessments this year was very much in line with previous years. This suggests that the marks were not inflated by the use of AI tools or other misconduct. 

Future plans 

While my approach makes the quizzes more robust for the moment, AI tools are rapidly evolving. I would expect AI to get better at dealing with images (e.g. graphs) and mathematical formulas. This will be an additional challenge for the future. 

Top 3 Tips 

  1. Think creatively: can you ask your quiz questions in a different way? Consider making use of the various question types available in Canvas quizzes. 
  2. Include images (e.g. graphs) to make it harder to answer questions using AI (*). Use Canvas’s built-in maths editor for any formulas so that they cannot directly be copied.  
  3. Use randomisation to deter collusion. You can randomise questions as a whole by using question groups, or set up numerical questions with random numbers using formula questions. 

(*) NB: Don’t forget that some students may have visual impairments and require screen readers. You could try to add alt text to the image in such a way that the quiz question can still be answered, but without giving away too much information. 


Student engagement is key for inclusive curriculum (re)design

Katerina Psarikidou and Alejandro Luna

Science Policy Research Unit, University of Sussex Business School 

Katerina Psarikidou is Lecturer in Sustainable Development at the Science Policy Research Unit. She is the University of Sussex Business School PRME Champion, part of a UN initiative for Responsible Management Education. She is a Fellow of the Higher Education Academy and of the Sussex Sustainability Research Programme. She is principal and co-investigator on three scholarship and research grants engaging in processes of co-creation with student and citizen communities. Her pedagogic scholarship is published in the international journals ‘Postdigital Science and Education’ and ‘International Med Education’.

Alejandro Luna is a Lecturer in Sustainability, Innovation and Energy Policy in the Science Policy Research Unit (SPRU). He is convenor of the MSc course Science and Technology Policy and one of the Business School UN PRME Champion Institution Co-Leads. He is an investigator on a USBS-funded scholarship grant and sits on the Resilience Frontiers Technology Advisory Group (part of the UNFCCC secretariat). He is a Fellow of the Higher Education Academy.

Introduction

As part of our university lives, we have all encountered ideas of ‘inclusive curriculum design’ as a key objective and imperative of teaching and learning in Higher Education Institutions (HEIs). However, what does it mean? And what is the role of students in it?

As early as 2011, Advance HE published its commissioned report ‘Inclusive Curriculum Design in Higher Education’, providing definitions and guidance both generic and specific to subject or disciplinary areas (Morgan and Houghton, 2011). According to this report, “an inclusive curriculum design approach is one that takes into account students’ educational, cultural and social background and experience as well as the presence of any physical or sensory impairment and their mental well-being” (Morgan and Houghton, 2011, p.5). As also highlighted in the report, “it is an imperative on institutions that they design their curriculum in such a way as to promote success among all students” (ibid: 5).

From the above, we understand that students are central to the mission of (re)designing university curricula in more inclusive ways. What remains to be clarified, however, is the role that students can play in this process.

Inclusive curricula and co-creation

It is often the case that inclusive curricula are designed by university educators themselves, without engaging students in the formulation of ideas and proposals for more inclusive teaching, learning and assessment practices. As also discussed in the relevant scholarship literature, student engagement can be key for student learning – not only in terms of securing the effectiveness of the (re)designed curricula, but also in terms of enhancing students’ belonging to the university, and therefore their overall levels of retention and academic performance (Zhao and Kuh, 2004; O’Keeffe, 2013).

This was also our experience and understanding from discussions we had with our students in late 2023, as part of our USBS-funded scholarship project titled ‘Co-creating a USBS staff-student community on innovative teaching and learning’. 

In December 2023, as part of our scholarship grant, we conducted two co-creation workshops with students from two online distance learning (ODL) modules that each of us convened and had recently delivered. The ODL MSc Sustainable Development course provides fertile ground for (re)designing curricula in more inclusive ways: both in terms of the diversity of the student cohort, whose members come from all parts of the world and from different age groups and ethnic, professional and disciplinary backgrounds, and in terms of the challenges of inclusion that can sometimes emerge in relation to online teaching and learning (MacKenzie et al., 2022).

As also indicated in the title of our project, the aim of our workshops was to engage students in co-designing innovative methods for teaching, learning and assessments that would put students’ learning needs centre stage.  

In both workshops, students highlighted inclusion as both a key objective and a key challenge for (re)designing the curriculum, especially in online environments. As one student put it:

“I also just wanted to pick up on the interesting on the diversity and inclusion…that was what triggered my thinking around how I think it’s a real challenge when you’re doing online learning” (Student 1, Workshop 2, 2023) 

Students as active agents in their own learning

As students explained, for them, ‘inclusion’ referred to a diversity of both teaching and learning methods that can cater to people’s different learning needs, but also to methods that can make students more active agents in their own learning.   

For example, students from both workshops proposed student-led seminars for case-based learning. They proposed the organisation of cross-module seminars that would be led and delivered by students, in which students could present cases related to their own professional, cultural and disciplinary backgrounds and experiences.  

As students explained, these seminars would also be important for enabling students to advance their knowledge by “putting theory into practice” (Student 1, Workshop 1, 2023). However, they also stressed the importance of these seminars in creating new, more inclusive spaces of learning: both in terms of students being actively involved in running the seminars and in terms of fostering social interaction and community belonging, amongst themselves and to the university.

However, what students also underlined as “most useful” was “conversations like these” (Student 2, Workshop 1, 2023), facilitated by our workshops: processes in which students feel included in discussions about (re)designing the curriculum in more inclusive ways.

Thus, as evidenced in our workshops, questions of inclusion were key in students’ discussions as well as in their proposals going forward. Students talked about inclusion as a key challenge, especially in the context of online teaching and learning, but also as a key solution for enhancing their own experience and learning. As also evidenced above, for them, (re)designing the curriculum in ways that support the development of student-centred as well as student-led learning communities is key to supporting their learning in more inclusive ways. What is also important to students, however, and what they would like to see more of, is their engagement in processes of (re)designing the curriculum, in the way our workshops have modelled!

Conclusion

The discussion above raises some broader questions about inclusive curriculum design. From our co-creation workshops, we learned that inclusive curriculum design is not just about methods of teaching in class and how to make them more inclusive. A lot of learning also takes place outside the class, and it is for us, as educators, to support student learning by facilitating the development of those student-led communities of learning. Finally, and very importantly, it is also very much about making the process of curriculum design itself more inclusive, in order to design methods of teaching and learning that are truly inclusive. Here, student engagement is key. And, as students commented, our workshops can be an example of how this can be done!

References: 

  • MacKenzie, A., Bacalja, A., Annamali, D., Panaretou, A., Girme, P., Cutajar, M., Abegglen, S., Evens, M., Neuhaus, F., Wilson, K., Psarikidou, K., Koole, M., Hrastinski, S., Sturm, S., Adachi, C. and others (2022) ‘Dissolving the dichotomies between online and campus-based teaching: a collective response to the manifesto for teaching online’, Postdigital Science and Education, 4(2): 271-329.  
  • Morgan, H. and Houghton, A-M. (2011) Inclusive curriculum design in higher education: considerations for effective practices across and within subject areas, Advance HE, available online at https://s3.eu-west-2.amazonaws.com/assets.creode.advancehe-document-manager/documents/hea/private/resources/introduction_and_overview_1568037036.pdf . Retrieved on 8 February 2024.  
  • O’Keeffe, P. (2013) ‘A Sense of Belonging: Improving Student Retention’, College Student Journal, 47(4): 605-613.  
  • Zhao, C.M. and Kuh, G.D. (2004) ‘Adding Value: Learning Communities and Student Engagement’, Research in Higher Education, 45: 115-138. 

Using Vevox in the classroom

Dr Seun Osituyo

In this case study, Seun Osituyo, Deputy Director of Student Experience and Director of MSc Accounting and Finance at USBS, explains how she uses Vevox to increase student engagement. 

What I did 

One of the tools I have used to promote active participation in my teaching is Vevox. I set multiple-choice questions on Vevox, based on a previous lecture or seminar. Students are then provided with the meeting ID and asked to attempt the questions on Vevox at the beginning of the lecture or seminar.

Why I did it 

I am always interested in improving student engagement during my lectures and seminars. Many students have smartphones, and I thought it would be useful to engage them using devices they are already familiar with. Some students also bring other devices, such as laptops and tablets, to the classroom. Students can access the Vevox site with these devices and do not need to create an account to use the tool. One benefit of Vevox is that it allows anonymity: students’ responses can be completely anonymous (unless they choose to include their names in their responses). Vevox is incredibly helpful when delivering a module for the first time, as it allows me to check that students have understood the concept(s) I am teaching. I also use Vevox to collect informal feedback on my teaching delivery. This feedback let me know, for example, that I was speaking too fast during the first lecture, which helped me improve in later lectures with the same cohort. Many students are motivated when they are listened to, and using Vevox helped me to build a comfortable environment for students to achieve the learning outcomes.

Challenges 

I did not encounter any challenges when using Vevox. The process was really simple: I only had to create the quiz and share the session ID with students so they could access it during the lecture.

Impact and student feedback 

Students who participated in the Vevox exercises found them useful. One student mentioned Vevox in their comment on the best aspects of the module, explaining that they enjoyed being given formative assessments regularly. I used Vevox every week on that module. This student wrote: “The way that the lectures are taught like workshops and how there is a test in the seminar. This motivates you to go away and learn the lecture content ready for the test in the seminar.”  

Future plans 

I have continued using Vevox to check students’ understanding of concepts. In my most recent module, I used Vevox every week during the synchronous lecture to assess students’ understanding of the pre-lecture videos. This helped me identify the areas in the pre-lecture videos that needed to be revisited. In terms of teaching delivery, it helps me to know what works for the students. I will continue to use this tool and others, where it is practical to do so.

Top 3 tips 

  • Give students instructions about how to access and complete the quiz in advance. Ideally, add the Session ID to your lecture PPT slides and publish them before the lecture. 
  • Allow students a few minutes when they access Vevox for the first time. 
  • This should be obvious, but explaining the purpose of taking the quiz often improves engagement. 

About this blog

Learning Matters provides a space for multiple and diverse forms of writing about teaching and learning at Sussex. We welcome contributions from staff as well as external collaborators. All submissions are assigned to a reviewer who will get in touch to discuss next steps.