Assessment in a world of Generative AI: What might we lose?

Dr Verona Ní Drisceoil, Reader in Legal Education, Sussex Law School

Introduction

For the most part, assessment in higher education is viewed negatively rather than positively. It is something to be endured, worked through, marked and managed. Assessment causes significant anxiety and stress for students and staff alike. Amid a cost-of-living crisis and increased mental health challenges, students are under significant pressure to achieve a ‘good degree’ in order to progress to further study or work. For teachers and professional services staff, the pressure to mark and process hundreds of submissions in short timeframes makes assessment an incredibly challenging period. In addition, most higher education institutions in the United Kingdom score poorly on assessment and feedback in the National Student Survey (NSS), making assessment a major headache for university management teams (see also Harkin et al., 2022). And on top of all of that, we now have generative AI to contend with and the challenges it brings for assessment.

In this short article, I want to offer some reflections, and provocations, on the framing of assessment in higher education. Specifically, in thinking about what might be lost in the new reality of generative AI, I propose that we should think about, and indeed frame, assessment from the perspective of empowerment. In this respect, I advocate for assessment as, and for, empowerment. In an ideal world our assessments, notwithstanding the pressures mentioned above, should empower students, or at least have the potential to empower. They should encourage our students to be authentic, to show agency, to grow in confidence, and to develop a range of transferable skills including, in particular, evaluative judgement (see case study assessment example). In this regard, assessment as empowerment as a conceptual frame builds on “assessment for learning” (Sambell, McDowell & Montgomery, 2013) and “assessment for social justice” (McArthur, 2018). Could assessment designed for empowerment, I ask, make life better for everyone: for teachers, for students and even our NSS scores?

The article will begin by reflecting on ‘assessment’ and ‘empowerment’ in a world with generative AI before offering some initial thoughts on what assessment as, and for, empowerment might look like. I will conclude with some takeaway questions that might help us to think about assessment more meaningfully as we navigate the impact of generative AI on education and on life more generally. In contrast to the dominant narratives circulating on generative AI (on productivity, on efficiency, on saving time and making money), I question what might be lost in all of this in terms of humanity (see further Chamorro-Premuzic, 2023) and strongly encourage a deeper questioning and critique of generative AI and what it means for future generations. This does not mean I am a Luddite and afraid to engage with AI (yes, I know AI is here to stay!) but rather that I want to maintain a critical standpoint. I want to focus on value – on what matters in life and in education. How is AI changing our lives, values, and fundamental ways of being? Drawing on the work of Bearman et al. (2024:1), I too argue that in an educational context we have a collective responsibility to ensure that humans (our students) do not relinquish their roles as arbiters of quality and truth.

Reflecting on the value of assessment

Without question, the ever-expanding presence of generative AI has challenged us as teachers and educators. It has made us uncomfortable. We no longer, to quote Dave Cormier, have the same power in assessment. This is unsettling. The presence of generative AI forces us as teachers to reflect on our roles, on how we design teaching and how we design assessment. It forces us, if we take the opportunity, to self-assess and ask: what is the value of assessment? (See further McArthur, 2023; Cormier, 2023.) What do we value as teachers, and what skills do we want our students to achieve through assessment? How can we empower our students through assessment? This questioning, I argue, should not start with how to design an assessment that will beat generative AI. For me, that is not a pedagogically sound starting point. Moreover, it is completely pointless as “the frontier moves so fast” (Dawson, 2024). I would encourage all teachers to stop for a moment and think about what they really want students in their module/discipline to leave university with. This is a great opportunity to reflect on that question and to go back to basics, as it were. Not every module should, I argue, be trying to teach everything to, and with, generative AI in mind. That, for me, is a dangerous path to take. As academics and critical scholars – in universities – let us remain critical. Ask questions about the impact of using these models, about the impact on humanity and the impact on learning through process and doing. Challenge the status quo. Who is driving the AI narrative and why? Why should we care as educators?

Empowerment in Assessment

According to the Oxford English Dictionary, empowerment speaks to agency, autonomy, and confidence. It notes that empowerment is “the fact or action of acquiring more control over one’s life or circumstances.” To empower someone is “to give (a person) the means, ability, or strength to do something”; to enable them. For me, assessment should be about offering a space for growth, to develop skills through process, to feel empowered through doing. This concept of assessment for empowerment builds on the concept of assessment for learning mentioned earlier. For Sambell et al. (2013), assessment for learning is focused on the promotion of effective learning. They note that “what we assess and how we assess indicates what we value in learning – the subject content, skills, qualities and so on” (Sambell et al., 2013:8). Assessment, then, should promote positive and empowering messages about the type of learning we require (Sambell et al., 2013:11). It is focused on process and not just product.

As I write this piece, I am wondering whether one should even mention empowerment in the same sentence as generative AI. Perhaps one should. Some argue that using generative AI is empowering. Many have noted that generative AI helps you get started and builds confidence. Quite literally, these large language models put words on the page – and very quickly at that. It is hard not to be tempted by these tools. We have also heard that many students (36% in a survey of 1,200 students) use generative AI as a personal tutor (Freeman, 2024). One might argue that there is agency and growth here. Perhaps. But is it on a superficial, artificial level? Is it a matter of degree? Does it matter? I think it does and should.

Yes, generative AI may save a great deal of time on a task; it produces text and puts words on the page, which speeds up the process. But then where is the learning by doing? Is the purpose of assessment, and of higher education, now to support students to use a generative large language model with no authenticity and questionable accuracy? Generative AI platforms are not truth machines; they hallucinate. If we adopt a view and approach that positively embraces generative AI (that allows use of generative AI to produce an output), we are valuing the product and not the process. Adopting such an approach does not guarantee that students will develop and enhance the higher-order thinking skills we should value so much in higher education. Chamorro-Premuzic (2023:4) reminds us that generative AI could dramatically diminish our intellectual and social curiosity and discourage us from asking questions. We still need to teach our students key processes and knowledge and equip them with the ability and skills to critique, evaluate and judge outputs. As Bearman et al. (2024:1) note, “university graduates should be able to effectively deploy the disciplinary knowledge gained within their degrees to distinguish trustworthy insights from those that are ‘hallucinatory’ and incorrect”.

But generative AI levels the playing field…

I have some sympathy for the argument that generative AI helps “level the playing field” in higher education; specifically, that it helps students whose first language is not English to access and understand teaching materials and assessments. However, I remain unconvinced that this is a sufficient rationale to positively embrace generative AI tools in teaching and assessment without deeper critique and questioning of what may be lost pedagogically. If anything, this framing highlights how poor we are at supporting students whose first language is not English in (UK) universities. It could be argued that embracing generative AI to the extent being advocated allows universities to shirk certain responsibilities. Does a positive embracing of generative AI allow us to gloss over areas in which we have failed as a sector, e.g. supporting language proficiency in a meaningful way?

Assessment as Empowerment in a world with generative AI – some thoughts

So, what does assessment as empowerment look like in a world with generative AI? Is it even possible? I hope so. The following offers a few thoughts on where we might focus the discussion in order to achieve assessments that empower our students to continue to be arbiters of truth whilst developing a range of transferable and empowering skills:

  1. Value + programme level assessment: I suggest we need to start this conversation from the perspective of value – what is the value of assessment? – and then have joint, but critical, conversations at programme level. I am concerned about siloed responses to generative AI without wider department and school level conversations. For some departments and schools, it might be essential to embrace generative AI tools across teaching and in assessment (e.g. in computer science), but for others (e.g. law, my discipline), it is, I argue, essential that we now have programme level conversations about the value of assessment and what skills, literacies and practices we want our students to take away from their degree experience. Should our students decide to go into legal practice, what skills would society wish our future lawyers to have? They will still need to be able to write well, formulate an argument, show attention to detail, detect accuracy and truth in text sources, orally convince, persuade, and advocate. An argument you have not formulated yourself will never be as persuasive as one that is self-generated. Generative AI, by removing the process of formulating an argument, robs one of the opportunity to truly develop persuasive advocacy skills. I do not believe that wider society would wish legal advice to be generated by AI. And even if it were, people would want someone to have the skills and evaluative judgement to know what is correct and not simply a hallucination. There is, to quote Bearman et al. (2024), an urgent need, in this new reality, to develop students’ evaluative judgement. Students need to be able to judge the quality of their own work and that of others.
  2. Oracy/oral-based assessments/mini-vivas: One type of assessment I would like to see much more engagement with is oral-based assessment. Perhaps now is the time to embrace that. Voice work, and the ability to present confidently, is such an important skill and one that I think we should place a much higher value on. We neglect this form of assessment in higher education in favour of written-based assessments. This, I argue, is also problematic in terms of helping all students to build social capital. Within my own law module, my move from a traditional ‘write a case-note law essay’ to an oral case briefing assignment (with a ‘you are a trainee solicitor’ positioning) was based on a desire to ensure that all students in my module had the opportunity to develop voice work skills within the module, and not just in extra-curricular activities as can often be the case in law schools. I argue that keeping these activities ‘extra’ results in only those in more privileged positions being able to take part. Students who work, commute, or care for others are often excluded from these activities. That needs to change. Now might be the chance to do so. For those who argue that this is not possible at scale, I disagree. I have been able to implement the development of voice work skills in a core module of 350+ students. Yes, it is a challenge, but with good module design and a good teaching team it can be achieved.
  3. In-person open-book exams: Whilst I appreciate there are valid arguments about the problematic nature of in-person exams in terms of stress and anxiety, and that they do not reflect real work practices, do in-person open-book exams offer a compromise and a way forward? Academic integrity scholars (see Phillip Dawson, Cath Ellis) certainly argue that in-person exams should be in the mix of programme level assessment. I wonder, too, whether part of the resistance to going back to in-person assessment is more about cost than an absolute commitment to accessibility and inclusion. Reasonable adjustments can still apply to in-person assessments. They did before the pandemic and can again.
  4. Use more bespoke rubrics: Another response in the short term might be to adopt more bespoke marking rubrics for assessments. This might mean not adding any weighting for structure, grammar and syntax, but adding a much higher weighting for other elements to reward process and engagement with materials, accuracy (on the law, for example) and sources that are not merely artificial. I have used a bespoke weighted rubric in my module for the last two years and it works very well. Linked to the points made above, there is also a section for evaluative judgement within that rubric. As part of the case briefing assessment, students are asked to provide an evaluative judgement post-presentation, as they would to their ‘supervising solicitor’ in a legal practice setting.

Takeaway questions

  1. What is the value of learning?
  2. What is the value of assessment?
  3. What skills, literacies and practices do you want your students to take away from your module and through your assessment?
  4. How can you design your assessment to empower your students; to help your students to be authentic, to grow and to develop confidence through doing and being?
  5. How can you value process, not just product?
  6. What role can evaluative judgment play in your teaching and assessment design?

Conclusion

For me, it is essential that universities and educators continue to approach the question and impact of generative AI in education from a critical standpoint. Let us, I advocate, remain critical. Ask questions. Challenge. What might be lost? Who is driving the AI narrative, and why? Why should we care as educators? What are we complicit in? As I have stated above, always consider the structures within which these shifts are happening. The dominant narrative and retort of “AI is here to stay so get over it” is not enough of a reason not to ask questions about what might be lost here. It is appreciated that universities are businesses too, and it might seem neglectful, within a business model, not to approach this debate from the perspective of ‘let us not be left behind’, but we have a collective responsibility to be more critical, to question and to challenge the narrative. Given the rate of change in this area, it is incumbent on us, as educators, to ask what might be lost in terms of truth, justice, humanity, and real connection.

As Esther Perel reminds us, the AI we should be most concerned with is artificial intimacy – the lack of real connection and authenticity in how we show up in the world because of the negative impacts of technology. In a world of hyper-connectivity, we are often not connected at all. I can, as I am sure others can, attest to the negative impact technology has had on my life in terms of being present and meaningfully connected. At a time in higher education when mental health issues are at an all-time high, and when confidence, community and belonging are at a low point post-pandemic, is a world with generative AI, I ask, going to have a positive or negative impact on connections, relationships, authenticity, truth, and humanity? We are, as human beings, wired for real connection – and hopefully authenticity as well, with all the flaws and vulnerability that come with that. We are not robots. If we embrace generative AI to the extent that is being encouraged by corporations, influencers and even universities, what might we lose?

Resources

Ajjawi, R., et al. (2023). From authentic assessment to authenticity in assessment: Broadening perspectives. Assessment & Evaluation in Higher Education, 1–12.

Arnold, L. (n.d.). Retrieved from https://lydia-arnold.com/

BBC News, (2023, 27 May) ChatGPT: US lawyer admits using AI for case research. Retrieved from https://www.bbc.co.uk/news/world-us-canada-65735769#

Bearman, M., Tai, J., Dawson, P., Boud, D., & Ajjawi, R. (2024). Developing evaluative judgement for a time of generative artificial intelligence. Assessment & Evaluation in Higher Education, 1–13. Retrieved from https://doi.org/10.1080/02602938.2024.2335321

Chamorro-Premuzic, T., (2023) I, Human: AI, Automation, and the Quest to Reclaim What Makes us Unique. Harvard Business Review Press.

Compton, M. (2024, 16 April) Nuancing the discussions around GenAI in HE. Retrieved from https://mcompton.uk/ 

Cormier, D. (2023) ‘10 Minute Chat on Generative AI’. Retrieved from https://vimeo.com/866563584 [See series with Tim Fawns: https://www.monash.edu/learning-teaching/TeachHQ/Teaching-practices/artificial-intelligence/10-minute-chats-on-generative-ai]

Dawson, P. How to Stop Cheating. Retrieved from https://youtu.be/LNcuAmDP2cQ

Dawson, P. (2021) Defending Assessment Security in a Digital World. Routledge.

Freeman, J. (2024, February 1). Provide or punish? Students’ views on generative AI in higher education (HEPI Policy Note 51).

Harkin et al. (2022). Student experiences of assessment and feedback in the National Student Survey: An analysis of student written responses with pedagogical implications. International Journal of Management and Applied Research, 9(2). Retrieved from https://ijmar.org/v9n2/22-006.pdf

McArthur, J. (2018). Assessment for Social Justice: Perspectives and Practices Within Higher Education. Bloomsbury Publishing Plc.

McArthur, J. (2023). Rethinking authentic assessment: Work, well-being, and society. Higher Education, 85(1), 85.

Tai, J., et al. (2023). Assessment for inclusion: Rethinking contemporary strategies in assessment design. Higher Education Research and Development, 42, 483.
