by Dr Sam Hemsley, Academic Developer
What’s new at Sussex?
In June, Microsoft Copilot was made available to all staff and students. We also completed a trial of Jamworks, an AI notetaking and study aid application. Read Helen Morley’s blog of 6 June for an overview.
Reflections on AI at the University of Sussex Education Festival
We in Educational Enhancement have spent the past few days reflecting on the many brilliant insights into approaches to teaching, assessing and supporting our students here at Sussex.
A key theme that came out of the sessions with an AI focus was the need to support students in developing their evaluative judgement: their ability to make independent judgements about the quality of their own work and the work of others. Verona Ní Drisceoil (Reader in Legal Education) spoke about her approach to explicitly building students’ evaluative judgement skills. Verona drew on Bearman et al.’s 2024 paper, ‘Developing evaluative judgement for a time of generative artificial intelligence’, to consider the intersection with critical AI literacies.
We heard a similar call from recent graduates Aaron Fowler and Max Bayliss, who kicked off day two of the conference, taking a morning away from training University of Sussex Business School academics in how GPT-4 can be, and is being, used by students. Max and Aaron challenged us to recognise that, while some students are automating their responses to assessment tasks using generative AI, some are also using AI to augment their work. When augmenting, AI becomes an extension of learning and a means to speed up knowledge acquisition, leaving more time for higher-order skills like evaluative judgement and critical thinking. Aaron and Max then went on to suggest flipping Bloom’s Taxonomy on its head to embed such higher-level skills earlier in the curriculum.
Another really interesting question which emerged over the conference was whether universities should focus on providing bespoke (and therefore constrained, but ‘managed’) AI study support tools for students, such as Jamworks or Plato, or whether they should go all-in on training students (and staff) to create their own, e.g. using GPT-4 Enterprise. And how long should we wait before making a decision? A question we will return to many times, I’m sure.
More conference highlights will follow in an EE blog post coming soon.
Updates from the sector
Here’s a summary of the things that caught our eye over the last few weeks.
GenAI continues to evolve
Towards the end of June, Anthropic launched Claude 3.5 Sonnet, a new free generative AI tool which (deep breath):
“sets new industry benchmarks for graduate-level reasoning (GPQA), undergraduate-level knowledge (MMLU), and coding proficiency (HumanEval). It shows marked improvement in grasping nuance, humor, and complex instructions and is exceptional at writing high-quality content with a natural, relatable tone.”
This followed hot on the heels of OpenAI’s release of free access to GPT-4 and GPT-4o (no – I don’t really understand the version naming conventions either). In short, we all now have free access to much of the GPT-4 functionality that had previously been behind a paywall.
Proof (if we needed it) that GenAI is hard to spot
At the end of June, various media reported on the Reading University researchers who “fooled” university markers with AI-generated exam papers. You heard it here first, though: the pre-print has been in circulation and on the AI CoP Padlet since October 2023.
Responding in the Guardian, Prof Elizabeth McCrum, Reading’s Pro Vice-Chancellor for Education, reported that Reading was moving away from online exams and developing alternatives that would include applying knowledge in “real-life, often workplace related” settings, stating:
“Some assessments will support students in using AI. Teaching them to use it critically and ethically; developing their AI literacy and equipping them with necessary skills for the modern workplace. Other assessments will be completed without the use of AI.”
However, the ‘how’ implicit in McCrum’s final sentence remains elusive.
Should we rethink cheating, and/or design it out?
Martin Compton’s Heducationalist blog happens to offer some views on this ‘how’ or, at the very least, some robust challenges to the sector and a very stretched Ghostbusters metaphor. Compton argues that:
“Academic integrity as a goal is fine but too often connotes protected knowledge, archaic practices, inflexible standards and a resistance to evolution”, and that AI thus represents a catalyst, rather than a reason, for dramatic change.
Spoiler: he means change towards innovative approaches and assessment for learning – not a return to in-person exams and tighter regulation of more traditional assessment approaches.
We’ll leave you to follow up independently on the Ghostbusters part of his argument.
Teach students to use AI to think with them, not for them
June also saw the publication of a HEPI blog which provided (we thought) a useful framing for curriculum innovation in an AI world, and one which chimes with the view of our graduate speakers at last week’s Education Festival. The post summarises the outcomes of a collaborative round table of four HE leaders who seek to shift the emphasis from detection of AI misuse to recognising the place AI will have in both education and the future working lives of today’s students.
The solutions shared provided more of the ‘how’: approaches to stress-testing existing assessments at the University of Greenwich and Imperial College London, which are being used to evaluate risk while also driving educator AI literacy. The University of Glasgow has gone further, providing staff with a reflective survey and dashboard to map their practice against the institutional assessment framework:
“We are trying to set a context where we think very carefully about assessment design from the outset and ask, are we over-assessing? Are we making sure that the assessment we design for students is connected to their learning and connected to skills? Are those skills really surfaced through the work that they’re doing?” (Professor Moira Fischbacher-Smith, Vice-Principal (Learning and Teaching), University of Glasgow)
So, in summary, the consensus in the sector remains one of critical engagement with AI, recognition that there is no silver bullet for assessment assurance and, in general, a reluctance simply to revert to in-person exams.
Use AI to make learning fun and impactful!
Lest we forget – generative AI is also a rather wonderful tool that we can all use as educators, as we found at this year’s Playful Learning Conference.
Using a game of ‘guess the prompt’, Daisy Abbott got delegates thinking about how generative AI works and where it can produce huge inaccuracies. This led to a fantastic discussion of how we communicate the problems of AI to students and academics. See, for example, the image below. Can you guess what prompt created this image, and what’s wrong with it?
Upcoming events
The most recent of the University of Kent’s ‘Digitally Enhanced Education’ webinars took place on 17 July, on the theme ‘How Best to Engage Our Students in 2024/2025’. It included three 15-minute talks with an AI focus. Recordings of the presentations should be available soon on the series’ YouTube channel.
You can sign up now for the Intro to Generative AI in teaching workshops running in September.
As ever, if you need support for your teaching at Sussex, get in touch with your Learning Technologist or Academic Developer.