Episode 13: Generative AI and Higher Education

Our guests: Dr Paul Robert Gilbert and Dr Niel De Beaudrap, with producer Simon Overton and hosts Dr Heather Taylor and Prof Wendy Garnham.

The Learning Matters Podcast captures insights into, experiences of, and conversations around education at the University of Sussex. The podcast is hosted by Prof Wendy Garnham and Dr Heather Taylor. It is recorded monthly, and each month is centred around a particular theme. The theme of our thirteenth episode is ‘generative AI and higher education’ and we hear from Dr Paul Robert Gilbert (Reader in Development, Justice and Inequality) and Dr Niel De Beaudrap (Assistant Professor in Theoretical Computer Science). 

Recording

Listen to episode 13 on Spotify

Transcript

Wendy Garnham: Welcome to the Learning Matters podcast from the University of Sussex, where we capture insights, experiences, and conversations around education at our institution and beyond. Our theme for this episode is Generative AI and Higher Education. Our guests are Dr Paul Gilbert, Reader in Development, Justice, and Inequality in Anthropology and Dr Niel De Beaudrap, Assistant Professor in Theoretical Computer Science in Informatics. Our names are Wendy Garnham and Heather Taylor and we are your presenters today. Welcome everyone. 

All:  Hello. 

Heather Taylor: Paul, could you start by telling us a little about what you teach and how you see generative AI changing the way students approach your teaching? 

Paul Gilbert: So, I’m based in Anthropology, but I actually primarily teach in International Development. And my teaching is divided into three areas, which sort of reflects my research interests. I teach a second-year module on Critical Approaches to Development Economics. I teach a third-year module called Education, Justice, and Liberation, and I’m the co-convener with Will Locke of the new BA Climate Justice, Sustainability and Development. I also do some first-year teaching on Climate Justice. So, honestly, in terms of how students are approaching the teaching, apart from a couple of cases where I suspect there may have been AI involvement in some assignments, I haven’t noticed a huge difference.

What is new, and maybe why I haven’t noticed that difference, is that, like a lot of my colleagues in International Development, we started engaging head-on with AI and integrating it into what we teach about and how we talk to our students. For example, my second-year development economics course is deliberately structured around global South perspectives on development economics and thinking that isn’t dominant. Economics is odd among the Social Sciences. There’s a lot of research on this. Sociologists like Marion Fourcade have looked at the relative closure of economics, the hierarchy of citations, its isolation from other disciplines, and the concentration of Euro-American scholarship. Professor Andy McKay in the Business School has also done work on the underrepresentation of African scholars in studies of African economic development, and on the tendency for scholars from the global South to be rejected disproportionately from global North journals. And the reason that matters for thinking about teaching and AI is because AI, so large language models. Right? ChatGPT and Claude and everything, they’re basically fancy predictive text. Right? Emily Bender calls them synthetic text extruders. Right? You put some goop in and it sprays something out that sometimes looks like sensible language.

But it does that based on its training corpus, and its training corpus is scraped from somewhere on the Internet. And it also has various likelihood functions within the large language model that make sure the most probable next word in the sentence comes, right, so that it seems sensible. And what that does is reproduce the most probable answer to an economic question, which is the most dominant one, which happens to be not the only one, but one from a very, very narrow school of thought that has come to dominate economics and popular economics and so on. And so all the kind of, minoritised perspectives, the ones that don’t make it into these, like, extremely hierarchically structured top tier journals, the ones that aren’t produced by Euro American scholars, you’re not going to get AI answering questions about them. And if you use it to do literature searches, it’s not going to tell you about them. 
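[Editor’s note: to make the “fancy predictive text” point concrete, here is a minimal sketch of greedy next-word selection over an invented toy corpus. Real LLMs use neural networks over subword tokens rather than bigram counts, but the selection principle described here is the same: whatever continuation is most frequent in the training data is what comes out.]

```python
from collections import Counter, defaultdict

# Toy bigram "language model" built from an invented corpus in which one
# framing dominates and a minority framing appears only once.
corpus = (
    "markets drive growth . markets drive growth . "
    "markets drive growth . commons sustain livelihoods ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Greedy decoding: return the single most frequent continuation."""
    return follows[word].most_common(1)[0][0]

# The dominant phrasing wins every time; the minority view ("commons
# sustain livelihoods") is statistically invisible to greedy decoding.
print(most_likely_next("markets"))  # -> "drive"
print(most_likely_next("drive"))    # -> "growth"
```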

So, that module is built around those perspectives. And so we kind of integrate that into the teaching as a way to highlight to students that large language models function in a way that effectively perpetuates epistemicide. Right? By making sure that it reproduces the most likely set of ideas, and the most likely set is always the dominant set, right, from whoever is most networked and most online, perspectives that have already been disproportionately not listened to get less and less visible. So structuring the course around precisely those not-so-visible but really important and really consequential approaches to economic development forces students not only to not use it to do their literature searches, but to think about the politics of AI knowledge production.

Heather Taylor (04:57): That’s fantastic. I mean, we both teach in psychology, and we’ve got a problem in psychology where it’s all very Western dominated. You know, we largely use the DSM-5, which is American, as like the manual for diagnosing, you know, psychiatric disorders and so on. I teach clinical mainly, but, you know, generally, there’s a Western dominance. And that would actually be a really interesting way to get them to think about where these other perspectives are, and, like you’re saying, there’s sort of a moral question of using ChatGPT, because it’s regurgitating stuff that’s overshadowing, you know, other important voices that aren’t being heard.

Paul Gilbert: Yeah. And it’s also an opportunity to get students to think more about data infrastructures. Right? Because whether you use ChatGPT or Claude or Llama or whatever or not, I think we all could do with understanding a bit better how search works, how data infrastructures are put together, where things are findable or not. Right?

So centring that means there’s a chance to reflect on that. And it’s not, you know, I’m not a big fan of AI for various reasons. We might go into it later. But rather than just standing up and going, this is awful, you can say, look, here are all these perspectives that are really important. We’re going to get to grips with them in this course, and this is not what you will get if you ask ChatGPT for an answer. And it’s also part of a way of reminding students that they are smarter than ChatGPT, right? And I think that’s really important because there is some research, I think from some people at Monash University in Australia and elsewhere, looking at the kind of demoralising effects on people, who think, what’s the point of even trying? Right?

I can do this and it can spew out something as good as what I can do. And it can’t. Right? And it can’t if you structure your questions right and if you get your students to think about its limitations. I don’t want to talk too much, but just one thing that’s worth saying, and Simon knows about this as well because he did some videos for us, but one of the ways that I try and get this across to the students is that we have this big special collection in the library, the British Library for Development Studies legacy collection, and it’s full of printed material produced by scholars from Africa, Asia, Latin America in the sixties, seventies, eighties. A lot of it doesn’t exist in digital form. Right? It hasn’t been published as an eBook. You might find a scan somewhere. Right? Some of that knowledge doesn’t even turn up in the Sussex Library Search. Right? But it’s there in the stacks or in the folders, and, you know, there’s a huge amount of insight and history captured in there that the Internet doesn’t know about. Right? Which means ChatGPT will never know about it. And that reminds us that there are limitations even to Google searches and Google Scholar and everything, but the problems of LLMs are that on steroids. Right? So it’s a chance to show people why offline materials and stuff that isn’t widely disseminated or openly available online aren’t to be discounted, and there’s a lot of valuable knowledge in there.
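[Editor’s note: Paul’s point about the legacy collection can be restated in information-retrieval terms: a search system can only rank what is in its index, so an undigitised pamphlet has no entry to be found, however well the query is phrased. A minimal sketch with invented document titles:]

```python
from collections import defaultdict

# Invented titles standing in for digitised holdings. An undigitised
# item from the stacks never enters this dictionary, so no query,
# however well phrased, can ever surface it.
digitised = {
    "doc1": "growth models and structural adjustment",
    "doc2": "markets institutions and development policy",
}

# Build an inverted index: word -> set of documents containing it.
index = defaultdict(set)
for doc_id, text in digitised.items():
    for word in text.split():
        index[word].add(doc_id)

def search(query: str) -> set:
    """Return documents matching every query word (boolean AND)."""
    results = None
    for word in query.split():
        docs = index.get(word, set())
        results = docs if results is None else results & docs
    return results or set()

print(search("development policy"))  # {'doc2'}
print(search("dependency theory"))   # set(): not indexed, so not findable
```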

Heather Taylor (08:07): So same question to you then, Niel. Could you start by telling us a bit about what you teach and how, sorry, how you see generative AI changing the way students approach your teaching? 

Niel De Beaudrap: Alright. Well, I teach a couple of modules in informatics. And informatics, for those of you who don’t know this term, is another term for computer science. I end up teaching the introductory mathematics module for computer science, where we try to introduce all the basic mathematical concepts, that’s the name of the module, Mathematical Concepts, that they might need throughout their career. And we’re not going to cover absolutely everything when we do that, but we try to cover a lot of ground with a lot of different ideas and how they connect to one another. And I also teach a master’s module in my own research specialty, which is quantum computation. And while these two things might seem a little bit different from one another, quantum computation, setting aside any excitement that might come with it, is something which you can only really come to grips with if you have a handle on a large collection of mathematical concepts. So, similarly to Paul, I don’t really know precisely how it would affect how they engage with either of these modules. I see some possible signs in the assessments for my modules: the strange inconsistency with which the students not only answer questions, but are or aren’t able to answer a question well, even across very similar questions, sometimes has me scratching my head, but I don’t try to infer anything from that.

But there are sometimes signs. You know, basically, when I wonder whether or not they have used a large language model in order to generate answers, it’s a question of how they are trying to engage with the subject in general, and whether they’re relying on other tools to try to learn about the subject matter. It’s very good if students try to use other materials, other sources from which to learn about a subject. But there’s the question of, you know, how well they can judge the materials that they choose to learn from.

I try, as all teachers do, all lecturers do, to curate a particular perspective, a particular approach from which they can learn about a subject. And if they use alternative sources, this is good. You know, a very good student can do that, or somebody who might happen to find my explanation a bit puzzling. It’s much better that they look for other solutions, other approaches to learn, than just struggle along with my strange way of looking at things. But, ultimately, it does require that they be able to critically evaluate what it is that they’ve been shown. You know, if they do use AI, if they do use a large language model and its explanations, first of all, they’re going to come across basically the popular presentation, which not only might be biased, but for a complicated subject might simply be wrong. It might lean very heavily on very, very reductive, very simplified presentations that you might see, for instance, in a popular science magazine, which is the bane of quite a few of my colleagues and me in particular. These things get collapsed, they get flattened down into something that sounds like a plausible string of words, a plausible thing that somebody can say, but that actually has no explanatory power. Not only is it not the full answer, you can’t use it in order to understand anything, and whether or not the students can recognise something like that is something that I worry about for Mathematical Concepts, the more elementary module that I teach.

So it’s possible that they might use a large language model to try to generate examples of something or another, but there’s the question of how reliably the large language model is generating things. Like, I actually haven’t kept track of just how good language models are at doing arithmetic, but if they can’t reliably do any mathematics, then how are the students going to learn anything from the large language model, let alone anything which has got more complicated structure, from which they’re trying to learn this slightly nuanced thing, even at the most elementary levels? The only way that you really get to grips with these techniques is by encountering them yourself. And, you know, a large language model maybe is going to be a little bit more successful at generating elementary examples of things, because there are certainly mathematics textbooks aplenty, many examples from which you can draw some sort of corpus of how you explain a basic concept. But if they’re being chopped and changed, how do you know that you’re going to get some sort of a consistent answer? This is the thing that I’m concerned with. I don’t know precisely how often it comes up for my students, but I do hear them casually mentioning that they will look up something by asking ChatGPT. It makes me wonder what the quality of information that they’re getting out of it is.

Wendy Garnham (13:11): I suppose that’s similar to the coding in R that we see with psychology students: they will often resort to using ChatGPT, and quite often it gets it wrong.

Heather Taylor: And then they get flagged. The research methods team are very good at identifying when AI did it, basically. I’m assuming because it just does things a weird way. You know? Yeah.

Paul Gilbert: LLMs are astonishingly bad at arithmetic. Like, almost amusingly bad. I think they can kind of cope up to, like, two or three figures, and then they start to fail, because, again, it’s predictive text. It can’t do maths. And I think one of the really important things about bringing this into the classroom is, you know, part of what we mentioned earlier: students might lose confidence or lose motivation because they think, oh, ChatGPT can do it, I’ll just look it up on there. But there’s a lot of things that it’s super dumb at. Right? And I don’t think we should shy away from saying it’s really dumb at a lot of things, and we shouldn’t rely on it, and we should think about why that is. And it’s because something that is good at predicting the likely next word, based on the specific set of words that it was trained on, can’t do a novel mathematical problem, and there’s no reason why you would think it should. And there’s so much hype that equates large language models with human cognition that people seem willing to accept it can do a whole bunch of things that it can’t. Right? Even down to really trivial sorts of slips, like assuming it’s a search engine. Right? But it’s stuck in time. It can only answer things in relation to its training data. It’s not something that can actually search current affairs. Right? And it’s kind of surprising how some students and some colleagues aren’t fully aware of that. I think the super basic minimal starting point is that people need to understand the data infrastructures that they’re messing with and what they can actually do and not do.
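[Editor’s note: Paul’s claim about multi-digit arithmetic is straightforward to test empirically. Below is a minimal sketch of such a test; `ask_llm` is a hypothetical stand-in, to be wired up to whichever model or interface you have access to, since no particular vendor’s client is assumed here.]

```python
import random

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in: send the prompt to whatever model you are
    testing and return its raw text reply."""
    raise NotImplementedError("wire this up to your chosen model")

def arithmetic_accuracy(digits: int, trials: int = 50) -> float:
    """Ask `trials` random multiplications of two `digits`-digit numbers
    and report the fraction answered exactly right."""
    lo, hi = 10 ** (digits - 1), 10 ** digits - 1
    correct = 0
    for _ in range(trials):
        a, b = random.randint(lo, hi), random.randint(lo, hi)
        reply = ask_llm(f"What is {a} * {b}? Reply with the number only.")
        try:
            correct += int(reply.strip().replace(",", "")) == a * b
        except ValueError:
            pass  # a non-numeric reply counts as wrong
    return correct / trials

# Published evaluations typically find accuracy is high for two- or
# three-digit operands and collapses as the digit count grows: the model
# is pattern-matching text, not executing a multiplication algorithm.
```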

Heather Taylor (15:18): You know, it’s really agreeable as well. So it depends on how students phrase questions, or if they phrase it almost as, is this right? Because, and this shows my boredom, I asked it to tell me what the date was. Let’s say, for example, it was June 19, and it told me, and I said, no, it’s not, it’s June 20. And it said, oh, I’m so sorry. You’re right, it’s June 20. I said, no. It’s not. It’s June 19. Why are you lying to me? And I asked it, why is it so agreeable? And it was like, this is not my intention. You know? This is a thing. You can feed it nonsense, and then it will go, yes, that nonsense is perfect. Thank you. You know?

Paul Gilbert: Yeah. Sorry, to go back to Emily Bender, who I mentioned earlier, the linguist who writes a lot about large language models. Something she says that I think is really important is: they don’t create meaning. If you impute meaning from what they create, that’s on you. Right? These are synthetic text extruders that produce statistically likely strings of words.

Heather Taylor: So if I’m saying something, you go, no, that’s probable then.  

Niel De Beaudrap: It doesn’t sound obviously wrong.  

Paul Gilbert: But if you take that meaning as meaning that it can think, that’s on you. Right? It’s just a thing that responds to prompts and spews out words and it looks like it can think but it can’t. 

Niel De Beaudrap: That’s been going on for ages as well, from the earliest chat bots back in the 1960s, like ELIZA, where a lot of people were convinced right from the very beginning that they were talking to somebody who was real, or to something that was actually thinking. This might have been partly because of the mystique of computers in general. You know, the mystique of computers changes, but people still remain impressed by computers in a way which is slightly unfortunate, I think. Like, they’re very useful devices. Speaking as a computer scientist, I mean, I enjoy thinking about what a computer can help me to do, but it’s good to have an idea of what is realistic to expect from them. And people have often been looking for something with which they can talk, even when it was just responding according to a formula, something that in the 1980s everybody could have on their personal computer; this is not something which is very deep.

People are constantly looking to relate back to it as a person. So you always have that sort of risk. But, yeah, if it’s not actually accessing any information, if it’s not using information in any particular way that could actually produce novelty, and for which you can have confidence that the novelty might mean something because it stands in some correspondence with a model informed by the actual world around it, you always have that risk of people imputing upon it a depth, a meaning, and a usefulness that is far beyond what it actually has.

Wendy Garnham (18:10): I think we’ve touched on this a little bit, but Paul, do you think generative AI is affecting how students value subject expertise? And if so, in what way and what impact does it have? 

Paul Gilbert: It’s a really good question and it’s quite a hard one to answer. I think, you know, you can imagine a risk where people think, oh, what’s the point in having a specialist because I can just ask – it can tell me everything. Right? But we know it can’t, and I think a lot of students are pretty switched on to that. And, again, I think this is why it’s important to embed some of that, like, critical data literacy and critical AI literacy into the classroom. Just to pick up on something Niel said about whether or not these large language models can produce novelty, produce new knowledge that is meaningful, I think it’s also worth thinking a bit more deeply about what we mean by subject expertise, which isn’t just having access to loads of references and being able to regurgitate stuff. Right? Leaving aside the fact that, large language models often get references wrong and make them up, right? Let’s just pretend they can do that. 

That’s still not what subject expertise is. Right? A lot of it is about developing certain styles of thinking, certain critical capacities, abilities to see connections, right, in the social sciences. In one way or another, a lot of disciplines talk about the sociological imagination or the ethnographic imagination or the geographical imagination. And it’s about a certain way of thinking and making connections. And having a kind of imaginative capacity that comes along with subject expertise, I think, is really important.

There’s a bunch of work that’s been done by some people in Brisbane, in Australia, where they have queried a whole bunch of different chatbots, and different generations of the same chatbot and so on, to ask about ecological restoration and climate change. And aside from the stuff that by now, I think, hopefully, most of us know about these large language models, that, like, 80% of the results it returns are written by Americans and white Europeans, that it ignores results from a lot of countries in the global South that do have a lot of work on ecological restoration, all these kinds of things which we call biases but which I see as structural inequities in the way these models are trained, I think one of the most interesting things they found, and again, it makes sense based on what we know about these models, is that they never tell you about anything radical. Because, again, it’s a backwards-looking probabilistic thing. Right? What’s the best way to deal with this problem? Okay. Well, it’s going to give you something from its training corpus, which is probably based on the policies that have been done in the past. Except we are facing a world of runaway climate change, and things that have been done in the past are not the things we need to keep doing. Right? And it just would not answer about things like degrowth or agroforestry or anything that, you know, I wouldn’t even think of agroforestry as particularly radical, but it’s not mainstream. Right? It’s not been done a lot before. And so they just don’t want to talk about it. Right?

And having the capacity to look at a problem, know something about what happened in the past, think about what the world needs, and be creative, be innovative, and have an imagination is something that a lot of our students at Sussex really have, and that absolutely cannot be replaced by a large language model. Right? And I think, as well as emphasising that ourselves, we need to encourage them to become aware of that capacity in themselves. Right? You know, you might feel overwhelmed or demotivated, or you want to ask ChatGPT or Claude or whoever, but both what we are trying to get across as subject expertise and what we want them to leave with massively exceeds anything that this kind of very poor approximation of intelligence can offer.

Wendy Garnham: Yeah. It sounds as though there’s a big role to play for like active learning, innovation, creativity in terms of how we’re assessing students and how we’re getting them to engage with this subject material I guess. So that’s music to my ears. 

Heather Taylor (22:22): Also, in the same respect, we’re not meant to be information machines, you know? And I think if a student came to uni hoping to meet a bunch of information machines, well, they’d be wrong. Hopefully not disappointed, because it’s better than that. But, you know, I think also that teachers have the ability to say when they don’t know something, and to present questions that can help them and the students try and start to figure out an answer to something. And, you know, I really love it actually when I make a point or an argument in a workshop.

I had this last year with one of my foundation students, and I was very pleased with the argument I put forward. I was showing them how you make a novel but evidence-based argument, you know, where you take pieces of information, evidence from all over the place, to come up with a new conclusion. And I was very pleased with this. Anyway, this student of mine, she was brilliant. She rebutted my argument, and it was so much better than mine. Right? It really was, and it was great. And as a teacher, that’s what you want to see happen. And I think with things like ChatGPT and any of these AI things, they’re not going to do that. They’re not going to encourage that, and they’re not going to know how. You know? They’re not there for asking questions. They ask you questions to clarify what you’re asking them to do, but that’s it. And I think, yeah, I completely agree with you. Students can get so much more out of their education, you know, by recognising that they’re so much more than information holders. You know?

Wendy Garnham: So same question to you, Niel. Do you think generative AI is affecting how students value subject expertise? And if so, in what way and what impact does it have? 

Niel De Beaudrap (24:15): I think it does affect how students value subject expertise. This is something that I see when assessing final-year projects, or even just seeing what sort of project students propose for their final-year project. In computer science, as you can imagine, a lot of students are proposing things that involve creating some sort of model, not a large language model, but, you know, something that’s going to use AI to solve this and that, where they seem to have a slightly unrealistic expectation of how far they’d be able to get using their own AI model, and in particular where they’re contrasting this to the effort that’s required in order to solve things with subject expertise. They seem to think that this is something which is easily going to be at least comparable to, if not match or surpass, the things that can come from people who’ve spent a lot of time thinking about a particular situation, a particular subject matter. They think that a machine, just by crunching through numbers quickly enough, is going to be able to surpass that. And, you know, they of course learn otherwise, to a greater or lesser extent, basically to the extent that they notice that they have not actually met the objectives they hoped to achieve. It’s more the fact that they have that aspiration in the first place. Now, part of it, of course, again in computer science, is that there’s going to be a degree of neophilia. They’re enthusiastic about computers, and why not. They’re enthusiastic about new things that are coming along for computers, and why not. It’ll just be a matter of the learning process itself that maybe some of these things aren’t quite all that they’re hyped up to be. But that’s where they’re starting from, this idea that sheer technology can somehow surpass careful consideration. I find that a little bit worrying.

Heather Taylor: Do you think there are benefits of generative AI for student learning? If so, how can universities help students use these tools in ways that are ethical and supportive of their development as learners?

Niel De Beaudrap: I’m not actually an education expert, so I can’t really say whether it’s likely that there are ways that generative AI can help student learning. I have seen examples where a native Chinese speaker was trying to use it to translate my notes into Chinese. So, okay, there might be some application along those lines, in translating natural language; whether generative AI, as we’ve been thinking about it here, can do that well, I’m not in a position to say. Conceivably, as I suggested before, maybe it can be used to come up with examples of toy problems, or simple examples, supplementing the course materials, in order to come to grips with some particular subject.

 
That’s something where I can imagine one might try to use generative AI and it’s possible that things won’t go very badly wrong. Apart from that, I guess I’m viewing things through the framing of the subjects that I teach and my own particular interests, which basically involve the interconnectedness of a lot of technical ideas, ones that I find fascinating, so I’m going to be extra biased about that sort of thing. About, you know, learning about various ways that you can understand and measure things and structure them in order to get an idea of a bigger whole, of how you can solve problems. And you can’t solve problems without having the tools at hand to solve them. Okay, some people might say, maybe an AI can be such a tool. But before you can rely on a tool, you have to know how to use it well. You have to know how it’s working. You have to know what the tool is good at. So even if you want to say, shouldn’t this just be one tool among others, I don’t see a lot of evidence, first of all, that it is a reliable tool. The thing that you absolutely have to have in a tool is that you can rely on it to do the job that you would like it to do. Otherwise, I mean, you can use a piece of rope as a cane, but it’s not going to be very helpful to you. Maybe you just need to add enough starch, or maybe you should look for something other than a piece of rope. And this just builds upon itself. The way that you build expertise, the way that you can become particularly good at something, is by spending a lot of time thinking about the connections between things, by asking yourself questions: what does this have to do with that? Is that thing that people often say really true? Only by really engaging yourself in something like this can you make progress and become particularly good at something. And by devolving a certain amount of that process... okay, again, there’s a reason why I’m in computer science. There are some things, like summing large columns of numbers. Is it possible that we have lost something by not asking people to systematically sum large columns of numbers?

Have we lost something important about the human experience, or maybe just the management experience, by devolving that to computers? Well, there will be some tasks, maybe, where it is useful to have a computer, not speaking of LLMs, but computers in general, solve problems rather than having us spend every waking moment doing lots of sums or doing something more or less tedious. There will be some point at which the trade-off is no longer worth paying. And I believe that trade-off happens well below the point where you are trying to really come to grips with a subject and with learning. The only thing that one really wants to have a lot of for learning is a lot of different ways of trying to see something, a lot of different examples, a lot of ways of trying to approach a difficult topic. And beyond that, cooperating with others, that’s a different form of engagement. It’s a way of swapping information with somebody, ideally people who are also similarly engaged with the subject. It doesn’t help, of course, for you to ask somebody who is the best in your class and then just take their answer. That’s the same sort of problem that one has with LLMs: relying on something else without engaging with the subject yourself. The most important thing is to try to draw the line at a point where the students are consistently engaging with the learning themselves, with the difficult subjects. And if the resources that they usually have to hand aren’t quite enough, well, that’s not necessarily something that you solve with LLMs. You can solve that problem by providing more resources generally, and that’s a larger structural problem in society, I think, but that’s not quite what LLMs are about.

Wendy Garnham (31:07): It sort of sounds a little bit like we’re saying that the purpose of education is changing so that it’s more about encouraging or supporting students to be creative problem solvers or innovative problem solvers. Would you say that is where we’re heading? 

Niel De Beaudrap: I don’t know if I can say where we are actually heading. I think, obviously, it’s good to be a good creative problem solver. And there has always been, I guess, the question of whether or not we originally spent more time doing things by rote. You know, you solve an integral and you don’t ask why you’re solving the integral. We’ve had tools to help people with things like complex calculations, the slightly annoying, picky, fiddly details, for a long time. So, for example, a very, very miniature example of the same thing that we have with LLMs in mathematics is the graphing calculator, where you had something that was a miniature computer. It wouldn’t break the bank, although it’s not as though everybody could afford it. But you could punch into it, ask it to solve a derivative, to solve an integral, to plot a graph. All sorts of things that once upon a time, like before the 1950s, were solely the domain of people. And how to sketch a graph, in particular, was something that was actually taught: here are the skills, here are the ways that you can, not precisely draw what the graph is like, but have a good idea of what it’s like; it would give you a good qualitative understanding. And even now, even though I do not draw very many graphs myself, the fact that I was taught that means that I have certain intuitions about functions that somebody who hasn’t been taught how to do that wouldn’t have. Now, does that mean that I think that basically everybody should always be doing everything all the time with pencil and paper? No. There will be some sort of trade-offs.
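[Editor’s note: Niel’s graphing-calculator example translates directly into today’s tools. A few lines of SymPy reproduce the mechanical “calculator” side of the trade-off he describes; the qualitative intuition about the function is precisely the part the software does not supply.]

```python
import sympy as sp

x = sp.symbols("x")
f = x**3 - 3 * x

# The mechanical steps a graphing calculator or computer algebra
# system automates:
derivative = sp.diff(f, x)                 # 3*x**2 - 3
antiderivative = sp.integrate(f, x)        # x**4/4 - 3*x**2/2
critical_points = sp.solve(derivative, x)  # [-1, 1]

print(derivative, antiderivative, critical_points)

# The qualitative reading, that f rises to a local maximum at x = -1,
# dips to a local minimum at x = 1, and rises again, is the intuition
# that hand-sketching used to teach; no amount of automated output
# hands it to you for free.
```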

It’s a question, you know, with the creative problem solving. It’s good to have people spend more of their time and energy trying to solve problems creatively, to think about things, engage with them, rather than being constantly caught up in doing things by rote. Now, as far as the creative part goes, well, if you put the students in a situation where they are tempted to put the creative part into the hands, the metaphorical digital hands, of a text generator, then it’s not going to teach them how to engage with their subject in a way that they could deal with it creatively. Like most things, you have to know what the rules of the game are before you can interpret them well, or before you can know where you can break them. True creation is coming up with something which hasn’t been seen before, where you realise actually we can do things differently; and, of course, in mathematics and computer science, you’re concerned also with whether it’s technically correct.

The rules that we have in place aren’t in place because they are the only correct way to do things. They are in place because they are ways of doing things that we are confident will work out. That doesn’t mean they are the only way that things can be done. But you can only see these things if you know how the system was set up in the first place. Only if you know how to work with the existing tools that we have, through mastery of them, can you figure out how to do things differently in a way that will work well. And if you always devolve all the hard stuff, the so-called hard stuff, to a computer, that’s of course not going to tell you anything new; even if it accidentally spits out something at random that is a novel combination of symbols, it won’t be able to tell you why it should work in any way that you can be confident of. You need to have the engagement with the subject itself in order to even recognise anything that could be an accidental novelty.

Heather Taylor (34:59): Same question to you then, Paul. Do you think there are benefits of generative AI for student learning? If so, how can universities help students use these tools in ways that are ethical and supportive of their development as learners? 

Paul Gilbert: So my instinctive answer is, honestly, no. I don’t think there are benefits. Right? Maybe there are, but I haven’t been shown them yet. Right? 

And I think the reason I want to say that is because there is so much hype, and so much of a framing of inevitability about these discussions, that frequently people are saying, well, if we haven’t figured out how they’re going to improve student learning, don’t worry, it’ll come. Right? And we’ll know. If you’re going to change pedagogy and change the curriculum and change how we teach, show us first how it’s better. Right? I think that’s the reasonable way around to do it. One tech journalist, I think it’s Edward Ongweso, or possibly another journalist, talks about the way people tend to treat LLMs as stillborn gods. In other words, these are like perfect, all-powerful things that just aren’t quite there yet, right? So if you just hang on, they’ll improve our learning. Right? And we saw this in the discussions we’ve had at the university: the Russell Group principles on AI make a whole series of claims about how AI can improve student learning without a single example or footnote or reference. Right? And yet we can find pedagogical research showing that it can actually undermine confidence, undermine motivation, all these kinds of things. Right? So, until people can show me that it’s good, my instinct is to say no.

And, you know, to dig into that a little bit deeper: I know there are claims made about how it can be important for assistive technologies. And it might be that that’s true in certain cases. But there’s equally evidence that it can be disabling. And there are some colleagues in Global, Lara Coleman and Stephanie Orman, working on this. And part of this is also about what it means to learn.

So something I sometimes go through with my students, when I’m trying to explain what we want from them, is why giving example after example after example won’t get you a first. And I show them the Bloom’s Taxonomy triangle. I don’t know if you came across that in teacher training and stuff. It starts at the bottom: the bottom layer is, like, recall. You evidence this learning by just reproducing something you’ve memorised. And it goes on to understanding: you show, you know, what it means, how it works. And then you kind of apply your knowledge to new situations, draw connections between things, evaluate it, create something new. Right? And so this is a useful tool to show that if you keep just building loads of the bottom layer with more examples, you’re not going to get higher than a 2:2. Right?

Heather Taylor: That’s very good to know, by the way.  

Paul Gilbert: And equally, if you go straight to, like, I’m going to create something new at the top, but you haven’t built a foundation of knowledge, then you’ve got a rubbish pyramid that will fall over. 

Heather Taylor: What’s that called again? 

Paul Gilbert: Bloom’s Taxonomy. It’s a now quite old constructivist pedagogy example.

Heather Taylor: It’s very useful, though. 

Wendy Garnham: Put it into ChatGPT. 

Heather Taylor: Yeah. I like it –  ‘You’re just at the bottom, mate’. 

Paul Gilbert (38:16): But it’s also useful to show that just learning things and regurgitating them is not what we’re looking for. That’s not the kind of higher process of learning. Right? And I think I’ve been a bit disappointed to see people embrace the idea that LLMs can be really useful in creating study guides or creating summaries of articles and so on. Because what you’re doing there is outsourcing the process of creating the understanding, creating the knowledge object that moves you up the pyramid. If instead of understanding a text you ask ChatGPT to summarise it for you, you’ve essentially just stuck yourself in the remember-and-regurgitate bit lower down, because you’ve made it shorter, but you’ve not done the work to generate the understanding and the applications and make those connections with other readings you’ve done, the kind of things Niel was talking about. And so I’m really cautious of a lot of the hype and inevitability framing around the idea that these are going to be great for assistive technologies, that they’re going to improve people’s learning and stuff. If that’s the case, evidence it before spending loads of money and reshaping higher education around it.

And something else I wanted to add on to that: you were also talking a bit about cases where LLMs could potentially be useful. Right? You could create little problems, or, you know, okay, sure. And there was an article recently in the Teaching and Learning Anthropology journal that sort of made this case. And superficially, when you start reading it, you think, okay, they’re talking about something similar to what we’re talking about here. Right? They’re saying, we want to reject this model of education that’s just regurgitating facts. Right? I think we’re all on that page. And ChatGPT is an opportunity to get students to interrogate ideas, to ask what’s wrong with things, to engage in dialogue. And they even use the language of the Brazilian educator Paulo Freire to criticise the banking model, right, where you just pour ideas into the passive student’s head and they vomit it out. Right? None of us want to be doing that. But what got me a bit frustrated with that paper is the other half of what Paulo Freire was talking about: banking education is bad, and what you want is critical pedagogy, where people come to an understanding of their place in the world and the structures that create oppression and unfreedom, so they can use that knowledge to make a better world, to achieve liberation. Right? And it genuinely boggles my mind that people will invoke Freire in a discussion of ChatGPT and not mention the political economy of AI.

Right? It’s deeply frustrating that there’s this sort of sense that, oh, well, it’s incidental to this whole discussion about education that we’re talking about some of the greatest concentrations of wealth and power in history. Right? That is also premised on an absolutely insane expansion of energy and water usage. And it’s hardwired into the model. Because of this understanding that these models are, you know, stillborn gods and they’re going to be perfect when we get them right, that’s the justification for Zuckerberg, Sam Altman, all of them. Their model is all about scale, more and more, bigger and bigger, more data centres, more NVIDIA processing units. And that means more energy usage, and it means more water to cool those.

And so I think last year there was a study that showed worldwide AI data centre usage emitted the same amount of carbon as Brazil, right, which is a big agro-industry emitter. There have also been studies from UC Riverside suggesting that within two years, worldwide AI freshwater withdrawal, as they call it, will be about half of the UK’s annual water usage. Right? And that’s concentrated in certain areas, typically in low-income areas. And all of those studies were done before the last few weeks, in which Zuckerberg has announced a $23 billion investment to expand data centres, and OpenAI is trying to spend $500 billion building data centres, mostly running off fossil fuels. Right? Mostly located in low-income, often water-stressed environments. Right? So if that is the pathway to finally getting it right so that this model works, right, and, oh yeah, we’ll iron out those hallucinations and it won’t give you fake references anymore, genuinely, what is wrong with you that you think that is a good pathway to an educational future? Right? Like, I don’t understand it. And Sussex has these ambitions to be one of the world’s most sustainable universities and everything, and you can’t bracket that and pretend it’s not applicable to engagement with technologies that have this political economy, that have this political ecology.

And that political economy follows exactly from the claims about what they can do. Right? All of the claims about their magical power is based on scale. Right? These things are powerful because we’ve trained them on more data than you can possibly imagine. And we have more data centres, powering the models that are responding to more queries than ever that, you know, you couldn’t even imagine the scale of it. Right. Great. That pathway to refining these models is essentially ecocidal. Right? This is before you even get into the labour stuff and the fact that, you know, I really dislike the language of the Cloud because it implies a virtualism. Oh, The Cloud. Right? The Cloud is made of copper and plastic and lithium and silicon and stuff that is ripped out of the Earth.  

Niel De Beaudrap (44:02): So it invites you to think about it in an extremely vague way.

Paul Gilbert: Yeah. Right. Whereas, actually, what it is, is data centres, largely in low-income, water-stressed communities. Right? Last week, the same day that there was an FT lead story about Zuckerberg seeking $23 billion from private equity to expand his data centres, there was a story on the tech journalism website 404 Media about a community in Louisiana, one of the poorest towns in the state, that is going to have its utility bills go through the roof because a bunch of data centres are being built by OpenAI, which require the construction of new gas plants, and all those costs have to be paid for by someone, and it’s probably not going to be OpenAI. Right? Yeah. That is the backstage on which these magical LLMs are unfolding. Right? And I think if you’re willing to have a discussion about the educational benefit of these tools without situating them in that political economy, that political ecology, then, certainly as someone who works in a development studies department, that’s a massive intellectual and moral failing.

Heather Taylor: So, essentially, I mean, like you were saying, it’s all hypotheticals about whether eventually they can get AI to be magic. And there’s also, even before you go thinking about the environmental implications, lots of implications of, if you could make AI that perfect, what’s going to happen to people’s jobs, and there’s all that side of things as well. But, essentially, and this is a question: even if AI were to be magic eventually, and do everything that they want it to do, or that we theoretically want it to do, and I have no idea what I want it to do, the cost would be the world burning.

Paul Gilbert: Yeah, some people don’t want to have serious discussions about that. That’s fine. But then just, you know, you’re not a serious person, I guess. 

Heather Taylor: I knew that there were environmental consequences to AI, and honestly, this shows my ignorance, really, but I did not know it was this deep, or where the consequences were being worst felt, which is horrible.

Paul Gilbert: Yeah. And, you know, it’s utterly bizarre that a lot of data centres are being built in some of the most arid parts of the US, which are already water stressed. So aside from the massive ecological consequences of drawing down further freshwater and diverting it, what has happened historically in the US when you’ve had water diversion for industry, especially agro-industry, is massive fire risk. Right? We’ve all seen what happens to California in the summer now. And now loads of data centres are being built across the arid parts of California, and they will not work, they will catch fire and shut down, and the models will stop working, if they’re not cooled with vast quantities of fresh water. Right? So the guys building them have every intention of cooling them down with vast quantities of fresh water.

You don’t spend $500 billion on a data centre package if you are comfortable with it melting straight away. Right? So there is a genuinely huge ecological threat, and a livelihood threat associated with that, which almost always lands on the most marginalised communities, because, you know, when people dump massively polluting, water-stress-creating industries, they don’t usually do it in the most affluent neighbourhoods, right, because those people are well organised and well networked and everything. And, yeah, this is a serious part of it. And I think with the rush to scale, everyone just seems to have accepted that the only AI future is the one that we’re allowing Zuckerberg and Altman and people to lay out for us, which is three or four giant companies competing to buy up all of the world’s NVIDIA chips and create more and more data centres. Right? Maybe there are things that AI can do differently that don’t require this more-and-more, bigger-and-bigger scale operation. But if that’s the path we’re going down, right...

Heather Taylor: And there’s already an energy use problem, though, isn’t there? So we’re using energy for something we don’t need, because we didn’t have it a little while ago, so it’s not something that we need. You know? So even before we think about making it bigger, the fact that it’s even in existence now is quite a concern.

Paul Gilbert: And this is also why I find this sort of inevitability frame that people use to talk about this so troubling. Right? Whenever someone uses the framing of inevitability to talk about a new technology, they’ve got an interest in it. Right? Because it hurries things up, and it presents people who have questions as, you know, just getting in the way, because this is going to happen, right? So get on board. And this is explicitly what we’re hearing from our local MP, the Science and Technology Secretary.

Heather Taylor: That’s what I thought, and I’ve not got any stakes in it.  

Paul Gilbert: Things are made to be inevitable because powerful actors tell you they’re inevitable. There’s nothing inherently inevitable about this. Most people don’t even know what it can do when it works, and yet we’re accepting it’s inevitable. Right? I think that’s...

Wendy Garnham: Is there another side to the inevitability, though, which is that it’ll eventually fold in on itself because eventually you’ll be feeding the machines with information that it itself has generated, I mean, is that a possibility? 

Paul Gilbert: Possibly. I mean, the Internet is already so full of slop. Right? Just AI-generated garbage. And there are examples of that. I can’t remember who it was earlier in the discussion talking about translation. Right? Some of the folks at the Distributed AI Research Institute, Timnit Gebru and colleagues, looked at these claims that Meta and others have made about massive natural language processing models that could translate 200 languages. Right?

And they looked at a subset of African languages, ones which the research team spoke, and they found that it was really bad. Right? But it was also bad because it had been trained on websites that were translated by Google Translate. Right? And so when you gave it vernacular, vernacular Twi for example, it was just absolute rubbish that came out. Right? So that’s already happening. And then you think, well, what was that? Like, what is it for? It’s the kind of neophilia you were talking about. Right? People want new things. People want to move fast and break things, but why? What benefit is this going to bring us, and is it worth it?
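[Editor’s note: the feedback loop Wendy and Paul describe, models trained on their own output, is studied in the research literature under the name “model collapse”, and it shows up even in a toy statistical setting. In the illustrative sketch below, a Gaussian stands in for a model’s output distribution, and each “generation” is refitted only to the typical outputs of the previous one; the numbers are invented for illustration.]

```python
import random
import statistics

# Toy model collapse: each generation is fitted only to samples from the
# previous one, and generation favours high-probability ("typical")
# output, modelled here by discarding the rarest 10% at each tail.
random.seed(0)
mean, std = 0.0, 1.0

for generation in range(1, 9):
    samples = sorted(random.gauss(mean, std) for _ in range(500))
    kept = samples[50:-50]  # keep the central 80% of the samples
    mean = statistics.fmean(kept)
    std = statistics.stdev(kept)
    print(f"generation {generation}: std = {std:.3f}")

# The spread shrinks generation after generation: the tails, the rare
# minority outputs, vanish first, and once gone they can never be
# regenerated - the toy analogue of a translation model trained on
# machine-translated web pages.
```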

Niel De Beaudrap (50:40): Yeah. Yeah. It’s the slogan of a particular company to move fast and break things, and they had reasons for wanting to do that. It’s because it made them money. 

Paul Gilbert: Yeah. Yeah. And speaking about those companies, you know, you’re saying this is a recent thing. A few years ago, the Silicon Valley giants were presenting themselves as green. Right? Go back a couple of decades: Google’s slogan was Don’t Be Evil, which I think just, you know, became funny after a while. But, like, a few years ago, Microsoft was promising they weren’t only going to be, like, carbon neutral, they were going to become carbon negative. Right? Really leaning into renewable energy, carbon capture and storage, which is a whole other story about how it may not actually work. But, you know, there was this sort of green vibe they were going for. After the boom in LLMs from the launch of ChatGPT, they’ve all just chucked their carbon-neutral policies, right, straight out the window, and gone back to coal, back to gas, because we need those data centres. Right? Is this a good time to be doing that? When this is of unclear utility to us, and potentially poses a threat to jobs, and can’t do half of the things...

Heather Taylor: By the way, it’s boiling in here. 

Paul Gilbert: Yeah. Yeah. So, you know, can we help students use these tools in ways that are ethical? Well, we’ve got to ask, are these good for our students? Right?  Can we actually evidence that, not assume it because someone has told us it’s inevitable? And once we’ve figured that out, is it worth it? Right? There’s a whole bunch of things we can all do that make our lives easier. Are they all worth it? 

Wendy Garnham (52:15): So that brings us to our last question. So, Niel, I’m going to direct this to you first. Obviously, educators will have varying views on AI in higher education. But for now, it is something that we all must contend with. So with that in mind, what advice would you give to colleagues in terms of AI in higher education?

Niel De Beaudrap: Apart from acknowledging that it’s there, and maybe addressing students, and encouraging colleagues to address the students, to talk about AI and the things that it purports to offer and how it might fall short, the main thing that I would do is basically to not use it yourself, to not feed into it. The more that we try to use these tools to cut corners in our own work, not only is it providing a bad model for the students. Like, I think that students can tell when you put some effort into anything, from designing a script for your lectures, to your slide deck, to a diagram or something like this. If they see it modelled for them that it is quite normal to just ask a computer to spit something out according to some spec, they’re just going to think of it as a thing that one should do. It’s a thing that I systematically don’t do. I’ve never used any of these tools, because I’m not interested in feeding the machine that I think is going to undermine the things that I care about. I would encourage colleagues to ask themselves if they actually want to be using a machine that’s going to have this sort of effect.

Wendy Garnham: Right. Same question to you, Paul. What advice would you give to colleagues in terms of AI in higher education?

Paul Gilbert: I think my answer is quite similar to Niel’s. I fully agree. You know, don’t use it and then expect your students not to. Yeah. Come on. I have used it twice, to show my second years how terrible the answer to one of the assignments would be if they asked ChatGPT, and why they are better than it. Right? And I think what we can do, as well as getting our students to think about the data infrastructures behind this, its limitations, what it can’t tell them, how not smart it is, how destructive it can be, is also just put more work into highlighting for our students how much better they are than these models. Right? There’s all this discussion of, like, oh, you know, we’re going to replace lawyers and radiologists and stuff. And, you know, I’m not sure how true that all is. If you replace all the junior lawyers so no one has to read through court documents, who, in five years, is going to be a senior lawyer? Right? You’ve got to have the foundations. Right?

So, again, there’s a lot of inevitability framing and hype, which I think we need to cut through. But, also, we have a lot to offer our students in higher education, and they have a lot to offer us, that cannot be replicated, reproduced, or displaced by ChatGPT, and we should find space in the classroom to emphasise that. Right? Even if that is explicitly saying, look, this is garbage. It’s powerful and it’s big and it’s fast and it looks shiny, but you guys are smarter than it. Right? And I’m not just saying that. Genuinely, my students produce better stuff than AI could, and I think that’s true for a lot of us, right? And we need to give them that trust and meet them on that terrain, rather than assuming they’re all, you know, just itching to fake their essays. And then, you know, some people will, and that’s always been the case, and it will continue ever thus, right? There’s always been plagiarism and personation. We can’t stop it, right? But we can highlight the things that AI can never do, and encourage our students to value that in themselves.

Niel De Beaudrap: There’s something that I could add, by the way. It feels a bit weird to ‘yes and’ what Paul was saying about the ecological and the sociological impact, which, as far as I’m concerned, should be the conversation stopper in terms of the ethical use of AI. But there’s also the notion that these are stillborn gods, and that we can just work to improve them. Well, this is part of the technophilia, the technophilic impulse, in the computer industry generally. We can look at the other things that happened at scale. For example, Moore’s Law, where computing power became larger and larger and computers became faster and faster. This didn’t always make our software better. In fact, it made it worse in a lot of respects, because people stopped valuing writing code well. So even as things scale up, this isn’t going to be a guarantee of an improvement in quality. In fact, if the past is any indication, things will get worse. So even after burning the world, we may not have anything particularly nice to show for it. 

Wendy Garnham: Simon, as a learning technologist, do you want to add anything on the role of AI in higher education before we close our podcast? 

Simon Overton (57:34): I think the only thing that I would say, which I feel is perhaps a little bit hopeful, and it’s not just limited to education, is that I believe the use of AI and the proliferation of slop (great band name, by the way) is going to lead us to value things that are real a lot more. When I was at university, I really loved the essay ‘The Work of Art in the Age of Mechanical Reproduction’, I think it’s by Walter Benjamin, which was about how people worried that if we could produce posters of, you know, the Van Gogh Sunflowers, we wouldn’t value the original anymore. But that’s not what happened. It actually became more and more valuable. So I think and I hope and I believe that it’s all quite new and quite scary for us now, but that it will encourage us to value the things that Paul said just now that we can have, and that can come out of the time and the relationships that we establish in higher education. And I think that ultimately that’s probably a good thing, even though it looks kind of scary. 

Heather Taylor: I would like to thank our guests, Niel and Paul. 

Niel De Beaudrap: Thank you. 

Paul Gilbert: Thank you. 

Heather Taylor: And thank you for listening. Goodbye. This has been the Learning Matters podcast from the University of Sussex, created by Sarah Watson, Wendy Garnham, and Heather Taylor, and produced by Simon Overton. For more episodes, as well as articles, blogs, case studies, and infographics, please visit blogs.sussex.ac.uk/learning-matters


Making (class)room for AI: Integrating custom GPT into teaching and assessments

Photograph of Gabriella Cagliesi, smiling in front of a window.
Professor Gabriella Cagliesi (Economics)

Gabriella Cagliesi is a Professor in the Department of Economics and Teaching and Learning Lead at the University of Sussex Business School. Since joining Sussex in 2019, she has championed innovative and inclusive teaching, earning recognition for initiatives that close attainment gaps and enhance student experience. 

With a PhD in Economics from the University of Pennsylvania and over thirty years of teaching experience, Gabriella is research-active in applied international macro-finance, applied behavioural economics, and empirical studies on labour markets and educational choices and policies. She also collaborates on projects that support widening participation and enhance student outcomes.  

This case study illustrates how Gabriella integrates generative AI into her teaching, encouraging students to use AI as a study tool and engage with it critically and creatively.  

1. How do you bring generative AI into your teaching? 

I integrate generative AI through a customised ChatGPT environment, rather than having students use open platforms. This involves creating a closed system where I upload teaching materials, define the AI’s role, and set clear boundaries on what it can and cannot do. In seminars, students work in groups using this custom AI tool, and I emphasise ethical use and the risk of hallucinations even on a closed and bespoke AI platform. AI plays different roles across the teaching sessions, such as a teammate (acting as a devil’s advocate), tutor, Socratic teacher, simulator, or podcasting assistant. After each activity, students reflect on AI’s role and submit their interaction logs, which I review and provide feedback on. 
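
For colleagues who would rather prototype a similar closed setup in code than through the GPT builder, here is a minimal sketch assuming the OpenAI Python SDK. The model name, the role wording, and the seminar_teammate helper are illustrative assumptions for this example, not Gabriella’s actual configuration.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative role definition in the spirit of the case study: the model
# is confined to uploaded course material and given one of the roles
# described above (here, a devil's-advocate teammate).
TEAMMATE_ROLE = """\
You are a seminar teammate on an economics module, playing devil's
advocate. Use ONLY the course material supplied below. If a question
falls outside it, say so rather than guessing. Challenge the group's
reasoning with questions; do not write their answers for them.

Course material:
{notes}
"""

def seminar_teammate(notes: str, question: str) -> str:
    """One constrained exchange with the 'teammate' persona."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": TEAMMATE_ROLE.format(notes=notes)},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```

The other roles listed above (tutor, Socratic teacher, simulator, podcasting assistant) follow the same pattern with different role text; the key design choice is that the teaching materials and the boundaries live in the system message rather than being left to each student’s prompt.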

2. What approaches do you use to integrate AI into your teaching, and why have you chosen them? 

I use three main approaches: 

  • Interactive learning design: AI is embedded in classroom activities to encourage experimentation, scenario testing, and critical thinking. 
  • Teaching material development: AI helps create study guides and summaries of teaching, accelerating routine tasks while maintaining transparency with students. 
  • Assessment integration: AI is incorporated into assessments through synthetic datasets and reflective tasks, requiring students to critique prompts and evaluate AI’s limitations. 

These approaches were chosen because they align with my discipline (economics), promote higher-order thinking skills, and prepare students for real-world applications of AI. 

3. What impact has AI had on student learning, curriculum design, or academic practice? 

AI has significantly enhanced student engagement and understanding. Students report that simulations and visualizations help them grasp complex concepts better than formulas alone. It has improved critical thinking, as students learn to question assumptions and evaluate AI outputs. Curriculum-wise, I redesigned assessments to include AI-enabled tasks and reflection components, shifting focus toward interpretation, reasoning, and AI literacy. For me professionally, this work has led to invitations to present at conferences and collaborate across departments, fostering broader pedagogical discussions. 

4. Looking back, what would you do differently? 

Initially, I introduced AI only during seminars, but I now realise the value of pre-class integration. Allowing students to explore AI before sessions would have deepened engagement. I would also have focused earlier on student learning through AI, rather than on policing its use. Designing prompts that encourage reflection and reasoning has proven more effective than simply controlling access. 

5. Three practical tips for fellow academics: 

  1. Build your own AI literacy and confidence: Experiment with tools to understand their capabilities and limitations. Confidence in using AI translates into better student experiences. 
  2. Shift from content delivery to challenge design: Create tasks where AI supports but does not replace human judgment. Clearly define acceptable uses and disclosure requirements. 
  3. Use AI to deepen reflection, not replace it: Incorporate reflective activities where students critique their own reasoning and AI’s output. This fosters metacognition and critical engagement. 

Episode 12: Talking to Students about Generative AI

Photograph of our two speakers, Carol and Tom, of our producer Simon, and of our presenters Heather and Wendy.
Our guests, Prof Thomas Ormerod and Prof Carol Alexander, our producer Simon Overton, and our presenters Dr Heather Taylor and Prof Wendy Garnham.

The Learning Matters Podcast captures insights into, experiences of, and conversations around education at the University of Sussex. The podcast is hosted by Prof Wendy Garnham and Dr Heather Taylor. It is recorded monthly, and each month is centred around a particular theme. The theme of our twelfth episode is ‘talking to students about AI’ and we hear from Professor Thomas Ormerod and Professor Carol Alexander.

Recording

Listen to the recording of Episode 12 on Spotify

Transcript

Wendy Garnham & Heather Taylor:  

Welcome to the Learning Matters podcast from the University of Sussex, where we capture insights, experiences, and conversations around education at our institution and beyond. Our theme for this episode is Talking to Students about Generative AI. And our guests are Professor Carol Alexander, Professor of Finance, and Professor Thomas Ormerod, Professor of Psychology. Our names are Wendy Garnham and Heather Taylor, and we are your presenters today. Welcome everyone. 

Heather Taylor:  

Carol, how are you currently talking to students about generative AI, and what kinds of support or guidance do you provide to help them use it responsibly? 

Carol Alexander:  

Well, I teach two modules this term, one graduate and one third-year undergraduate. And on the third-year undergraduate module, using generative AI is one of the learning outcomes. I teach them how to use ChatGPT or Claude. I prefer not to use Claude. It’s not great for finance. For finance, ChatGPT is clearly the best. I teach them ChatGPT in workshops, or at least the chap who’s doing the workshops does. We design the workshops so that, at the beginning, they understand the importance of context engineering. It’s setting your project instructions well, or at least setting your basic personalisation, and then project instructions, keeping things tidy with projects, managing so-called memory. It’s not actually memory, because every time you send a prompt, the entire previous conversation is attached to it. So it’s realising that stuff early in the conversation can get lost, and the importance of recapping that, and of telling it to forget certain things, because this context window has a certain size, a certain number of tokens. Tokens here means bits of words and so forth. You might put in a very inefficient prompt, which is a few words long, and then it responds with 10 pages because it doesn’t really know what you want. And that is what you don’t want to do. You want to engineer each prompt you use to get an efficient output, not just efficient from the point of view of the students’ learning, but from the point of view of energy consumption as well. We go through all that, and then in the context of the financial risk management module, they do their workshops. I mean, not always using ChatGPT, but there are specialist GPTs, there are Excel AIs, and various other things. I’ve built a GPT called a Socratic learning scaffold, which helps them to learn anything new. And then I just set the project, which is 70% of the assessment for this. You know, what used to take days now takes hours, because I use my Socratic learning scaffold. It doesn’t have to be Socratic, but that’s just, you know, getting them to understand a bit of philosophy. Anyway, they must think out of the box, whereas in the past I would not dare to set a project exercise that they hadn’t already experienced something very similar to in workshops or lectures. But now I’m setting them something that they’ve not actually done before. It’s associated, but it’s a little bit more advanced than we’ve done, because they can teach themselves to do that, and that’s the first step of this project. 
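
The ‘memory’ point Carol makes is easy to see in code: a chat interface simply re-sends the accumulated transcript with every request. Below is a minimal sketch, assuming the OpenAI Python SDK; the model name and the ask and recap helpers are invented for this illustration.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You are a concise finance tutor."}]

def ask(prompt: str) -> str:
    """Send one turn; the whole transcript goes up every time."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative model choice
        messages=history,      # no hidden memory: the full conversation
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

def recap() -> None:
    """Carol's recapping advice, mechanically: condense old turns into a
    summary so early material is not silently pushed out of the model's
    fixed-size token window."""
    ask("Summarise the key points of our conversation so far.")
    del history[1:-2]  # keep the system prompt plus the recap exchange
```

Because the context window holds a fixed number of tokens, the oldest turns are the first to stop influencing answers once the transcript outgrows it, which is exactly why she tells students to recap and to prune what the model no longer needs.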

Wendy Garnham:  

Sounds as though it’s like fostering creativity. And the idea of sort of getting them to take what they’ve learned before, but to build on it and do new things with some of that learning.  

Carol Alexander:  

Well, I mean, the project starts off with a paragraph: are you interview ready? Because they’re third years. And the job market out there, of course, we can get on to talking about that later if you like, but, I mean, it’s awful for white collar jobs, particularly, you know, finance in the City. Graduate roles are being replaced with AI. One person can do the output of five now. But if they are competent with playing this piano, not just having the piano, but making a very nice melody from the piano and knowing how to tune it for whatever they’re going to play, they stand a much better chance of getting a job. And this project itself, I hope, will be attached to their CV, so in some way they can use it when they apply for jobs. 

Heather Taylor:  

Yeah. Amazing. Do they like doing it? Have you had good feedback? 

Carol Alexander:  

Because I record all the lectures, for my sins. Although I’m going to upload them to update my YouTube channel, because that’s very out of date. I did that in COVID, so it’s now, like, five years out of date. I did that for the same module five years ago. And so now, for every two-hour lecture, there are six prerecorded videos. And I go in there after Tom on a Monday morning (we share the same lecture room) and I just stand there and say, okay. You know, I put up a word cloud on PollEv and say, what do you want me to talk about? And everything is totally interactive. I may look at lecture notes, but I very often just scribble. It’s like the old-fashioned way of writing on the blackboard is back, although I do it on the overhead projector. And I’m gauging the engagement this year, and I haven’t seen anything like it for years now, particularly with Gen Z. You know? There’s eye contact. Very few people are looking at their phones, because it’s interactive, and they’re getting what they want. They’re asking the questions. 

Wendy Garnham:  

Music to my ears as an active learning fan.  

Heather Taylor (05:54): 

So, Tom, same question to you. How are you currently talking to students about generative AI, and what kinds of support or guidance do you provide to help them use it responsibly? 

Thomas Ormerod:  

So, we’re in a very different place today. A very different place indeed. For many of my colleagues, and I apologise if I’m speaking out of place, it’s still a threat, not an opportunity. I see it as a huge opportunity, and I’m just trying to work with the students so that they feel that too. So, I teach a final-year module as well, just before Carol comes on with hers. And I am in a position where I’m talking to our students about the positives of using generative AI. And I say to them, it would be irresponsible of us to let you leave university without skilling you to some extent in the use of what must be the most powerful intellectual tool since the invention of the computer.  

As a school, though, I think we are incredibly slow to pick up on this. We don’t teach them how to use generative AI. I teach them what not to do. I teach them that it’s really good for formatting. It’s really good for all those boring things like, you know, getting the right APA formats, which we seem obsessed about in psychology. It’s really good for data cleaning, data processing, data handling generally. It’s really good for putting in an essay and saying, tell me what you think about this. It’s really good for things like ‘I’ve written something, can you bullet point it so I can see my own structure.’ It’s really good for having a conversation with, and that’s where we need to teach them how to have a conversation. And the one thing I say that you don’t want to do is ask it to generate your content for you, because we’re very much still essay based, text based, produce a reasoned argument about something. I am setting ground rules in my teaching: you really should use generative AI, these are the good things to do, and here’s something you just don’t do. What I don’t say is ‘because this is academic malpractice’. I mean, it is, if you look at the regulations, but frankly, we’re not going to be able to tell, or at least prove, that people have generated material using generative AI. We can often tell, but we can’t prove it. What I say to my students instead is two things. First, if you use generative AI to generate your essays, you’ll probably get a 2:2, because it will be alright, but it won’t be great. More importantly, when you are a forensic psychologist sitting in a prison on your first day under supervision, you aren’t allowed a computer in a prison. You must know the stuff. You must think about the stuff. That’s the moment you’ll wish you’d used generative AI in the right way. One of the things I’ve started to do is put in assessments that encourage them to think intellectually about the material.  

I have this thing where I get them to write a monograph chapter: they get four papers, and they have to write a chapter that would go in a monograph, one that integrates across those four papers. And the one thing I put into this, you see: I say you must have an illustrative figure. You must have some picture that shows the point you’re trying to make about those four papers. ChatGPT at its best is bad at that, and the students really struggle with it as well, because it’s one of the first times they’ve been challenged to be truly creative intellectually, in an essay format. I think we’re finding ways where we can use generative AI positively, ways that free us from the traditional essay format. And I think my takeaway message to myself is: rethink the assessment to fit the new tools we have. 

Heather Taylor:  

Yeah. I love that. I love the idea of the assessment, and the point you made about encouraging them not to use it to generate their content. And I think it’s possible they could get a 2:2, you know, but it depends on the topic, I guess. I sometimes ask AI questions when I’m bored in the morning (no-one else gets up when I get up, so it’s just me in the lonely world at 5am) about things I know a lot about, and it gives me some nonsense back, like some real nonsense that would go against anything I would teach my students. 

Thomas Ormerod:  

Yes. I have a lovely example of this. You may have come across it before, and I don’t know if it’s still true, but it certainly was a couple of months ago. If you type into ChatGPT, draw me a picture of a beautiful room without an elephant in it, it draws you a picture of a room with an elephant in it. And it does that because it’s being clever. It knows the expression ‘elephant in the room’, so it thinks, you know, the elephant should be in the room; why else would you say do it without an elephant in it? If you say draw a room without a pig in it, there’s no pig in it. So that’s the way you can test it. Now I use this as a teaching aid with my students, because I say to them, you can all see that that’s quite funny. Right? It’s not doing what you asked. What if you didn’t know what an elephant was? When you generate content, you don’t know that content. So how do you know it’s right? How do you know it hasn’t put in material that’s incorrect? How do you know it hasn’t put an elephant in your essay? I don’t know how powerful that is as an illustration, but they all go ooh. 

Heather Taylor (11:32):  

You’re right. And I’ve asked it to do a reconfigured floor plan of my house before, where I moved the stairs so I could fit a bathroom in upstairs, and I told it all of this. I could probably get some tips from you on how to talk to AI more efficiently, to be fair. But it gave me a floor plan. It was 3D, which isn’t what I wanted. But, anyway, the stairs were there, the bathroom was there, except the bathroom was downstairs, and the stairs ended in the bath. Yeah. And I was like, I won’t try this. 

Thomas Ormerod:  

And did you do it?  

Heather Taylor: 

Exactly. It was great. Yeah. It just decided, how can I fit what she’s asking me for into this tiny space, which is my house? And it did do it, but in a completely unliveable way, because it doesn’t live in a house, does it? It doesn’t know. So yeah. 

Wendy Garnham: 

But it does it with papers as well, because I’ve done it with the students, where we show it a paper. We ask it to summarise the paper, and it tells you completely the wrong information. It pulls together ideas from other papers and creates a story. 

Carol Alexander:  

Yes. But you can control that, this hallucination. You can set up a project where the domain that it looks at is only the lecture notes. And then you can do your prompt more efficiently. You can use code words, ‘step by step’, for example. And, also, my personalisation, which is the base level of instructions, is: always ask a clarifying question at the beginning of a conversation. 
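
As a rough illustration of the setup Carol describes (a project whose domain is restricted to the lecture notes, plus base-level personalisation), the instruction text below is an invented example, not her actual configuration.

```python
# Illustrative wording only; the real instructions are not published.
PERSONALISATION = (
    "Always ask one clarifying question at the beginning of a new "
    "conversation before attempting an answer."
)

PROJECT_INSTRUCTIONS = """\
Answer only from the lecture notes uploaded to this project. If the
notes do not cover a question, say so; do not fill the gap from general
knowledge. When a derivation or calculation is requested, work step by
step and show each step.
"""
```

Grounding the model in a fixed document set narrows what it can assert, which is why a closed project tends to hallucinate less than an open-ended chat, though, as the case study above notes, it does not remove the risk entirely.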

Wendy Garnham:  

Yeah. I think without that input, though, certainly in the foundation year, if students think, we’ll just go to ChatGPT, they’re putting in the essay question or, you know, the blog post title, whatever, and seeing what comes up. And they do tend to just think, that’s a useful summary, or, you know, that’s really saved me ploughing through. We use that as a starting point to say, don’t be tempted just to go away and put this question into ChatGPT, because you cannot guarantee that what comes out is going to be useful, helpful, or accurate. So that’s how we start that whole story. But then, of course, we do our active essay writing, which we developed with Tom, where we get them to start with their own ideas first. So just, you know, what occurs to you when you see this as a title? What are the ideas you’ve got? What questions do you have about it? You know, so try and have confidence in your own ideas first before you turn to anything else. 

Carol Alexander:  

And I was just thinking, you know, we’re in a transition period. Soon, the students coming into university will be better than we are at prompt engineering. As I said, it’s like a piano, or like a car. Just because you’ve got one doesn’t mean, say, that you’re not going to drive it and have a crash. But this young generation are going to be growing up with it, and they’re going to be absolute masters at playing it, like they are at video games. So we’ve only got the next few years to educate at university level; by that time, we’ve just got to cope with the few students that actually want to come to university being better at using ChatGPT than us. What value have we got to offer? Because you can teach yourself so quickly with ChatGPT or any LLM. ChatGPT in particular is so much better than the others. 

Wendy Garnham (15:06):  

That brings us nicely to our next question. I’m going to ask this to you, Tom, first. In your discipline, where do you see generative AI offering the greatest value and what are the key risks or limitations? 

Thomas Ormerod:  

I think it could, if done correctly, be quite democratising, because there’s been, in the last 10 to 15 years, a huge change in the skills requirement for the behavioural sciences. In my day, you just went to the pub, recruited a few people to do little puzzles, wrote down what they said, and then published it. It was easy. These days, with the Open Science Movement, which has basically responded to the fact that so much psychological research just doesn’t replicate because it was done by people like me, we have a much stricter approach to methodology. So there are reams of stuff around pre-registration, there’s the use of the R environment for constructing your analyses, there’s a whole new range of statistical techniques, and a whole new range of qualitative techniques. I mean, wow, if you open up the world of thematic analysis, you find there’s a new thematic analysis method for every researcher in the world, as far as I can tell.  

And it actually means that a small subset of researchers are gaining access to journals, gaining access to grants, because the rest of us can’t reskill ourselves quickly enough. So one of the promises, I think, of generative AI is that it can take an awful lot of the legwork out of that and allow us to do the interesting hard thinking that we want to do, properly supported by tools that say, hey, you don’t actually have to learn all these funny little commands in R that you always get wrong because you don’t put a full stop or a capital in the right place. We’ll do that. You think about great things, about how the mind works. That’s my fantasy for what we should be doing in psychology as a discipline: saying, let’s use generative AI to help us think deeper thoughts. Let’s use generative AI to help us tackle questions like, how do you do belief revision? We’re so obsessed now with the methodology of our experiments that we don’t spend a lot of time thinking about how people change their minds. And that’s the level we ought to be working at as scientists. But instead, we’re working at the level of, how can I get my code to work? 

Carol Alexander:  

Yeah. Well, coding is a big plus in finance, you know, if we’re training students to go and work in the industry. The job of a developer, you know, writing Python code and things like that, is more or less non-existent now, because LLMs write perfect Python code. You just need to know how to ask the right questions to get the right code out. But with the advantages in finance, you have to distinguish between the advantages for educational purposes and the advantages for research. On the research side, you know, it can do mathematical proofs. Basically, what we do is we get mathematical models of financial markets or financial institutions, and we get data, and then we apply the mathematical model to the data, and we see whether our hypotheses about what should happen are true, with some statistical analysis on the data. And the models that we do the statistical tests on are derived from our basic formulation. Sometimes it’s quite difficult to get from that basic formulation to a form of the model that you can actually test. But now, I mean, it’ll even write the basic formulation for me. So, you know, deriving an economic model of behaviour in financial markets is easy. Anyway, that’s the research side. But on the student side, it’s still the same, because, of course, we’re educating them to do what we do in research and in the profession, which is build mathematical models with some data.  

So, you know, as I said, there are these specialist GPTs, generative pretrained transformers, a particular type of LLM. There’s a whole library of these in the OpenAI product. There are also libraries of similar sorts of things in Claude or Perplexity and things like that, but they’re much, much more advanced. These things have been produced for years, and there are now, I don’t know, 20, 120, 200,000 of them. As I said, I made my own, so if I can do it, I’m sure loads of other people are. The most popular one is for generating your own horoscope. And the second is Semantic Scholar, which sort of helps you write essays and things like that. But there’s a lot of Excel AI, for example. You upload some data. You just – I don’t even write now, because the dictation is so good on ChatGPT. All my ums and ahs, and ‘hang on, I got that wrong’: I just have a stream of consciousness, and then, even before I press send on the prompt, all the ums and ahs and the corrections I made magically disappear from the transcription. It’s extraordinary. I use that for writing emails. I just copy and paste it and put it in a prompt. Anyway, so I just have a bit of a mind dump and then put it into one of these Excel AIs. They use Python first, then they put the result in an Excel spreadsheet, and now you can click to view the formulas in Excel. 

Thomas Ormerod (20:49):  

Yes. I did a similar thing just yesterday morning. I was prompted by Claude: it said, you’re doing this inefficiently, why don’t I build you an app? And it built me an app to do it. I had to process 40 different spreadsheets. It just said, I could do this one by one for you, as you seem to be thinking, but why don’t I build you an app? And then it took the task down from a day to half an hour. 

Carol Alexander:  

The thing about Claude is it just creates these artifacts all over the place, and you can’t relabel your chats, and you can’t delete them. What you have to do is put them all in a project and delete the whole project if you want to tidy up some of those chats. But you can’t relabel them, so when you’re looking back, you’ve got no proper index to know what you’ve done in the past. For me, it was just hundreds of artifacts, most of which were useless, being produced. This is one example of an artifact that was useful. But had you gone direct to the artifact build, rather than letting it do it in a chat, you could have had a little bit more control over that artifact. You could have named it. 

Thomas Ormerod:  

I shall do that in future.  

Wendy Garnham:  

So that seems to relate more to sort of the value. How about the risks or the limitations? I’ll ask Carol first. What do you see as being the risks or limitations? 

Carol Alexander:  

Well, I mean, the singularity. You know? Artificial general intelligence, the point where they get more intelligent than us, more powerful than us. I mean, I was just watching this podcast by, what’s his name? The Diary of a CEO? He’s just done one with Roman Yampolskiy, I think. And he’s written lots of books. He’s completely mad as a hatter, I have to say. He believes, 100%, that we are living in a simulation run by superior agents, you know. He takes this idea of simulations to the extreme. And I mean, it’s true that there are some wonderful simulations. Apparently, in Google Earth 3, you can put yourself in the Grand Canyon, or up the Amazon River, and steer your own route. And you can just imagine, when that becomes a two-player game, you could be steering yourself up your Amazon river, and then your friend decides to put a Zulu tribe there or whatever. Not a Zulu, but you know what I mean. So if you take that to the extreme, then, yeah, this simulation theory has some credibility. But no. I mean, I think what’s going to be happening is that we become cyborgs more and more. Right? I’m attached to my phone, and I do worry I’ll lose it. I would much rather have it embedded in my arm, thank you very much. And other stuff, like ChatGPT: I wouldn’t mind having that as a sort of earpiece. You know, the Google glasses as well. We’re getting more and more like cyborgs. And then on the bioengineering side, you can imagine, you know, progress so rapid that human DNA or some brain cells could be put into a humanoid robot. They’re getting quite agile now, although the Tesla one’s completely empty. But in Japan, you know, they’re getting some good robots. And so, you know, what we’re looking at is very high unemployment, apart from plumbers, but maybe these robots could even be plumbers. But, you know, we’ve got a huge demand for construction, particularly if we’re going to go to war, the usual thing, you know, where the economy rebounds because of building things for war.  

And we’ve been struggling since 2008 because of the bankers, and the bankers aren’t going away, although that’s another story. Anyway, yeah, there’s quite a lot of risk. I wouldn’t take it to that extreme, but I do see large unemployment for a while as jobs change. White collar jobs are going to be few and far between. Certain jobs will survive, like hairdressers. You know, plumbers cost so much, and builders in Brighton are awful. I mean, we should start offering proper apprenticeship degrees in some universities, not all our degrees, but some universities like Sussex, I think, would be ideal for that, particularly since Brighton University used to be one of these wonderful technical colleges until Thatcher shut them all down and called them universities. We need to go back to that world where we are properly training construction people, and then it wouldn’t cost an arm and a leg, because there’d be a greater supply. 

Thomas Ormerod (25:58):  

I think Carol’s right to point to a potentially dystopian future. If you think about it from an educational context, I think the big takeaway message there is fear. Because we’re still learning the implications of the new technology, there’s likely to be a level of fear amongst faculty and students about what’s going to happen. What will my degree mean? How am I going to assess people anymore if I set essays that computers can write? I think, we hope, that will be a transitional phase. But we have to recognise it, because I think the anxiety it’ll raise will be quite considerable.  

And I think that’s why this is a useful session, because I think the takeaway message from both Carol and me would be that there is more to be discovered that is useful than there is to be worried about. Yes, we should worry, but we should worry positively. Okay, if we’re going to change the world of work, what are we going to change it to? Carol’s example is, let’s look at the construction trade. When I was doing AI research back in the 1970s (I was only about 5 at the time; that’s not true), the big worry was that robots would take over all the blue-collar jobs: you could have a vacuum cleaner that could sweep a house, a vacuum cleaner that could sweep the roads, you could do all the blue-collar jobs. Exactly the opposite has happened. It’s the white-collar jobs that are under threat. We must reconceive the world of work. We must think, what do we get people to do? In the same way that when the typing pool disappeared with the word processor, we had to reconceive what a PA does now, apart from, you know, buying the chairman’s birthday present for them. We had to rethink what those roles are about. And I think that if universities have a place, it’s to do that job. We need to rethink the future. And as part of our curriculum, we need to be sending our students away realising that that’s their job: to rethink the futures of the trades they’re going into, not just pick up a bunch of skills that would have been good in 1973. 

Wendy Garnham:  

I think that also changes that idea of anxiety about ChatGPT and AI and where it’s heading, because I think you have to have a level of anxiety in order to think about how it’s changing and how you adapt. For me, the danger is if we don’t have that anxiety and we just ignore it, or park it, and think we can carry on as we’ve always done because that’s just how we do things. Sometimes having that level of anxiety means we actually are thinking ahead and planning ahead. 

Carol Alexander:  

It’s the right level, though. You don’t want too much. 

Wendy Garnham:  

But you need some. Otherwise, you’re just stuck in a rut of doing things the way you’ve always done them, while the advances happen alongside and the two never connect. And that, for me, is the danger: this disconnect of ‘I know it’s happening, but I’m not going to change anything I’m doing’, whereas that level of anxiety can sometimes be a push towards thinking ahead. 

Heather Taylor: 

I was thinking about what both of you were saying, but also about something you said earlier, Tom, about researchers: if we didn’t have to spend ages figuring out our code, or whatever it happened to be, researchers could do the more interesting thinking. And when I’ve spoken to people before about ChatGPT, I’ve said we should be teaching students to do the things that it can’t do. But really, I think of those as the philosophical questions that have nuance. I don’t really think ChatGPT thinks. I think it tells me stuff. I don’t view it as thinking in the way a human thinks. 

Thomas Ormerod:  

I think that what Carol does is a good example of what we should be doing, which is not expecting ChatGPT to think, but letting ChatGPT help us to think. When I’ve used it most successfully, I’ve had a dialogue, though we do seem to get kind of ratty with each other sometimes. You can’t help but anthropomorphise when you use it. If you get into that dialogue state, where you’re asking it to critique you, then you can get some really positive thoughts coming out of it, I think. 

Carol Alexander (30:52):  

I’m very interested in my emotional reactions to it, because I iterate all the time. You know, I start off with a prompt and it gives me something, and then I look at it, and I dictate what I want to change, or what I don’t like, and press send, and then it goes on. And if it doesn’t succeed in progressing after about three iterations, I get cross with it. And I say, you’re rubbish. I don’t like 5. Give me 4.1. You know? Thank goodness I’ve still got access to 4.1. So I do choose a model. And I have to say that ChatGPT 5 does whir too much. You know, it’s slow. It thinks too much, even the Instant and the Thinking mini versions. But, anyway, I do like 4o. I like 4.1. It’s more of a workhorse, and I praise it. I say, ‘oh, gosh. I love you, 4o. You’re so good. Do you know ChatGPT 5 is awful? I wish they were all like you.’ 

Do you know that over 60% of the prompts to LLMs worldwide are on the human relationship side, i.e. asking for companionship, just basic chat, or therapy? Over 60%. I got that figure a few weeks ago from one of the, you know, one of my Google feeds, which, of course, only feeds me what I want. 

Heather Taylor:  

That’s quite a concern. And also, I know loads about obsessive compulsive disorder, so sometimes I’ll quiz ChatGPT on it, to see how good it’s getting and if it’s learning from me. You know? It used to be rubbish. It’s a bit better now. But my concern with people using it as therapy is that there are specific models that are used to treat specific disorders, even specific sub-symptoms or themes within disorders. And using a different therapy for a slightly different disorder, or whatever it might happen to be, could cause a lot of problems. It concerns me when people use ChatGPT for therapeutic reasons, because even self-help books come with the advice that if you need extra support, you should reach out for it, and that you should stop if uncomfortable feelings come up. ChatGPT could just be talking pure nonsense to someone, and that could be, you know, making their problems worse. 

Carol Alexander:  

There was a big law case against OpenAI. A young American chap killed himself. And around the time of the GPT-5 update, they put a patch on, so that now it’s much more conservative. So, you know, it’s a bit like HR: HR finds something has gone wrong, and then they issue all these rules that you’ve got to, you know, adhere to in your behaviour in the company. But then there’s going to be something else that somebody does, outside those rules. So they haven’t changed the basic way that ChatGPT operates. But the good thing about Claude, and the reason I got a subscription for a month, which I used for about a week, is that Claude is different. It’s not a GPT. It’s an LLM that’s built on a code of ethics, sort of governance principles. It’s almost like a constitution: ‘thou shalt not do anything that would help people build chemical weapons’, for example, was a recent change to its constitution. I thought it was a better way of building an LLM, and less prone to needing these bits of string and Elastoplast to keep it functioning. But then there was all this other stuff, so I cancelled. 

Thomas Ormerod:  

I’m just terrified about the idea of generative AI becoming like HR. It’s not the way I want to see it going.  

Simon Overton (34:53): 

I’ve got a slightly more positive take on AI, and it relates, if I may, to ‘The Work of Art in the Age of Mechanical Reproduction’, which was, I think, an essay by Benjamin? Walter Benjamin. Anyway, the point of that was that everybody was worrying that, with mechanical reproduction, works of art like Van Gogh’s Sunflowers would become devalued, because everybody could get a copy and stick it on their, you know, bedroom wall. But the opposite happened. It didn’t decrease the value of the Sunflowers; it increased it. And I feel that a similar thing is happening. A slightly more up-to-date example would be the production of music, and how these days people really value, for example, the experience of having their music on vinyl. And, I do a lot of YouTube, but when you look at music producers and all sorts of other content creators on YouTube, they’re very keen to state that what they do does not use AI. I wonder, and I hope, that rather than devaluing the things that we most like about being human, or the things that we value, including perhaps a more traditional form of university education, it might increase their value, and people would appreciate them more. And perhaps that is the direction that universities need to go in. People could do a Matrix and download the content into their head, like Keanu Reeves. But what can we actually do? What can we provide that is so much more human and so much nicer than that? And I think and I hope that is the direction. I wonder if you two have any thoughts about that. 

Thomas Ormerod:  

I’ll start. I think that is an optimistic view, and I think there’s some truth in it. I think the problem that you face is the volume issue. It’s a bit like the finest jewellery: it’s very, very expensive, very exclusive, and you know it’s the finest because of the hundreds of hours that have gone into it. You go to Ratners and you’re getting a very different product. One of the implications of what you’re saying is that a lot of the volume work, the soundtracks to adverts, that kind of thing, is going to be done by AI, because why wouldn’t it be? It doesn’t matter who owns the creative act there. Whereas your Taylor Swift music is still going to be written by humans, because Taylor Swift is a quality product and can charge premium fees. It’s almost, in a sense, the opposite of democratising it: you end up with a situation where an elite can afford human-generated materials, but the rest of us basically have to go to Ratners. 

Carol Alexander:  

Yeah. I mean, imagine: people might prefer to have human accountants they have a relationship with, or human personal trainers, or human therapists. But what LLMs are beginning to offer is that, for people who could not afford that sort of thing, there’s at least a basic level of accounting; or people in prison, who don’t get any therapy at all, at least, you know, they may get some help; or people on the streets.  

Thomas Ormerod: 

On the positive side, to come back to this, though, there are interesting and curious opportunities. In fact, I just bought the Internet address for ain’t.com – AI, no thanks. Because I was talking to people working in recruitment, and they were saying one of the problems they’re getting is that companies want an AI-free product, to differentiate themselves from the fact that so much CV processing and so on is done by AI systems, which we know introduce huge ethnic, racial, and gender biases: they’re taking all their content from the world outside, and the world is biased, so they reflect those biases in what they do. Being able to signal that you are an AI-free product is a bit like being able to say, you know, gluten free or ultra-processed free or whatever. I think there will be a market, but it’s a niche market. And what we have to face is the volume issue: AI does it better at volume. 

Carol Alexander:  

And recruiters are in a terrible situation now, because all CVs are perfect. They’re all written by ChatGPT, and they all look the same. Some of them even use secret words: even though they didn’t go to Oxford or Cambridge, they put ‘Oxford’ in white type in the footer, and then they get through the algorithms that way. 

Heather Taylor (39:56):  

I think, based on what Simon was saying, I do get your point. You know, you could have a smaller quantity and a higher-value product, and only a few people could afford it. But I keep going back to what you said earlier, Carol, about how you upload these videos and then go into your lecture and say, what do you want me to talk about? And they get to decide, and you said that improves engagement and so on. And I can see that it would. Okay? And it’s also very confident and brave of you to do it that way, and it’s great: to just be able to go in and say, I’ll talk about whatever you like. I’d love to be able to do that, but I just waffle, like I’m doing now. But I think maybe that’s the answer. I think maybe Simon’s point is almost that we do need to know how to use AI, and I mean ‘we’ loosely. I know you two know. Right? I don’t really know. I just argue with it a bit. 

Carol Alexander:  

You should watch my YouTube channel. It’s Professor Carol Alexander, and the playlist (because there are some other ones there) is called A No-Coder’s Guide to Surviving Generative AI. 

Heather Taylor:  

Okay. I am going to look. I think we need to know it so that we know what it can do, and so that we know what to offer students in terms of learning how to do this themselves. But also, once they know how to do it better than us, like you’re saying they will, we need to know which things are going to be valuable for them that AI can support but can’t replace. And I think it’s almost like you could have more handmade, more bespoke degrees, even at volume, though at volume it’s trickier. So that if we didn’t have to teach R (Jennifer Mankin will kill me for this; she’s in our department), if we didn’t have to fill whole modules with teaching them how to code, and if I didn’t have to teach how they’re going to structure their research report, I could just focus on their thoughts: their nuanced, individual thoughts about the things they’re writing about. Or I could get them to come up with a new research idea, or whatever it happened to be. If we could move away from these things because AI took care of them for us, I feel like it would be a more engaging degree for them, and one that could still be taught en masse. Okay? You know, you’d have to have people who are quite specialist in certain areas to be able to do that, like you are with finance, to be able to walk in and go, ‘what do you want me to say?’ 

Carol Alexander (42:39):  

How are you going to fill up an entire degree, and do you want to? If the jobs are not there, we’re not going to get students applying. 

Heather Taylor: 

I didn’t think about it from that point of view. If the jobs aren’t available, the students aren’t going to come to uni for that reason.  

Carol Alexander: 

But if we append an apprenticeship, as an electrician or whatever, you know, a plumber, as I said before, they can get jobs in that, but they still want to come to university as a transition between home and the real world, where they expand their mind. They expand their understanding.  

When I was at Sussex as an undergraduate, I was a science student. I did maths with experimental psychology, and we all had to take an optional arts subject. I took witchcraft. But the point is, you know, that some students in engineering, for example, and it may be civil engineering, may still have jobs there, but a lot of their degree would be taken up with just expanding their mind and deciding, okay, I want to take a module in blockchains and crypto assets, or something like that.  

Thomas Ormerod: 

It’s interesting that, in the white paper that came out the day before yesterday, the government is moving us towards this kind of cafeteria style of learning, which has strengths and weaknesses. Or at least it has threats. It has huge threats to the model we have now, but it also has opportunities for people to equip themselves with the skills that will get them jobs. 

I think, even at a finer grain than that, one of the challenges for us in academia is to work out what skills we want people to have. In psychology, the principal skill we’ve thought people should have is to take large amounts of information, crystallise it, pull out a point, critique it, and then design something on the basis of that. Write stuff, to put it simply. Nowadays, you could argue that at least one of the skills we should be sending our students away with is editing. Yes, you’re going to get the stuff out of ChatGPT. Your job is to edit that, to make it do the job you want it to do, because it won’t be perfect. It won’t be accurate. It won’t be complete. We should have a course on editing.  

Wendy Garnham:  

I’m just thinking. I went to a conference in the summer where they were talking about how they saw the future of generative AI, and one of their arguments was that eventually it will collapse in on itself, because you’ll be feeding back into it information that it has already given you, so it just becomes less of a useful tool. I’m interested to hear your views, both of you, on that. 

Thomas Ormerod:  

There is a fix for that, which is that you would have a ChatGPT 6 that is self-reflective. I’ve already seen how this gets used: in a sense, putting confidence levels on its information and using those as part of its updating, not just updating on the basis of ‘here’s another mention’. I think that is a genuine problem with our current understanding of the technologies. But once it becomes a problem, I think it could be fixed. Don’t you? 

Wendy Garnham:  

Looking five to ten years ahead, what psychological or behavioural changes might we expect from the widespread use of generative AI in everyday life? Tom. 

Thomas Ormerod:  

Well, optimistically, we will see people feeling more empowered, because they know that they have a resource that can solve a lot of their everyday problems, and that they can do bigger, better things: the technologies will have evolved in ways that make them much more interactive, so that they can start helping you to construct your ideas, construct your arguments, deal with your problems. That’s the optimistic end. The less optimistic side, behaviourally, is that we begin to shut down, because we lose confidence that we have anything to offer. And we do less, because other things can do more than we think we can.  

I’ve just finished a project funded by the Educational Enhancement initiative in the University, which wasn’t looking at generative AI; it was looking at plagiarism and what it is that encourages people to plagiarise. We did an intervention where we gave people essays to mark, half of which had been plagiarised and half of which hadn’t. And we found quite quickly that people gave the plagiarised ones better marks, because they looked better. The text was better. Then we showed them what we would mark them as, and how the plagiarised ones did really badly, and there was an immediate effect: we measured the Turnitin scores, and they went from an average of about 15% across the cohort to 2%. And I think there is this issue that people will lose confidence in their own ability to generate good material. As academics, we must counter that. We say, do you know what? They put an elephant in the room when you don’t want one. Your writing, even if it’s colloquial, even if it’s full of spelling mistakes or a few naughty words, we like that better. We prefer you to think for yourself, and show that you thought for yourself, than to get your grammar right, like what I’m not doing now. 

Carol Alexander: 

I mean, I just think that it’s going to act as a sort of magnifier of inequality. There’ll be those that don’t use it at all, and there’ll be those that just give up, because they use it but don’t feel they can add anything by using it. And then the few, the elite, who manage to increase their productivity so that what used to be done in a day is now done in an hour, are the ones who are going to thrive in every way. So, yeah, inequality is just going to get worse. 

Heather Taylor: 

And if I watch your YouTube video, I can become one of those elite. 

Carol Alexander:  

That’s the idea. 

Thomas Ormerod: 

The challenge there that I’ll set you, once you’ve watched that, and given that you’re educators: if you’ve been in prison for fifteen years, you don’t really even know what Microsoft Office is. You don’t know what email is. You don’t know what the Internet is. How will we use generative AI to help people rehabilitate themselves into society when they’ve got that huge gulf in their knowledge? I say this because I’m on the parole board, and I see these people coming up once a week when I’m on a parole panel. And they’re fascinated by our use of computers. We say to them, we’ll be looking at two screens. And one of them says to me, what’s a screen? What’s a keyboard? You know, these people have been in prison for years; they’ve gone in at the age of sixteen, and they’re, like, forty-six now, thirty years in prison, and they don’t know anything about this. It’d be a really interesting, difficult problem to solve: how can you use generative AI to get somebody back into the community after being out of it for so long? I don’t know the answer; I’ll throw it out there. 

Heather Taylor: 

It’s a great question.  

Wendy Garnham: 

Yeah. How do you start with that one?  

Heather Taylor: 

I would like to thank our guests, Carol and Tom. 

Carol Alexander: 

Thank you very much. 

Heather Taylor: 

And thanks for listening. Goodbye.  

This has been the Learning Matters podcast from the University of Sussex, created by Sarah Watson, Wendy Garnham, and Heather Taylor, and produced by Simon Overton. For more episodes, as well as articles, blogs, case studies, and infographics, please visit blogs.sussex.ac.uk/learning-matters 


Episode 9: Inclusive Online Distance Learning

The Learning Matters Podcast captures insights into, experiences of, and conversations around education at the University of Sussex. The podcast is hosted by Prof Wendy Garnham and Dr Heather Taylor. It is recorded monthly, and each month is centred around a particular theme. The theme of our ninth episode is ‘inclusive online distance learning’ and we hear from Sarah Ison and Brena Collyer De Aguiar.

Recording

Listen to the recording of Episode 9 on Spotify

Transcript

Wendy Garnham: Welcome to the Learning Matters podcast from the University of Sussex, where we capture insights, experiences, and conversations around education at our institution and beyond. Our theme for this episode is supporting Online Distance Learning, and our guests are Sarah Ison, Librarian for Online Distance Learning at the University of Sussex, and Brena Collyer De Aguiar, Senior Learning Technologist for Online Distance Learning. Our names are Wendy Garnham and Heather Taylor, and we are your presenters today. Welcome, everyone.

Sarah Ison: Hello.

Brena Collyer De Aguiar: Hello. Hi.

Wendy Garnham: Today, we’ll be discussing how Brena and Sarah support online distance learners, the challenges and opportunities of teaching in a global online environment, and what lessons we can take from Online Distance Learning (ODL) into wider teaching practice at Sussex and across the sector. 

Heather Taylor: So, Brena, to start us off, could you tell us a bit about Online Distance Learning at Sussex and your individual role within the Online Distance Learning team? 

Brena Collyer De Aguiar: Yeah. So we have a number of online master’s courses, and also the PG Cert. They are 100% online courses. My role is as an Online Distance Learning Senior Learning Technologist, and I’m mainly responsible for liaising with academics around course development. So when they develop the courses and the modules, I support them with assessment and pedagogic advice and design, you know, trying to promote innovative learning experiences. I’m responsible for designing the academic training, the essential training they have to attend when they are involved in ODL. And I also do a lot of accessibility reviews, to make sure our courses are accessible.

Heather Taylor: So, Sarah, same question to you then. Can you tell us a bit about your role in the Online Distance Learning team at Sussex? 

Sarah Ison: Sure. So I’m the Online Distance Learning Librarian, and I’m dedicated to supporting students with their information literacy, their research skills, referencing, and helping them to avoid academic misconduct, hopefully. I offer one-to-one sessions with students that want a little bit more support tailored to whatever they’re working on. And I provide recorded sessions and live sessions throughout their module, so they have a chance to drop into a session just to ask questions, or come to my launch session, which gives an overview of the ways in which I can help support them. I can also support academic staff if they’re developing new modules and help them get their reading list together, and I have colleagues based in the library who actually get the reading lists embedded in Canvas and make sure that they’re working. And I sometimes run bespoke sessions within a module at the request of the academic, tailored to something specific or to a certain point in the module, to help really encourage the students to think about something in particular that relates to my remit. So, you know, library skills, information seeking, finding scholarly material, that sort of thing.

Wendy Garnham (03:18): You work with a diverse global cohort of students, many of whom are international or mature learners, sort of balancing work and life commitments and coming from a range of cultural backgrounds. How do your roles help foster a sense of community and belonging among students who are studying remotely, often across different time zones or cultures and age groups? 

Sarah Ison: So I teach kind of once a module. I offer a session to all new students, whatever subject they’re studying. I try to schedule it in the middle of the day to sort of meet whatever time zone they’re in. Obviously, that is quite challenging. And it’s an opportunity for students to come across each other; the start of a session is quite informal. I encourage them to put in the chat where they’re from, what the weather’s like, how they’re feeling. Put an emoji and kind of reflect how you’re feeling at that moment in time. Just a couple of minutes at the start to do a little icebreaker, and just try and make it a friendly space, really, where people can ask questions throughout. They can either raise their hand formally in Zoom or they can just unmute and throw out a question or put it in the chat. And, yeah, I just try and make myself really approachable so students feel comfortable asking questions as we go along or following up with a one-to-one if they need. And, yeah, it’s a challenge because of all the different time zones, and it’s amazing the range and variety of people that study on ODL. You’ve got people working full time, part time, with caring responsibilities. They’re studying in their second or third language. I’m in awe of what they are juggling to commit to completing their degree or their postgraduate studies at Sussex. It’s incredible.

The Canvas site that I look after is called the Study Online Student Support Site, or the SOS for short. That’s a space where students can utilise the discussion board area and come together across courses. So in their modules, they’re just meeting other people on their module, but on the SOS, they can throw out a question, get a discussion going, and connect with their peers who are studying different topics, which is really good. And I’m always looking for ways to build on that sense of community and give students the opportunity to meet each other, but within my remit of supporting them. So, one thing I do is provide top tips every Tuesday. We have Top Tip Tuesday and Titbits Thursday, because we needed another word beginning with ‘T’ for Thursday, where I try and drop in timely, relevant tips depending on the week of the module: how to get your head around a long document or a big scholarly article, how to condense that down, note-taking tips, other top tips that just help them as busy learners who are juggling a lot. I like to give them time-saving tips and just help them get things done efficiently. So a lot of my focus is: if you watch this video, it’s only 20 minutes, but it will save you lots of time in the long run. So, yeah, really trying to support through time-saving techniques and making it relevant at the right point in the module as well.

Wendy Garnham: Just on that note, you mentioned the discussion board. Do you get quite a lot of buy-in to the discussion board? 

Sarah Ison: Not masses. So that is something that I think can always be encouraged. And if the usage dips a bit, then I can promote it a bit more. So that’s a good reminder that I can keep on promoting that because it just gives them a chance to share things. But the students do set up their own WhatsApp groups. So they do kind of get their own discussions going off official university platforms. So, you know, we don’t always know what’s going on there. But, yeah, I think Canvas is the main place where we can encourage community and coming together in different ways.

Brena Collyer De Aguiar (07:05): So I work much more with the academics, and I would say my support goes into two phases. One is the design phase, where I support academics in thinking about making assessments more authentic: making very clear the space for students to bring their own experience, and also what relates to the place they live. We have large-cohort courses; we have an MSc in Sustainable Development, for example, with a very big number of students, so how do you connect those experiences? The other way I try to bring this in is by promoting a conversational tone in the content itself, trying to bring that teaching presence even when students are doing their asynchronous learning. And then it moves into supporting the academics during the teaching phase. I’ve designed the essential training they have to attend: how do you set expectations, how do you make the module live, how do you communicate. All the live sessions in the ODL modules should be recorded, and that’s to provide a second moment of engagement for those who, due to their time zones or work commitments, can’t join live. But it’s also about putting some tools into practice. We have the VLE, Canvas. We organise the cohort according to time zones, to make it easier for academics to deliver live sessions that suit their students’ time zones. We also promote discussion boards set up by group, so students can communicate among themselves. And of course, some tips on design, like Sarah uses as well: having introduction discussion boards that give them a place to connect through their identity, discussion boards that are fun, or asking them to post pictures of their bookcase and try to guess who else is reading the same thing, or who that person could be. So that’s how I try to promote those spaces for the students, through the design and by supporting academics.

Heather Taylor (09:24): So, Sarah, online engagement can be a challenge, especially as many Online Distance Learning students and mature learners are balancing work and study. What approaches, whether through course design or student support, have you found effective in encouraging engagement and motivation? 

Sarah Ison: So I’ll focus probably more on student support, as, like Brena said, she’s not directly student facing. I’m probably the only one in the Online Distance Learning team at Sussex who is seeing students regularly, which is nice because then I get feedback and quite a lot of dialogue with them. And there is a Student Success team that also provides a lot of very personalised support for them. They call up the students, check how they’re doing. And I liaise with them to make sure that they’re aware of any updates I’ve given to the SOS, like with AI. There’s a lot of relatively new content that’s gone on the Sussex website that I’m drawing attention to in my sessions. So I’m telling Student Success what I’m telling the students, so that if they get a call from a student to talk about progress and they’re like, “Oh, AI, can I use that?” then they won’t be like, “Oh, what’s the latest advice?” I have a good sort of bridge between them to make sure we’re all singing from the same hymn sheet. And, so, yeah, I support through the sessions and just being there for students and providing drop-in sessions throughout the module. And the one-to-one element really works well. Obviously, you can’t provide that to everyone, it would be impossible, but so far, it’s not become overwhelming. And there’s a student I’ve engaged quite a lot with who has come to realise that she thinks she’s got ADHD. She’s awaiting a proper diagnosis, but I’ve been trying to help her compartmentalise the different things she has to do, because she gets so overwhelmed by the amount of reading, the amount of learning she has to do. So we’ve worked together to try and break it all down and think about how she can approach her module. It probably goes slightly beyond my remit, but it’s in terms of managing her readings and the library work and the research skills. She’s like, “Life hacks! Give me life hacks!” I’m like, “Okay. Here’s some top tips from last Tuesday”. But, yeah, we talked about how she thinks her brain works. And that has really helped me think about how we can be inclusive. How can we offer things from the get-go, from the beginning, that are accessible to everyone, whether they’re neurodivergent or neurotypical? How people learn is so varied, and the way that the team and I work together really helps, because that informs the handouts that I provide, the online resources, and what I put on the SOS. I’ve got that in my mind: “This has got to be accessible and inclusive”. And, yeah, trying to provide videos at the point of need, that people can dip into and that won’t take too long to go through, is a key way that I provide support to the students. So trying to do the personalised thing one-to-one is really important, but you can’t do that for everyone, and so you try and make the more generic videos as inclusive as possible. So those are my main methods: videos, the SOS, one-to-ones and live sessions.

Heather Taylor (12:24): You know what? Like you said, “But you can’t do it for everyone,” and you’re completely right. But importantly, and on theme, it isn’t for everyone, is it? Some people don’t want one-to-one. Some people prefer to absorb information relatively passively; some people want to engage with it in different ways. So, actually, yeah, that’s great.

Brena Collyer De Aguiar: I think for ODL students, the design of the modules matters. We have a very strong element of consistency across all modules and pages, which reduces the cognitive anxiety they would have in trying to look for information. And it’s student-centred, so it’s about removing as many barriers as we can. Most of the modules, for example, have links directly to the guides and to the Study Hub, signposting resources and examples. We really work with templates: templates for submissions, templates for information. I’ve been lucky enough, in terms of engagement, to be involved in a couple of projects. We gamified a module, for example, that made the students travel in time, and that was a very engaging experience. Like I said, on top of UDL, I’m a playful person and I love immersive experiences, so every time I can put my finger there, it’s like, “Let’s do something fun”. Of course, the consistency is not only in the layout; the learning is structured too. They have three different learning phases: they’re encouraged to go through the lecture content, apply the learning, and also reflect on it, on top of having spaces for creating community and engaging across the cohort. And flexibility: you can show your presence in different ways and in different formats.

In terms of engagement, the one thing, and I don’t know if I should say this, is that tutor presence really makes an impact. I’m telling you a lot about design, but if the tutor is not engaged, if the tutor doesn’t show their presence on the discussion boards, for example, trying to keep the discussion going, or use announcements in Canvas to show that the module is live and the teacher is there, you see a huge difference in engagement. Conversely, if you have a tutor who is really there, and I know the challenge, we have large-cohort modules, which require bigger teaching teams, you see that the experience is completely different. So it’s a mix: designing well, but making very clear that the human element is there.

Heather Taylor (15:23): Yeah. I think they’re so important. Even when you’re lecturing in person, obviously, if I’m lecturing in person, I have to be there and engaging because I’m there. But just being enthusiastic about it makes a big difference to the students. They’re not gonna get enthusiastic about something that I sound bored of, right? And I completely get what you’re saying about tutor engagement. If the tutor is not engaging with the online resources and discussion boards and so on that you’ve built, the students are probably gonna think, “Oh, this isn’t really necessary. If they’re not doing it, why would I do it?” So, yeah, that’s quite a challenge for you then, because you’ve made something good; it just needs to be delivered the way you intended it to be delivered.

Wendy Garnham: I love the idea of the gamification. That sounds really good.

Sarah Ison: I was very emotional when you launched that module. It was at the Educational Festival, wasn’t it? You collaborated with the academic, and it was amazing. They did this preview, like a film premiere. I was like, “Oh my gosh. I want to study this module”. And I’m interested in Sustainable Development, but it’s not my thing to study, and I was just like, “Wow, this is amazing”. And I felt so proud of you, because it was such a different sort of module, completely different, to gamify that. And it’s had some amazing positive reviews, hasn’t it? Yeah.

Brena Collyer De Aguiar: It has. We renamed the module; we called it Project Dandelion. Most of the immersion in that gamification was made through storytelling: we really thought about the narrative to put the students into that story. The other thing I thought was really good was that the module content was quite depressing, because we were talking about climate change and waste, etc. So it was a way to, how do you say, shift it. Instead of the ending being bad, we said, “Okay, we sorted it out. Now you need to travel back in time and figure out what we’ve done”. So then the students are in a much happier place, you know. They connect to the emotions and to what they could do. Storytelling. Don’t they say that, since the beginning, every human being has been a storyteller? We grew up telling stories. I love that, so I need to stop talking.

Heather Taylor: That is so clever, though, when you say it, because it is depressing, but you really don’t want to end a module with the world being on fire, do you? You know?

Wendy Garnham: So ODL students at Sussex show strong academic outcomes, including low levels of academic misconduct, I’m pleased to say. How do you support students’ academic literacies, particularly for those returning to study or unfamiliar with the UK academic context? 

Brena Collyer De Aguiar (18:23): All academics involved in teaching an ODL module need to attend what we call the ODL essential training. The first part is about teaching, and the second is about marking, because it’s all online, so it’s a bit different from what we do on campus. In the essential training I bring that awareness, so it’s user-centric: making sure they understand that they are dealing with people around the world, who may be far from academia, or who, like me, even if they were close, struggle with the referencing here. Just to make them very aware that those challenges are present: they can’t just assume their students are UK-based. And signposting the resources: where would you go for support, flagging that Sarah is there, on the syllabus page and at important places throughout the modules, with links to resources, further module information and support. The other support moment we have is that every time an ODL module runs, we have the module survey. That triggers what we call the module evaluation, where the development team, the pedagogic advice team and the academic sit down with the survey and reflect further. So every time there is an academic literacy challenge, we go back and think: do we need to signpost more resources? Do we need Sarah to step into the first welcome live session to show students how to access those specific resources at the library? And hopefully we address the challenge and improve every time the module runs.

Sarah Ison: In the last couple of years, I’ve embedded a few new slides into the live session that I do in the first week of every module, acknowledging the cultural differences when it comes to referencing and what people’s experience might have been wherever they studied. One of the academics on ODL told me once how, all the way up through her master’s, she was in South America and she never had to reference anything. It just wasn’t done. You didn’t need to; you could just write, and that was your essay. You handed it in. Fine. And how different is that here? It made me think: I can’t assume that all the students joining know the UK Western way of referencing and exactly how it should be done unless it’s made really explicit. Academics, and people like myself in a support position, have a really important job to make that as clear as possible from the beginning. So I see it as a privilege that I meet these students. I try to make it really clear that what they’ve been used to might be different from what’s expected now. So I try to be really clear about academic misconduct, what that actually means and how to avoid it, and the importance of referencing and how you must cite your sources. You must say where those ideas came from. And I really drive that home.

And I was able to adopt the university’s academic practice online workshop. That’s a Canvas site that students get referred to after an initial case of academic misconduct. They have to work through it and understand what is meant by collusion, academic misconduct, etc., then there’s a quiz, and they have to be checked off as having done it. I was able to offer that from the beginning, and I promote it to the students. All ODL students are enrolled on it; they’ve got access to it. We’re in the process of updating it at the moment with other key colleagues from the university. But I use it proactively to say, “Have a look at this. You might end up coming back here if you accidentally plagiarise or something. But here’s the information. It’s optional, but please do invest some time in understanding how to avoid academic misconduct”. So I try to be strategic and always mention referencing and the importance of it. But I think it was really important to address the cultural differences and not assume everyone knows what’s going on just because they’ve enrolled at Sussex. They don’t necessarily know what’s expected, because it might be so different from what they’re used to.

And we now have a dedicated Online Distance Learning librarian inbox, so students can just fire off questions, and it’ll be covered by me or a couple of my colleagues in the library. There’s always a place students can come to ask. As most librarians will say, if we don’t know the answer, we’ll probably know where to find it and connect you with the information that you need. That’s one of the best things about the job. One of my colleagues says she doesn’t quite know how I get so excited about academic skills, but she’s like, “I feel really inspired now to think about reading and writing”. And I just get excited when I connect people with the information they need. A lot of my job is signposting: helping people find what they need to achieve what they need, but also to avoid important things like academic misconduct.

Wendy Garnham (23:11): I think that’s important, not making too many assumptions about what people are bringing to the learning session. It’s quite easy, when you’ve been delivering content for some time, to assume certain basic levels of understanding that may not necessarily be there, certainly with quite a diverse cohort. So it’s a good reminder.

Heather Taylor: Yeah. Brena, what are the key things you’ve learned from supporting Online Distance Learning students, and what advice would you offer to academic staff, whether teaching online, in hybrid mode, or face to face about inclusive practice, engagement, and student-centered course design? 

Brena Collyer De Aguiar: You can probably sense I’m not from the UK. So I’ve linked my experience with my work, in terms of diversity and coming from a different cultural background. And I would say that one of the biggest things I’ve learned is about expectations. Online learning requires a lot of expectations to be set, and this should be part of the design from the beginning to the end: expectations in terms of the time you’ll be spending on the learning, expectations about language, technical expectations as well. Beyond expectations, it’s making sure that the design, and my thinking and my advice, really bring inclusion as the focus. Let’s think further. I was already aware of accessibility from my earlier software design work, but now it goes beyond that. And it’s about understanding, and probably showcasing, that by having the content accessible, you’re not only benefiting the students who need it; you’re also promoting different ways for everyone to access the information.

In terms of advice, and these link very well with the UDL principles: for me, again, being from Brazil, we are all about emotions and feelings, so it’s about trying to bring some empathy pedagogy into your practice. Try to promote connection and presence. And when I talk about this, I usually flag that it also includes the tutor. It’s about enabling that space for everyone, not just the learners, but everyone involved in that caring learning community, which, again, enables a better learning space. Joy and fun: I’m all over playful learning, immersion, storytelling. You don’t need to be a pro; there are small things you can add to your learning and teaching practice that can make learning more fun. And, not to promote ourselves, I would say: reflect and develop. If you can, and I know some module leaders do this, run a reflection session at the end with the students to bring in their perspectives and what they missed. But also connect with colleagues who have had similar experiences, and, of course, use the resources and training available.

Sarah Ison: Yeah. I feel like I’ve learned a lot about the importance of accessibility and learning design and how that relates to being inclusive. A way in which that’s been more personal for me has been working with people who are neurodivergent and have really clearly explained to me what helps them work well in a team, what helped them work when I was managing them. And that’s helped me think, “Okay, what about neurodivergent students? How are they going to see what we’re offering, and what small things can we do, as Brena said, what small things can we tweak, that actually make all the difference to those learners and are accessible for everyone?” It’s not making it harder for neurotypical people; it’s just making everything easier for everyone. So that’s my key thing for this question, really: considering how other people learn and take on information. If you’re an academic building a module, think about what you’re asking them to read, how you’ve laid out your reading list material, how clear you’re being about the expectations. If there’s a group assignment, think about how that’s going to be processed by someone that’s not you. Putting yourself in someone else’s shoes is so important to being inclusive. And, as with Brena, I haven’t done a lot myself, but playful learning is such an amazing way to learn, and I want to find more ways of doing that in the opportunities I have through working with ODL students. But, yeah, small things make a big difference, so remembering that as you go forward will hopefully be a good thing.

Wendy Garnham (27:52): It sounds as though one of the real strengths is being able to draw students in to experience what they’re learning about, or at least, as we’ve said, immersing them in it. So is that more difficult with Online Distance Learning compared to when you’ve actually got students in the room with you?

Sarah Ison: There are so many tools we can use now to break out into groups and to collaborate in so many different ways, using Padlet boards and other tools that help you get ideas together and work together. And we have this amazing relationship in our team where, although we’re made up of three quite distinct groups of colleagues and our work doesn’t always overlap, we have this brilliant connection, because we have short regular meetings as a whole team where we use all these different tools that we could use when working with students as well. I’ve sometimes used a Padlet board where I’ve said, “How are you feeling, students, at the start of your module? Put up emojis and pictures, whatever. If you’re feeling a bit negative about stuff, put it on this side; if you’re feeling good, on this side”. And then at a glance I could get people’s instant reflections on how they were feeling about what I was about to talk to them about, which helped me to adapt my approach to the rest of that session. So you have to be quite flexible and fluid, and be able to stray a bit from what you might have been planning. If everyone suddenly says, “I’m really scared about reading long articles because I’ve had a break in study, and I don’t even know how to read an academic article”, you might think, “Okay, if 20 people are saying that, I’d better focus on that in this session and maybe talk less about reading eBooks or something”. There might be different things you can focus on. And when you bring that interactivity into a session and you get other students’ input, it shapes what you do, but they feel listened to, and that’s inclusive.

Wendy Garnham: Yeah. Sarah Ison and Brena Collyer De Aguiar, thank you so much for being with us today. Thank you. And thank you for listening. Goodbye.

Heather Taylor: This has been the Learning Matters podcast from the University of Sussex, created by Sarah Watson, Wendy Garnham, and Heather Taylor, and produced by Simon Overton. For more episodes as well as articles, blogs, case studies, and infographics, please visit blogs.sussex.ac.uk/learning-matters.

Keep listening for some bonus chat about today’s topic. 

Heather Taylor (30:32): I guess as well, these students have decided, for whatever reason, that they want to do an online course, and I think that can make a really big difference when we’re trying to deliver something. The problem sometimes with online resources for in-person students is that they’re quite passive with them, whereas these students are choosing to do this.

And remember that year when we were all dealing with COVID? Remember COVID? Anyone who signed up to come to the uni that year knew right from the beginning that it was all gonna be online, right? I think they knew from the outset, didn’t they? Because of the timing of the government or whatever. But, I’m not allowed to mumble. Sorry, Simon. And that year really made me think about it, because you both talked about empathy, about the importance of connecting, listening, responding. I actually think that year, with the students who were all online, I don’t feel as though I had any less of a bond with them, or any less of a rapport, and I think it’s because we all signed up to the same thing, right? And I did tell them at the beginning, “I haven’t really taught online before. I want you to get the best out of this. In order for you to get the best out of this, you gotta work with me, you gotta help me – you gotta get involved – you gotta use the chat, you gotta use the discussion”. And they were so committed. They were so lovely, weren’t they?

Wendy Garnham: They were really engaged.

Heather Taylor: So I think, yeah, I don’t know, I think that’s really important, though. It’s really lovely that we have these online courses, because for some people that is just better. It’s going to suit them better, around their schedule or just the way they like to learn. So I’m really pleased, and I know you wouldn’t do this anyway, that you didn’t just take a standard course and go, “We’ll just put it online”. You’ve gone, “All right, how can we make this good? How can we make it engaging? How can we make it meaningful? How can we connect with the students?” And I think that’s such a fantastic thing, basically.

Wendy Garnham: I think just making the students feel heard is the absolute bottom line, because when you’ve got big groups, and it sounds as though you have got quite big groups on the online courses, as great as that is, the bottom line is: how do you then make all of those students feel heard as individuals? I think some of the things that you’ve shared are really going to promote that sense of “I matter, I was seen in this course, the tutor’s hearing me”. It’s quite hard to get to that level, but I think it’d be useful no matter who’s teaching what, in what context. That’s quite a key message for me to take from this.


Innovation and best practice: Insights from Life Sciences in 2025

I had the pleasure of joining Life Sciences in mid-December for their teaching and learning away day (well, it was a morning, but there were biscuits!). I love attending these kinds of events. In my role as an Academic Developer, I have the privilege of working across disciplines and seeing colleagues’ hard work and innovative practice bear fruit. My magpie mind also gets the opportunity to pick up lots of sparkly insights and examples I can stockpile and share with colleagues from other Schools and Faculties as examples of great practice.

So, here are some recent gems from Life Sciences. 

Embedding Employability – A student perspective

Greig Joilin and Valentina Scarponi shared insights into the impact of changes made to the Life Sciences curricula to embed employability skills and better enable students to identify and articulate those skills through their degree and beyond. These changes, which began rolling out with the 2022/23 cohort, include dedicated skills modules in years 1 and 2, teaching and assessment tied to disciplinary areas, careers skills sessions, employer panels in each year, and sessions on work experience.

Greig and Valentina surveyed students at the start and end of a skills module that includes CV writing and mock interviews (marked, with feedback, by the AI tools CV360 and BigInterview). At the end of the module, students reported recognising the importance of these career skills and said that the new modules are helping them to develop them.

When asked what skills they still need to develop, communication and confidence remained high on the list. The next steps under consideration are therefore to do more to scaffold oracy skills through the curriculum.

Reflections from Portfolios for Biomedical Science Students 

Lorraine Smith also talked about the impact of encouraging students to evaluate their achievements and the skills developed across their second year via a reflective element in a portfolio assignment. Students were given lots of guidance on reflective writing from the start of the year, including YouTube videos of students talking about reflective writing, and a workshop then took them through the process and provided a safe space to have a go and engage in peer feedback. Importantly, Lorraine started the workshop by talking to her students about her own experience of reflective writing (like Lorraine, I also find it uncomfortable!), shared an example of her own writing, and invited peer feedback from students.

The value of the reflections went beyond the students themselves. Lorraine reported that the submissions gave her insights into the impact of the new Life Sciences curriculum on students’ development of, and ability to articulate, employability skills, plus insights into how they had interpreted and acted on the feedback they’d received along the way.

Embedding AI Literacy into the curriculum

At the end of the morning, Greig Joilin returned to the podium with a call to action for colleagues to plan how they will weave AI literacy into the Life Sciences curricula. This includes, but goes beyond, learning about the effective and responsible use of generative AI: it also means helping students understand how all forms of AI are currently being used and will shape disciplinary practice in the future.

Investigating factors in large group teaching

Alex Stuart-Kelly shared the outcomes of an Education Innovation funded project, undertaken with Oli Steele from BSMS. Their work has provided some original and nuanced insights into the value of active learning in the classroom, into the ways in which students prefer to participate, and into those they would rather avoid. In short, they found that all active teaching approaches improved engagement, as did the use of clear narratives, sections (e.g. linked to specific session learning outcomes), recaps, and varied formats. Students also like low-stakes opportunities to participate in the lecture, particularly when these are social or anonymous (i.e. they don’t require students to speak out individually), varied in difficulty, and integrated with feedback.

Alex and Oli will be publishing their full study in due course. Until then, see Educational Enhancement teaching methods guidance and information about teaching tools that can be used to support student participation in the classroom. 

Improving 3rd year project supervision and skills learning 

Doran Amos shared insights from his investigation into student experiences and learning through their final-year projects. Life Sciences students can choose to do an experimental project, a public outreach project or a critical literature review. Because of this, while all project supervisors work with a number of students, the nature of the work, and how students and supervisors collaborate, varies.

Students identified pros and cons of individual and group supervision meetings. For example, while they value finding out how their peers are progressing and learning that they aren’t alone in dealing with challenges or concerns, they also value the opportunity in 1:1 sessions to ask questions they might feel less confident about voicing in front of others. Doran’s suggestion was to ensure that all students are offered a mix of group and 1:1 supervision meetings.

Accessibility and student experience in labs 

Kristy Flowers, Life Sciences’ Senior Technical Manager for Teaching, shared the incredible work she and her team do to make lab work as accessible as possible for all of their students, including enabling a partially sighted student to engage in lab work by creating raised-line and braille-labelled diagrams and 3D-printed slide preparation guides.

Perceptions of feedback: the use of self-reflection to improve student satisfaction  

Jo Richardson shared outcomes from an Education Innovation funded research project led by Sue Sullivan from Psychology, in collaboration with Jo and colleagues from the Business School, Psychology and Sociology. The study found that students who were asked to complete a short reflective quiz on their engagement with a module (e.g. attendance, completing reading, etc.) and who then received marks in the 50s were much more satisfied with their feedback than students who hadn’t done the reflective exercise. Learn more about the project and the implications of its findings in their recently published research article.

Heroes in a half shell 

Using the lovely example of a group of students who have adopted and developed a project investigating winkle populations (half shells – geddit!) on the Sussex coastline, Kevin Clark talked about the importance of nurturing students’ curiosity and enthusiasm and the value of supporting them to just get on with projects that interest them.

Building the STEM ambassador network for students

Haruko Okomato talked about membership of the national STEM Ambassador Scheme, which is supported by UKRI, and how the scheme’s online community portal could be used to develop and support a staff and student network of volunteers at Sussex. Also – they provide free DBS checks! 

If you would like to learn more about any of the examples of practice outlined above then look out for future case studies. In the interim, everyone who spoke at the event is happy to be contacted.  


Cross Cultural Conversations about AI in Education (2022-2026): University of Ghana and University of Sussex

by Akweley Ohui Otoo and Kate O’Riordan

As part of a reciprocal mentoring project with colleagues at the University of Ghana, we offer a reflection on cross-cultural experiences of AI in education.

Introduction

Generative AI has a longer history, but 2022 marked its popularisation, when user interfaces were made widely available. Take-up of generative AI has increased exponentially since then, and its implications and impacts are still emerging. However, some things are clear. Like the web moment of the 1990s, the web 2.0 social media moment of the early 2000s, and the pandemic-driven adoption of technology platforms across new areas, this is an exponential expansion that will continue to have major shaping effects, even if there is some fall-off and retraction.

In the UK, public discussion about generative AI moved very quickly into education, particularly higher education. Coverage in major UK news outlets in 2023 shows a very high incidence of references to university students. Higher education is not always mainstream news in the UK, but generative AI brought discussions of HE into mainstream coverage. These usually focused on assessment, and the debate was characterised by concerns about student cheating and assessment design and, less frequently, about AI marking. In Ghana, public discussion was triggered even earlier, particularly when Google AI opened in Accra in 2018. AI has been central to government and business discourse about innovation and economic growth, and generative AI has been central to debates about education in the same period.

In this context, HE institutions, academic bodies, technology providers and advocates, in both Ghana and the UK and internationally, have moved to look at approaches, principles and scenarios to shape academic practice in a post-generative-AI context. They have also looked at new user interfaces, tools and technologies, exploring innovation and use through both boosterish, promotional discourses and highly critical, resistant ones. This utopian/dystopian dynamic characterises the discussion, imaginaries and implementation of new technologies and has been explored in the social studies of science literature. It is usually tempered through the language of challenges and opportunities.

In this broader context, we compare the conversations, policies and technologies in two different university contexts: Ghana and Sussex. The University of Ghana and the University of Sussex have a long history of connection, and an important strategic partnership. This has historically been focused on research collaborations in the sciences, education and international development. Through 2023-2025 this also included a project through CEGENSA (led by Professor Deborah Atobrah) at the University of Ghana, and the gender equality workstream in the EDI unit at the University of Sussex (led by Professor Sarah Guthrie). This project established a reciprocal cross-cultural mentoring scheme across the two universities, which is the context in which these reflections about AI have been developed. 

More broadly Sussex and Ghana have had strong education relationships nationally, through students and alumni, and regional connections including the Fiankoma project in the 1990s (Pryor, 2008). This was another cross-cultural project which aimed to link the community of Fiankoma in Ghana with people and educational institutions in Sussex (in the UK) through digital technologies. Educators and students in both settings produced accounts of their lives using digital media that were turned into a web site for cultural exchange and development education. 

The current Ghana-Sussex cross-cultural, reciprocal mentoring project paired colleagues across both institutions, from different disciplines and across different roles. When we, the authors of this paper, first met, we were coming from different disciplinary backgrounds: Otoo is a social psychologist in the Department of Distance Education, and O’Riordan is a digital media scholar from media and communications, currently based in the Vice Chancellor’s Office. Our respective institutions were in very different places in relation to debates about generative AI, and at the same time there were very strong common themes. We outline and reflect on these differences and similarities here.

Post pandemic

The COVID-19 context informs what has happened with generative AI, and this played out differently at the two institutions in relation to educational delivery. At the University of Ghana, there was already a very strong online teaching offer (ODL) for the discipline of education. This meant that staff and students were already used to using a learning management system (LMS) called Sakai as the educational context, and had been for over a decade. ODL at the University of Ghana is largely intended for Ghanaian students, and aligned in many ways with teaching methods in the on-campus offer. The Sakai support unit was based in their department. However, for the on-campus offer in the rest of the university there was no use of an LMS, and there was very little experience of one. In the context of the pandemic and the shift to remote education for everyone, the rest of the university started using the same platform that had already been in use, but previously only in one area. As a consequence of this shift, Sakai became the university platform, and the support unit is now based in the main campus Balm Library.

At the University of Sussex, conversely, an LMS called Canvas was already used throughout the university to support the on-campus offer. There was also a pre-pandemic online distance learning (ODL) offer as part of the overall educational offer, which also used Canvas. ODL at Sussex is very distinct from the in-person campus offer; it is delivered to students in other parts of the world and is largely nonsynchronous. However, the on-campus offer, which was the main educational offer, was already mediated through Canvas as standard, and LMS use had been in play for decades at this point. In the pandemic, the same platform that everyone was already using was used more intensely, and in a remote mode for all teaching. This was often a synchronous offer that replaced timetabled teaching with online sessions. Although there was some unevenness of experience and expertise, and additional platforms were also brought to bear, this story was more about intensification of the use of the LMS than a wholesale change of practice.

Therefore, there was a contrast between the Sussex and Ghana experiences of pandemic technology adoption for education. At the University of Ghana, a smaller group of staff and students with specific expertise saw the previously ODL-only platform adopted wholesale by the rest of the institution. At the University of Sussex, there was wholesale intensification of an existing platform and set of technologies already in widespread use across the offer.

Building on this contrast between the wholesale adoption of the Sakai platform at the University of Ghana and the intensification of existing platforms at the University of Sussex, there was also an added implication for the colleagues at the University of Ghana with existing expertise in the Sakai platform.

The distinction is that at the University of Ghana, the rapid, wholesale adoption of the platform placed an immense and sudden workload on the smaller group of staff and IT professionals with the existing expertise. They instantly transitioned from being expert users and platform managers to becoming the institution’s primary, often overburdened, trainers, first-line support staff, and technical consultants for the entire faculty and student body. Their expertise became a critical, non-negotiable bottleneck to the institution’s operational continuity.

Conversely, at the University of Sussex, where technologies were already in widespread use, the intensification meant (at least in theory) the existing experts could focus more on scaling, optimizing, and providing advanced pedagogical support, rather than on fundamental adoption and crisis-level initial training. However, in practice, the pandemic revealed varying levels of expertise with the existing system, despite its ubiquity, and there was also an uneven impact, with some patterns of overburdening and bottlenecks of expertise.

In 2024 the University of Ghana celebrated the 10th Anniversary of the Sakai Learning Management System (LMS) with a two-day blended conference. Professor Yaw Oheneba-Sakyi pioneered the introduction of Sakai at the university. This Learning Management System proved to be a lifeline for the entire university community—including teaching staff and students—during the COVID-19 era, leading to its wholesale adoption by everyone.

One common theme ran through the presenters’ submissions and speeches at the conference: that the University should embrace and integrate new technologies in its teaching and learning. Prof. Oheneba-Sakyi detailed the system’s success in advancing the University’s academic mission, specifically citing the establishment of the MA in Distance Education and E-Learning, its successful integration into PhD teaching, and the global leadership roles attained by its alumni. He outlined a strategic path forward focused on expanding mobile access, responsibly adopting artificial intelligence and deepening collaboration.

At the University of Sussex, Canvas was introduced as a VLE in 2018, replacing a previous system (StudyDirect). It was introduced and supported by the Technology Enhanced Learning group. Support for Canvas is delivered by the Educational Enhancement team and is reviewed in an ongoing way, with rolling workshops and resources. There are templates for use, and resources for good practice. When it is used for nonsynchronous ODL, there is a different approach to the creation of interactive learning materials, supported by a partner specialist (currently Boundless). Good practice in the use of Canvas is celebrated in teaching and learning workshops and conferences at Sussex, but the platform itself hasn’t been the focus of a university conference. 

Post AI (student and staff discussions and practices after AI)

In 2024 the University of Ghana updated its plagiarism policy to include the use of AI in its approach to misconduct. However, there has also been a paradigm shift in how AI is viewed in recent times: from AI as a tool for cheating to AI as a tool for efficient knowledge production and an aid to both staff and students in teaching and learning. A recent study with 17 PhD students in the University of Ghana’s College of Education explored the use of generative AI tools in their work. The students explored several generative AI tools in various capacities.

Findings from this demonstrate that students used a variety of GenAI tools across their academic tasks:

  • For brainstorming and generating initial written content, ChatGPT and Copilot were adopted.
  • For language improvement and editing academic writing, Grammarly and QuillBot came in handy.
  • For clearer visual aids and presentations, DALL·E was used.

The project also explored the pros and cons of using these GenAI tools in academic writing. There were positive impacts: increased academic confidence, improved time efficiency, more self-directed learning and improved digital literacy. On the other hand, there are emerging concerns about metacognitive laziness, diminished qualitative interpretation, risks to intellectual agency, overreliance and an ‘AI-holic’ phenomenon. The study concluded that GenAI can empower experiential learning only when integrated with pedagogical intent and ethical awareness.

At the University of Ghana, AI use has become normalised in everyday operations: for example, AI notetakers in remote meetings, and AI add-ons in the LMS and other IT systems. However, the University has yet to adopt formal principles or policy beyond the updated plagiarism policy. At the same time, the University is a centre for research into AI in schools, and in education more broadly. For example, the College of Education held a virtual panel discussion on the theme ‘Generative AI: African Perspectives on its Challenges and Prospects’ at the 2024 Day of Scientific Renaissance of Africa (DSRA).

At the University of Sussex, students and staff engaged with generative AI, but there was (and continues to be) a lot of uncertainty about the legitimacy of using AI. In the UK HE sector, a number of AI surveys, reports and policy notes have been (and continue to be) produced by actors such as the Higher Education Policy Institute (HEPI) and the Joint Information Systems Committee (JISC). A group of universities in the UK developed the Russell Group Principles, and many institutions have signed up to these.

At the University of Sussex, a Community of Practice (CoP AI) was initiated by the Educational Enhancement (EE) team. The first change to policy and guidance at Sussex was the amendment of the academic misconduct policy to include generative AI within the existing integrity concepts of plagiarism, personation and fabrication. Alongside this, guidance for staff and students and a number of case studies were shared. In 2024 there was university-wide engagement and an AI summit that culminated in the development of a set of institutional principles.

At Sussex, during the university-wide engagement, concerns and ideas were raised across integrity, the meaning of knowledge, automation, legitimacy and environmental impact. The principles themselves focused on seven key areas to:

  • build on Sussex’s world leading research in AI by investing in ongoing, interdisciplinary research on AI in education
  • develop strong digital capability and critical AI literacies for our students and staff
  • deepen ethical standards
  • protect our academic integrity and student experience
  • foreground accessibility and inclusion in our approach to the use of AI in education
  • safeguard our community against malicious or illegal use of AI
  • commit to clearly communicated and transparent governance

The last point, about transparency, has already become very difficult to deliver on, because the underpinning systems across the university now, increasingly, have AI functionality built in. This is not always visible, and software updates are included as an automatic default. For example, the lecture capture platform had an automated generative AI captioning function built into the latest upgrade. Our capacity as an institution to clearly communicate how and where AI is already in our systems, and to be transparent about the governance of this, is already significantly limited by the technical design.

Both universities are continuing to build up resources, guidance and training, both in-house and with external partners.

Conclusion

At the time of writing, the University of Sussex continues to build on the AI principles. This includes staff training and resources, and continuing to identify opportunities to support staff and students in developing capability and critical literacies. There has been a return to in-person, invigilated exams, and many subject areas are rethinking their approach to assessment post-AI, including pilots of proctoring software and lockdown browsers. Responding to the challenges and opportunities of these technological disruptions is ongoing work that has to be made and remade. The AI landscape continues to change rapidly, and understanding and access are uneven across the University. A broader set of principles applying to the operational and research dimensions of the University is also necessary, and is being led through the Digital and Data Task Force as part of the strategy, Sussex 2035.

The University of Ghana is developing an AI policy and is actively researching the impact of AI in education. For example, Dr Freda Osei Sefa from the College of Education is leading research on the use of AI in basic (primary and lower secondary) schools. Recently, Kwame Nkrumah University of Science and Technology (KNUST) introduced a compulsory AI module for all students, and this is in the pipeline for the University of Ghana. It is largely focused on careers and future job prospects, a concern shared at the University of Sussex.

It is clear that generative AI in particular, and AI more generally, is an important feature of higher education globally, and this plays out in both universities in different ways but with very strong common themes.

References

Acquah, R. (2024) ‘University of Ghana revises plagiarism policy to include AI’, MyJoyOnline, 26 February 2024. https://www.myjoyonline.com/university-of-ghana-revises-plagiarism-policy-to-include-ai/

Pryor, J. (2008) ‘Analysing a Rural Community’s Reception of ICT in Ghana’, in Van Slyke, C. (ed.) Information Communication Technologies: Concepts, Methodologies, Tools, and Applications.

Posted in Blog

Building students’ confidence to speak through interactive law teaching

A person wearing glasses, and with long, wavy, grey-and-brown hair is shown from the shoulders up, wearing a dark T shirt, against a plain background.
Fiona Clements (Law)

Fiona Clements is an Assistant Professor in LPS and teaches Equity and Trusts and Law of Succession to UG and MA Law students.  She is the academic lead for the Wills, Trusts and Estates clinic, offering pro-bono advice to the public on inheritance, trust and probate issues.

I think all students can do well in assessments if we give them the best opportunity to prepare for the type of assessment

1. What led you to use whiteboard exercises for teaching intestacy law?

If you’re thinking about the distribution of property on death, it’s useful to know what the shape of the family looks like. And drawing a family tree – and this happens even in practice – it’s not just an academic exercise. Drawing a family tree is really important because then you can see who the potential beneficiaries might be. It was a natural progression when we were thinking about the distribution of property to think about what the family tree might look like and have a go at drawing it out. And students didn’t bring with them a lot of experience of drawing a family tree. They might have done a family tree when they were at primary school and learning about the Tudors or whatever, but they hadn’t really had much opportunity or much need to draw one, and that’s really how it started.

2. How do you adapt your approach for students with different confidence levels?

I scaffold all my teaching, so I always come in with easy questions that get progressively harder, and I always encourage the students who I know are less confident to answer a question early on, before they start getting harder. I think I have a slight advantage in teaching Law of Succession because it’s a final-year module. And the students know me, because I taught them on a core module earlier in their studies. And we build up a really nice group dynamic, because we tend to be smaller groups and they’ve all chosen it. I think there are things that I do that help support students who lack confidence.

A lot of students support each other in a way that perhaps they might not earlier on in their education because they don’t know each other so well. But we have an opportunity to really become a cohesive group. And the students root for each other and say ‘oh, go on, you can do it.’ And if they’re writing something on the whiteboard and they miss something out, someone will say, ‘oh, have you thought about this, that and the other?’ in a non-confrontational, non-critical way.

3. What role do anxiety and mental health play in classroom participation?

That’s a really good question. Before we even get to classroom participation, I would say that with mental health and anxiety issues, the biggest barrier that I see them creating is actually stopping the students from coming to the sessions, because I think once the students have come into the session, there’s plenty that we can do to keep them engaged and feeling confident and well supported so that they’ll come back again. But I think it’s the fear of what might happen that is a very real sort of barrier.

Also, I think it’s helpful for us to have an understanding of what mental health and anxiety issues students might be experiencing, because having that understanding helps us to increase our empathy and adapt our ways of working, maybe choosing different language that’s a little bit more inclusive or more encouraging.

4. Why do confident students struggle with formal oracy assessments?

I think all students can do well in assessments if we give them the best opportunity to prepare for the type of assessment. Oracy assessments are assessments that they’ve not necessarily had much experience of taking before. I imagine students who’ve done a language might have had an oral as part of their language GCSE or whatever. And if languages aren’t their thing, then that may or may not have been a happy experience for them.

Even with lawyers, who as a job – particularly the barristers – are going to be standing on their feet and talking, I don’t think there are many opportunities for us to help the students prepare for that sort of assessment. But I do believe that if we give students the best opportunity we can to prepare for that type of assessment, there’s no reason why the confident and the less confident students shouldn’t be able to do well. I also think it’s the way that we frame the type of assessment for them: helping them to understand that we want them to do well, and that we’re giving them the best opportunity to do that.

An oral assessment that’s not a presentation gives them a really good opportunity to explain and demonstrate what they do know in their own words, because an oral assessment is more of a chat, where the person they’re talking to can give them prompts if they get stuck. This could be a really nice form of assessment for them to use, and also a bit different to just having to write essays.

5. How do you gauge the success of interactive teaching methods?

Feedback from the students. Verbal feedback from the students, attendance at the seminars – they talk with their feet. If they’re enjoying it, then they come along and they want to get involved. Having a large number of students actively involved in seminars, I always think, is a measure of success, because they’re feeling confident enough to want to get involved and want to chat things through.

We also do a formative assessment in weeks 4 and 9, so I can see how they’re getting on in terms of preparing for the exam and know that they’re on track with their learning. But mostly it’s feedback from the students, because they say they enjoy it. And the module recruits well, and I think that’s because students who’ve taken the module tell students in the year below that it was helpful and they enjoyed it – I like to think that’s the case.

6. Have you got any tips or advice for people who want to use this teaching method?

We know that students tend to feel anxious when we put them on the spot and ask them a question about their prepared learning, so anything that gives them an opportunity to get involved in a less threatening way has to be good – whether it’s writing on the whiteboard as a way of not having to look at their peers, or using Lego, or using mini whiteboards and then holding up their answers. Anything that takes away that pressure of putting a person on the spot, which is what students tell us they find so disconcerting. And it’s the prospect of being made to look foolish, I suppose, in front of their peers that discourages them from wanting to take part. And I think writing on the board works in my area.

But anything, any method that takes some of that pressure away is beneficial for students.  If they keep coming to the sessions, then that feels as though it’s a win before anything else happens.

Posted in Case Studies

Call for Participation: Sussex Education Festival 2026


Two people stand in conversation at the 2025 Sussex Education Festival, in front of a research poster titled “Understanding the student experience of Black, Asian and Minority Ethnic students on campus.”

It’s that time of year again!  

On Friday 8 May 2026, The Sussex Education Festival will return for its fourth year. The Festival provides a supportive and collaborative space to celebrate and share our experiences, research and reflections on teaching, learning and assessment here at Sussex.  

We encourage all colleagues involved in education (in any capacity) to consider submitting a proposal. We also welcome co-presenting with students and have some student participation vouchers available.

This year’s festival has three themes and a variety of ways you can participate. 

Themes 

The three themes for this year’s festival can be interpreted broadly. We’ve suggested some topics below to get you started, but please see these as suggestions rather than limitations: 

Education for Progressive Futures 

  • Interdisciplinary teaching and learning 
  • Reimagining the ways we teach in changing educational landscapes 
  • Equipping students to make a difference 
  • Lifelong learning, employability and citizenship 
  • Digital literacy and skills 

Impacting Student Experience 

  • Building engaged learning communities 
  • Encouraging and listening to student voice 
  • Belonging and community building for diverse student populations 
  • Student mental health and wellbeing  
  • Designing and implementing impactful scholarship projects 

Transforming Assessment at Sussex  

  • Authentic assessment and assessment for learning 
  • Involving students in curriculum and assessment design 
  • Responding to, and evaluating generative AI 
  • Inclusive and accessible assessment for all learners 
  • Supporting students to develop agency in assessments and feedback 

Presentation formats  

There are two presentation formats to choose from: 

1: Work-in-progress lightning talks (7 minutes) providing short reflections on current practice or projects. 

2: Longer presentations reflecting on outcomes of pedagogic developments, scholarship or research in your chosen area (15 minutes).  

No plans to present? Keep an eye on Learning Matters for details of our innovation showcase – there may be other ways for you to get involved in Sussex Education Festival 2026.

How to submit your proposal 

Please submit your proposal to our Call for Participation form.  

The deadline for submitting is 17:00 on Friday 27th February. 

You should receive a response from the Education Festival Steering Group by 20th March. 

If we haven’t convinced you to submit a proposal yet, here is some feedback from presenters in previous years: 

‘I wanted to write to say what a brilliant day I had on Friday! So many of the talks were inspiring, and the general atmosphere was great all day.’ 

‘It was a really rich and stimulating day of ideas and fascinating discussions. It has helped to spur several ideas of things to try to improve my own teaching.’ 

‘I just wanted to say how much I enjoyed the whole day – such a rich and fabulous programme, sparking so much fascinating discussion… and helped to launch a few new collaborations and friendships with colleagues across Sussex.’

Posted in Events

How can a process approach to assessment help address the impacts of AI?

by Dr Sarah Watson and Kamila Bateman, Academic Developers in Educational Enhancement

A process approach in teaching is not a new concept. It was first introduced by Stenhouse in 1975 as an alternative to the product model. It concentrates on teacher activities, learner activities and the conditions in which learning takes place. In focusing on the nature of learning experiences, rather than specific learning outcomes, the process model emphasises means rather than ends.

Although it predates the age of artificial intelligence, can Stenhouse’s approach offer a fresh perspective on AI in education?

This post highlights how academics at the University of Sussex and beyond are adopting process-oriented approaches to assessment, not only to reduce over-reliance on AI, but, more importantly, to strengthen the pedagogical purpose and value of assessment for students.

Shifting from ‘product’ to ‘process’

Shifting the focus from product to process allows us to foreground thinking, development, and learning. It also provides room to explore how AI might support students during this process, rather than replace it.

One effective approach is embedding writing practice directly into teaching. At Sussex Law School, Verona Ní Drisceoil (2023) describes dedicating ten minutes of each seminar to structured writing activities. This helped students develop their writing skills incrementally and prepare for end-of-term essays. Student feedback was overwhelmingly positive, with many reporting that the practice demystified academic writing.

Teaching on the foundation year at the University of Sussex, Sue Robbins (2023) similarly argues that when students understand the writing process, the perceived threat of generative AI diminishes significantly. Embedding academic skills into teaching therefore acts as a powerful deterrent to academic misconduct. As Robbins notes, our choices in response to AI are to avoid it, outrun it, or adapt to it. Given the rapid development of generative AI, we have a responsibility to support students in learning how to use these tools responsibly, both during their studies and beyond.

Another productive strategy is encouraging students to treat AI as a writing coach rather than a content generator. Alicja Syska (2025) suggests using AI as a tutor that prompts critical thinking and supports students in producing their best work, without doing the work for them. She advocates collaborative writing in the classroom and rethinking assessment criteria to emphasise original thinking, writing development, and opportunities for peer review.

Fostering engagement with the learning and assessment process

One approach to helping students engage with process is to design marking rubrics that explicitly reward idea development and provide opportunities for peer review. Bianca A. Simonsmeier et al. (2020) note that such approaches support active, self-directed learning and encourage social interaction and reciprocal teaching, whether through online discussion forums or structured peer assessment.

We can also diversify assessment formats beyond the traditional essay. Introducing reflective components, small-group critical evaluation, collaborative planning, or playful elements can increase engagement and ownership. Denise Wilkinson (2024) suggests using “flipped assignment” techniques, interactive engagement tasks, and collaborative reflection to help students feel more invested in their work. This emphasis on ownership is echoed by Helen Foster (2024), who highlights the role of formative assessment in supporting self-regulated learning and creating more inclusive learning environments.

Another opportunity lies in building on what students already know about AI and how they use it. Tim Requarth (2025) advocates assignment-specific guidance that supports a balanced approach to AI use, neither punitive nor overly permissive.

In the Economics department at the University of Sussex, Gabriella Cagliesi (2025) and Carol Alexander (2025) have taken this further by developing customised ChatGPT tools that store module content and are fully integrated into the learning process. These tools function as trusted study aids, enabling students to engage critically with course material.

Cagliesi and Alexander found that their custom GPTs allowed students to explore, question, and critique content outside of class, creating more space during teaching sessions for relationship-building, personalised support, and meaningful discussion.
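To make this concrete, the sketch below shows one minimal way a module-grounded study aid of this kind could be wired up in Python. It is not the Economics team’s implementation – their tools are custom GPTs configured inside ChatGPT rather than bespoke code – and the model name, file layout and prompt wording here are all illustrative assumptions.

from pathlib import Path

from openai import OpenAI  # assumes the official openai package is installed

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical store of module content: plain-text lecture notes and readings.
MODULE_NOTES = "\n\n".join(
    p.read_text() for p in sorted(Path("module_notes").glob("*.txt"))
)

# The system prompt grounds the model in the stored module content and
# frames it as a study aid rather than a content generator.
SYSTEM_PROMPT = (
    "You are a study aid for a university module. Answer only from the "
    "module content below. Prompt the student to think critically, and do "
    "not write their assignments for them.\n\n" + MODULE_NOTES
)

def ask(question: str) -> str:
    """Send one student question, grounded in the stored module content."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-completions model would do
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Example use: a student asks to be quizzed on the week's material.
if __name__ == "__main__":
    print(ask("Can you quiz me on this week's reading?"))

The design point this illustrates is simply that the module content, rather than the open web, becomes the model’s frame of reference, which is what lets such a tool act as a trusted study aid.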

The rise of generative AI reinforces something we often overlook: meaningful learning happens through human connection and collaboration. As Syska suggests, thoughtfully integrating AI may allow us to reclaim time and space for deeper engagement with learning, and for valuing what we bring as human educators and learners in a digitally dominant world.

Posted in Blog

The Scholarship Journey: Cultivating Scholarship Through Growth and Connection during Scholarship Leave

A woman (Jo Tregenza) with shoulder-length brown hair wearing a black-and-white checkered shirt, standing in front of a light wooden panel background

Jo Tregenza is a Reader in Primary Education at the University of Sussex and a senior leader with a long history of working in higher education, with expertise in primary education, teaching English, e-learning, lecturing and teaching. She is a Senior Fellow of the Higher Education Academy, a Founding Fellow of the Chartered College of Teaching, and President of the United Kingdom Literacy Association (UKLA). She has received several teaching awards, including an Innovative Teaching Award for the whole ITE team (2018), a USSU Teaching Award, and an Outstanding and Innovative Postgraduate Teaching Award.

What inspired you to frame scholarship as a journey and use the growth metaphor throughout the presentation?

So I just can’t get away from using plants and nature. I think it’s partly because something that’s really in my mind is the organic process. So I’m never terribly strategic about it; it sort of grows and develops. I first became a teacher because I wanted to teach children how to see daffodils, and that sort of theme has threaded through.

So then when I started teaching students, I gave them a seed or a bulb every year, and I’ve planted one for every one of the students I’ve ever trained. I’ve got a lot of bulbs in the garden now – it’s very pretty in spring.

And this just keeps coming back. The model I’m coming to with my PhD is of teaching reading, which comes from a tree approach, where it starts with the roots and soil and grows, so I just think it’s how my brain works.

And I want it to be seen as a cycle, almost. It keeps going, there’s that cyclical approach to it, but it’s also the nourishing, I think. The soil, the sun, the water, everything that comes in to nourish it.

How did you decide which personal experiences to include and what impact do you hope sharing them will have on your audience?

I was trying to aim at the people that I thought maybe were on scholarship and thought they would never, ever get promotion or anything. And so showing that you can, because, I mean, as I say, I’ve got to this stage without a PhD. Hopefully I’ll have one soon, but I wanted to make people feel that there was a route within the university. It might take a long time, and in my case it has been a very long time, but I think the things that have happened more recently are making quite big, substantial differences.

So you can see this sort of slow journey in what I put, and the key difference was putting in place the policy that we had access to study leave – that’s a massive change for us. But I’m well aware it’s not going to be consistent across the university, and given we’re creating faculties, the hope that we could have that sort of thing consistently for everybody would be really good. So it was like a key message.

What impact has this had on student learning, curriculum design, or academic practice more broadly?

So it turned out that we were changing all our modules anyway. My English module was merging with maths. So I wasn’t going to rewrite everything, because I thought, oh, I’ll just merge it with maths, that’d be fine. But because of the work I’d done, and the research I’ve been looking at in reading particularly, and the connections I made, I ended up completely rewriting everything – which I’m regretting today, because it’s an awful lot of work to do.

I think I was always aware of what was going on outside, but now I’m deeply involved with what’s going on in the curriculum and in other pockets. Because of my study leave I’ve become very involved in the anti-racism work, very involved in play, and very involved in AI. So all of those I’ve embedded completely into what we’ve been teaching, and the disadvantage strand has become a theme right through the modules as well. So it has been completely embedded in everything, and it has affected curriculum design and what we’re teaching the students – and they’re loving it.

They’re buzzing, which is really nice. You know, I keep meeting them and they want me to teach them every day, and it’s lovely. So they’re really liking that it’s very current but also very embedded in practice.

And in terms of academic practice more broadly, that’s where the networks come in, because I’ve been feeding all of that into a network of special interest groups of ITE providers across the country. So I’ve already led one session with them, and because of the connections I made, I’m bringing in speakers. So I’m trying to affect, for example, the way universities teach writing for primary schools, and I’m doing that by bringing people in from Ireland and just trying to shape things a little bit.

You emphasized collaboration networks. What strategies work best for building those connections during your scholarship leave?

So LinkedIn was life-changing for me. I had it and didn’t really use it; I thought it was just a bit of a pointless thing. My daughter started nagging me, saying, mum, you have to change your LinkedIn profile, and she started to lecture me on how to use it, and I had to listen in the end. And it really did change things, which meant she was right – very annoying.

I’d started to really build those connections far more succinctly, and LinkedIn led to all the connections in play, all the connections with anti-racism – all of that came from that. I’ve now met with the CEO of First News – this week, in fact – as a result of that. So all of those have really built.

I had the advantage of having a few very strong networks anyway – the UKLA, for example – but I was able to put more time into them, so I set aside time in my study leave to plan the conference. I couldn’t have done that without having study leave. I did it the previous year without study leave, but I ran the conference here, and that was manageable – not in Liverpool, though. So yeah, LinkedIn was the main thing, and then just nurturing those connections was probably the most important.

Looking back, which part of your plan was most challenging to achieve, and what would you do differently next time?

Oh, you know it. It’s the National Teaching Fellowship – I still haven’t done it. I’d arranged with Claire from the medical department, who’s going to help me and have a look at it, and I’ve got everything arranged, but I just didn’t have time. So I set myself too many targets in that respect, and something had to give, and it was that.

It’s a huge amount of work, and that, I suppose, is what I would have done differently: I would have blocked out maybe a month of time that was purely for that. I think that’s the way to do it, to block the time, but something else would have had to give, and it would have been the anti-racism work, because that sort of grew, and I knew it was an indulgent passion in a way, but I felt that it really needed to be done. And it’s paid off, because that is now going to be something national, so it’s important. We have to do it. It’s organic, as I say – it’s growing into a kind of branch for the moment.

What do you see as the long-term impact of your scholarship work—on your own career, your institution, and the wider academic community?

It would be nice to be able to apply for a professorship – I do feel I’ve got more than enough evidence for that now, so I’m hoping that would work. But the wider thing for me is that I’ve already been invited to Dublin as a guest at their Celebration conference, and to Norway and Slovenia, and they want me to present my work.

The work I’m doing with my PhD is going to be significant – I know it is, even though it’s only three schools. It’s challenging what’s happening in the world at the moment. Everyone’s saying there’s a problem with reading, everyone’s saying we’ve got so many disadvantaged children, and our government’s answer is more phonics tests. What I’ve got is challenging that, and so I’m really poised.

Hopefully now I’ve got enough that I can get small papers out. I’m going to present in Cyprus, hopefully, in February, and in Slovenia and Glasgow in July, so I’m getting things out. And I was able to present to the DfE a little while ago, so it pushed me ahead of my PhD timeline, because I knew I needed to get the work out in front of the policy. So yes, that’ll be the plan.

Posted in Case Studies

Conversations on teaching for community and belonging: Our blog collection

Dr Emily Danvers

Conversations around teaching rarely happen beyond formal training opportunities that often take place early on in our careers. After this, and especially during term-time, space for talking in our busy working lives is often limited. Incidental corridor chats are seen to generate a collaborative and positive working culture and, for some, were mourned during the pandemic and its aftermath. Indeed, post-COVID (or maybe this was always the case?), we rarely have time to talk to our colleagues about anything at all. When it comes to teaching, who do we approach when things go well, or not so well? Who is talking? Who is listening? And who cares? This was the premise for our project, drawing colleagues together across the then newly formed Faculty of Social Sciences to talk about how we facilitate conversations about teaching, and our wider working lives, to enhance a sense of community and belonging for staff.

The conversation theme was inspired by Jarvis and Clark (2020), whose work emphasises how the informality of a conversation about teaching flattens power relations and allows people to make meaning together without the intensity of an agenda or outcome. They position this work in contrast to formal teaching observations, with their traces of surveillance, performance and measurement. Too often we rely on these individualised encounters to ‘develop’ teachers when, in fact, authentic conversations might more meaningfully transform teaching, as colleagues hear something, share together and are inspired by each other in the everyday. This reflects Zeldin (1998, p.14), who notes that: ‘when minds meet they don’t just exchange facts; they transform them, draw implications from them, engage in new trains of thought. A conversation doesn’t just reshuffle the cards; it creates new cards.’

With the provocation to encourage transformation through talking together, Emily, Suda and Verona set up a launch event for the whole faculty to generate some initial conversations about teaching. The topics that emerged from this focused on the following questions.

  1. Why is community and belonging important for diverse academic flourishing?
  2. How and where is community and belonging created and developed?
  3. How might the labour of community and belonging work become visible, valued and rewarded?

The 12 colleagues who attended were afterwards put in cross-faculty threes and connected by email, with suggestions that they meet again to continue these conversations. As project leads, we were deliberately hands-off at this point, as the purpose of this project is to see if and how these conversations form organically.

A couple of months later, five of us met to blog together for the day about our responses to the questions, along with other themes that came up along the way. What we share in this blog collection is the story of our collaborative conversations.

In Jeanette and Fiona’s blog, they talk about what we can learn from student collaborations, which are often and rightly prioritised in the work of higher education.

In Suda and May’s blog they write about the value of time and space to slow down the academic pace and to generate community.

In Emily’s blog, she talks about the joy and challenges of teaching across different disciplines and how collaborations are structurally challenging.

What we learnt from this project concerns the ethics and timeliness of the conversation format as a collegiate response to the complex and evolving challenges facing the sector, our students, and ourselves as teachers. We all relished time to talk and think about the uncertain, the tricky, the everyday, the thorny, the unequal, the caring and uncaring practices – all of this important ‘stuff’ that sustains us as teachers but has no space in our working lives. We also talked not only about teaching but about other collaborations that we value.

Our recommendation is that teaching (and other) collaborations should be exploratory and conversational, rather than only a tool for appraisal. What we are seeking is regular, open and meaningful dialogue about teaching and academic working lives that is not ‘done to academics at the behest of institutional leaders’ but takes place in conversations ‘with or among colleagues, characterized by mutual respect, reciprocity, and the sharing of values and practices’ (Pleschová et al., 2021, p.201).

References

Jarvis, J., & Clark, K. (2020). Conversations to Change Teaching. (1st ed.) (Critical Practice in Higher Education). Critical Publishing Ltd.

Zeldin, T. (1998). Conversational Leadership. https://conversational-leadership.net/ [Accessed 15.07.2025].

Pleschová, G., Roxå, T., Thomson, K. E., & Felten, P. (2021). Conversations that make meaningful change in teaching, teachers, and academic development. International Journal for Academic Development, 26(3), 201–209.

Getting the ‘social’ into Social Sciences: how can we learn from LPS student initiatives to build cross-faculty relationships?

Jeanette Ashton and Fiona Clements

The broader context

When the new faculty structure at Sussex was first mentioned, and then discussed further at university-wide forums and School and Department meetings, our reaction was perhaps similar to that of many others. Whilst ‘indifferent’ might be too strong, our thinking was that this was a decision taken at university leadership level which probably wouldn’t change much on the ground, aside from a potential pooling of resources in a challenging higher education climate. We felt that any changes we needed to make would filter down in time through our Head of Department, but that, in short, it would remain ‘business as usual’ for the Law School. The ‘Conversations on Teaching for Community and Belonging’ initiative by Emily Danvers gave us an opportunity to explore how the new faculty structure might enable us to develop supportive relationships with colleagues outside of our department, what that might look like, and how it might help us navigate challenges going forward, post the voluntary leavers scheme.

In the last few years of her role as Deputy Director of Student Experience for LPS, Fiona developed a number of initiatives which were well received by the student body. In this piece, we consider how we might draw from those initiatives to develop a faculty-wide space, but with a staff rather than student focus. That is not to say that building faculty-wide student relationships is not important, but that, as Ní Drisceoil (2025) discusses in her critique of what student ‘belonging’ means and who does that work, staff community and belonging often take a backseat. We conclude with some thoughts as to how we might move forward.

Community and belonging sessions for students in LPS – what did we do and why did we do it?

For the last couple of years in LPS, we have hosted a weekly breakfast or lunch event for students. We know that many students find that the peers they share their first-year accommodation with end up being some of their closest friends, both at university and beyond. Friendships are also forged at departmental level, but, in addition, we wanted to give the students an opportunity to come together informally and meet students from other departments in the school.

The get-together would happen in the same room, the student common room, at the same time each week. As the Law School runs a two-week timetable, different Law students would be available to attend depending on whether it was an even or an odd week. There was a small group of students from each department who would come every week, but we also had new faces at every get-together – students who had heard about the event, or students who just happened to be in the common room when the event was happening.

When we asked the students about their motivation for attending, they gave a variety of reasons. It was interesting to note that some of the students came to the event with the intention of seeking advice (perhaps about managing workload, or how to tackle their reading, etc.). The students found that having a casual conversation with a member of faculty whilst sharing some food was a preferable option to pursuing the more formal route of booking an office hour with an academic advisor whom they may know less well.

LPS Staff events: what can we learn?

In thinking about faculty-wide initiatives, it’s important to consider what is already happening in schools and departments and what we might learn from that. In LPS, we have online school forums, which are well attended by both academic and professional services staff and are a useful way to catch up on what’s happening at School level. In terms of more socially oriented events, we have regular ‘Coffee and Cake’ sessions, which are not well attended. Without undertaking a survey, we can’t provide reasons for this, but it may be that this being a Head of School initiative, booked into our calendars, gives the impression that this is a space where we might be able to socialise, but not share concerns and ask ‘silly’ questions. Time is of course also a factor, with events such as these falling down the priority list as we juggle competing responsibilities. An open-plan office space for professional services staff is perhaps more conducive to those conversations than the academic offices, so it could be that a faculty-wide space for academic staff would provide an opportunity to have those conversations, with the benefit of perspectives on what happens elsewhere.

What might be possible in the new faculty? Some ideas:

  • Twice-monthly scheduled spaces at different times on different days, starting on the half hour, to maximise faculty availability
  • A small cross-faculty team to rotate the ‘host’ role and spread the word in the different departments – as discussed above, a friendly facilitator was pivotal to the success of the LPS student initiatives
  • Keep organisation minimal, and be clear that this is not a leadership initiative
  • Clear comms on the purpose of the space: to drop in, meet people, share ideas and concerns, and ask questions in an informal space without needing to schedule a meeting
  • No need for food! No need for themes!
  • Don’t be discouraged if no one turns up. These things take time.

References and further reading

Ní Drisceoil, V. (2025) ‘Critiquing commitments to community and belonging in today’s law school: who does the labour?’, The Law Teacher. DOI: 10.1080/03069400.2025.2492444

For more on community and belonging for students see Moore, I. and Ní Drisceoil, V. ‘Wellbeing and transition to law school: the complexities of confidence, community, and belonging’ in Jones, E.  and Strevens, C. (Eds.) Wellbeing and Transitions in Law: Legal Education and the Legal Profession (Palgrave Macmillan 2023).

Slowing Down the Hamster Wheel: Space to Reflect and Create Communities

May Nasrawy and Suda Perera

A cartoon hamster runs on a wheel labeled with academic pressures like “Publish,” “Teach,” and “Grants,” surrounded by the word “STRESS!” and thought bubbles promoting reflection and community.

Since we’ve been teaching at Sussex, it’s felt like we’ve been in a state of perma-crisis: strikes, pandemics and financial losses have all contributed to a sense that we are on the brink of imminent disaster and need to react quickly to avert an impending collapse. In this context, a lot of pressure is put on us as individual academics to do more and more with less and less. We need to teach more students and give them extra support even though there are fewer resources. We need to bring in more grants even though funding sources are shrinking, and publish more research in an increasingly narrow field of “world leading” journals. Failure to do this, we are told, is an existential threat to the University and could result in more job losses, including our own. This sense of existential dread has meant that many of us feel like hamsters in a wheel – desperately scrambling from one task to the next in an attempt to just keep going, hoping that eventually things will calm down. But the calm never seems to come, and in this highly individualised and reactionary wheel of toxic productivity, we seem to have lost a sense of community and belonging. In this blog we consider: where in this endless cycle of work and crises is there space to think and reflect on why we’re doing this, both as individuals and as a community? How can we break the vicious cycle of individualism and reaction and instead foster an environment where there is space to think and reflect in a collective and collaborative way, to build the kind of University that we want?

By participating in the Conversations on Teaching for Community and Belonging, we have come to realise that there is a community of like-minded staff members who feel similarly, and that the answer to these questions begins with time and space. Time to step away from the hamster wheel of toxic productivity. Space to reflect on our individual identities and sense of purpose. Space to support and be supported by our colleagues. And, from that space, to foster a wider sense of community and belonging. This space requires us to have protected and meaningful time to just think and discuss with each other these bigger-picture and wider issues, which are not easily captured in bureaucratic processes. So much of the day-to-day running of the university relies on labours of caring and collegiality, and yet so much of this labour is hidden and not celebrated or even spoken about. We don’t want these spaces to be just one-off lip-service events or individualised awards, but rather collective spaces to talk through issues and share experiences with no expectation of a measurable output. By setting aside time for reflection, we argue, we can move away from this feeling of constant reaction to immediate crises.

In the short time that we’ve had to engage in conversations with one another in this small project, we have been able to learn about what colleagues across faculties are doing in their teaching and research, and also to share experiences that point to issues of both concern and hope. We have been able to foster a sense of openness precisely because there is no sense that we are in competition for some sort of reward at the end, or that we have to produce something to demonstrate “value for money”. While we appreciate that much of what we do on the hamster wheel of productivity is part of the job, we argue that it shouldn’t take up all the space, and should not move us away from other essential elements of our practice that require us to slow down to reflect, learn, collaborate, feel, care, read and think.

Cross-faculty teaching: favour culture vs collaboration

Emily Danvers

Across the Faculty of Social Sciences, many of us share research interests, professional expertise and academic knowledge that shape the topics we teach. When we met to collaborate on this project, for example, we found most of us teach about issues related to education, social justice and globalisation. Yet we rarely teach outside the confines of our disciplines and departments. And where we do, it is through favours and friendships rather than anything structurally organised. Our compartmentalised teaching arrangements often produce a culture that can work against collaboration.

A couple of years ago, Jeanette and I met through a shared interest in education for those of Gypsy, Roma and Traveller heritages. This was an area I research, and Jeanette had just started a community legal education initiative, Street Law, in partnership with the community organisation Friends, Families and Travellers. She asked me whether I’d talk to her students about teaching. Of course, I’d love to. I liked her. I liked the project. It was my area of expertise. Why not?

I’ve since done this a couple of times and get a huge amount from teaching Law students who I would normally never get to meet. Thinking about their contexts, disciplines and experiences, and translating my pedagogical knowledge for them, is also a useful exercise in understanding what, and why, I prioritise as an educator. But it isn’t in my workload. I don’t have to do it, and am not directly ‘rewarded’, in the very narrow sense of my own time. Jeanette confesses in our collaboration project that she feels guilty about asking me. But we work in the same university and now in the same faculty. Why shouldn’t we teach across these artificial academic boundaries?

This raises questions about how much of academic life might be sustained by these sorts of favours. It reminds me of the complicated emotions of gift-giving, where the giver bestows something surrounded by norms of exchange and appreciation. On the one hand, forging positive relationships and having reciprocal practices of care are important ways to navigate academic work and its pressures (Frossard and Jeursen, 2019). Doing this cross-faculty teaching was joyful and enriching – a ‘gift’ to me as well as Jeanette. Also, an academy where we only did what was in our job description would surely fall apart!

But, on the other hand, these practices lead to under-recognition of labour or overwork. Academia has long been organised into silos – whether departments or modules – producing a sort of bento-box style organisation rather than a rich, interdisciplinary tasty stew. It is only when trying to foster collaboration through teaching across departments that we notice how the structures and cultures produce or preclude the kinds of interdisciplinary work we may find personally enriching.

In reflecting on this experience, what becomes clear is the tension between the joy and enrichment of interdisciplinary collaboration and the structural barriers that make such collaboration exceptional rather than expected. While cross-faculty teaching can feel like a ‘gift’—personally fulfilling and intellectually stimulating—it also reveals the fragility of a system that relies on goodwill rather than institutional support. When collaboration is sustained through favours rather than formal recognition, it risks becoming invisible labour, disproportionately carried by those with the capacity or inclination to give more than is required. If we want to foster truly interdisciplinary, socially engaged teaching that reflects our shared academic interests and values, we need to rethink how work is recognised, rewarded, and organised. Moving beyond the bento-box model of academic life will mean embracing new structures that not only encourage, but also sustain, collaboration across boundaries.

Frossard, C., & Jeursen, T. (2019). Friends and Favours: Friendship as Care at the ‘More-Than-Neoliberal’ University. Etnofoor, 31(1), 113–126. https://www.jstor.org/stable/26727103

Posted in Uncategorised

About this blog

Learning Matters provides a space for multiple and diverse forms of writing about teaching and learning at Sussex. We welcome contributions from staff as well as external collaborators. All submissions are assigned to a reviewer who will get in touch to discuss next steps. Find out more on our About page.

Please note that blog posts reflect the information and perspectives at the time of publication.
