Episode 13: Generative AI and Higher Education

Our guests: Dr Paul Robert Gilbert and Dr Jacqueline De Beaudrap, with producer Simon Overton and hosts Dr Heather Taylor and Prof Wendy Garnham.

The Learning Matters Podcast captures insights into, experiences of, and conversations around education at the University of Sussex. The podcast is hosted by Prof Wendy Garnham and Dr Heather Taylor. It is recorded monthly, and each month is centred around a particular theme. The theme of our thirteenth episode is ‘generative AI and higher education’ and we hear from Dr Paul Robert Gilbert (Reader in Development, Justice and Inequality) and Dr Jacqueline De Beaudrap (Assistant Professor in Theoretical Computer Science). 

Recording

Listen to episode 13 on Spotify

Transcript

Wendy Garnham: Welcome to the Learning Matters podcast from the University of Sussex, where we capture insights, experiences, and conversations around education at our institution and beyond. Our theme for this episode is Generative AI and Higher Education. Our guests are Dr Paul Gilbert, Reader in Development, Justice, and Inequality in Anthropology and Dr Jacqueline De Beaudrap, Assistant Professor in Theoretical Computer Science in Informatics. Our names are Wendy Garnham and Heather Taylor and we are your presenters today. Welcome everyone. 

All:  Hello. 

Heather Taylor: Paul, could you start by telling us a little about what you teach and how you see generative AI changing the way students approach your teaching? 

Paul Gilbert: So, I’m based in Anthropology, but I actually primarily teach in International Development. And my teaching is divided into three areas, which sort of reflects my research interests. I teach a second-year module on Critical Approaches to Development Economics. I teach a third-year module called Education, Justice, and Liberation, and I’m the co-convener with Will Locke of the new BA Climate Justice, Sustainability and Development. I also do some teaching from first year on Climate Justice. So, honestly, in terms of how students are approaching the teaching, apart from a couple of cases where I suspect there may have been AI involvement in some assignments, I haven’t noticed a huge difference.

What is new, and maybe why I haven’t noticed that difference, is that, like a lot of my colleagues in International Development, we started engaging head-on with AI and integrating it into what we teach about and how we talk to our students. For example, my second-year development economics course is deliberately structured around global South perspectives on development economics and thinking that isn’t dominant. Economics is odd among the Social Sciences. There’s a lot of research on this. Sociologists like Marion Fourcade have looked at the relative closure of economics, the hierarchy of citations, its isolation from other disciplines, and the concentration of Euro-American scholarship. Professor Andy McKay in the Business School has also done work on the underrepresentation of African scholars in studies of African economic development, and on the tendency for scholars from the global South to be rejected disproportionately from global North journals. And so the reason that matters for thinking about teaching and AI is because AI, so large language models. Right? ChatGPT and Claude and everything, they’re basically fancy predictive text. Right? Emily Bender calls them synthetic text extruders. Right? You put some goop in and it sprays something out that sometimes looks like sensible language.

But it does that based on its training corpus, and its training corpus is scraped from somewhere on the Internet. And it also has various likelihood functions within the large language model that make sure the most probable next word in the sentence comes, right, so that it seems sensible. And what that does is reproduce the most probable answer to an economic question, which is the most dominant one, which happens to be not the only one, but one from a very, very narrow school of thought that has come to dominate economics and popular economics and so on. And so all the kind of minoritised perspectives, the ones that don’t make it into these, like, extremely hierarchically structured top-tier journals, the ones that aren’t produced by Euro-American scholars, you’re not going to get AI answering questions about them. And if you use it to do literature searches, it’s not going to tell you about them.
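Paul’s description of large language models as fancy predictive text, with likelihood functions picking the most probable next word, can be illustrated with a deliberately tiny sketch. This is not how production LLMs work (they use neural networks over huge token corpora, not word-pair counts), but the underlying dynamic is the same: whatever continuation dominates the training data is the one that gets emitted. The corpus below is a made-up toy example.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then always emit the most frequent continuation. Real LLMs are neural
# networks over subword tokens, but the principle carries over: the output
# is whatever was most probable in the training data.
corpus = (
    "growth is good growth is necessary growth is good "
    "degrowth is possible"
).split()

next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def most_likely_next(word):
    """Return the single most probable next word seen in training."""
    return next_counts[word].most_common(1)[0][0]

print(most_likely_next("is"))      # "good" wins: it appeared twice
print(most_likely_next("growth"))  # "is"
```

Note that the minority phrase (“degrowth is possible”) exists in the corpus, but always taking the most probable next word means it never surfaces: a miniature version of the dynamic Paul describes, where less-represented perspectives become invisible.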

So, that module is built around those perspectives. And so we kind of integrate that into the teaching as a way to highlight to students that large language models function in a way that effectively perpetuates epistemicide. Right? By making sure that it reproduces the most likely set of ideas, and the most likely set is always the dominant set, right, from whoever is most networked and most online, then perspectives that have already been disproportionately not listened to get less and less visible. So structuring the course around precisely those not-so-visible but really important and really consequential approaches to economic development forces students not only to not use it to do their literature searches, but to think about the politics of AI knowledge production.

Heather Taylor (04:57): That’s fantastic. I mean, we both teach in psychology, and we’ve got a problem in psychology where it’s all very Western dominated. You know, we largely use the DSM-5, which is American, as, like, the manual for diagnosing, you know, psychiatric disorders and so on. I teach clinical mainly, but, you know, generally there’s a Western dominance. And that would actually be a really interesting way to get them to think about where these other perspectives are, and how, like you’re saying, there’s sort of a moral question in using ChatGPT, because it’s regurgitating stuff that’s overshadowing, you know, other important voices that aren’t being heard.

Paul Gilbert: Yeah. And it’s also an opportunity to get students to think more about data infrastructures. Right? Because whether you use ChatGPT or Claude or Llama or whatever or not, I think we all could do with understanding a bit better how search works, how data infrastructures are put together, where things are findable or not. Right?

So centring that means there’s a chance to reflect on that. And it’s not, you know, I’m not a big fan of AI for various reasons. We might go into it later. But rather than just standing up and going, this is awful, you can say, look, here are all these perspectives that are really important. We’re going to get to grips with them in this course, and this is not what you will get if you ask ChatGPT for an answer. And it’s also part of a way of reminding students that they are smarter than ChatGPT, right? And I think that’s really important because there is some research, I think by some people at Monash University in Australia and elsewhere, looking at the kind of demoralising effects on people, who think, what’s the point of even trying? Right?

I can do this and it can spew out something as good as what I can do. And it can’t. Right? And it can’t if you structure your questions right and if you get your students to think about its limitations. I don’t want to talk too much, but just one thing that’s worth saying, and Simon knows about this as well because he did some videos for us: one of the ways that I try and get this across to the students is that we have this big special collection in the library, the British Library for Development Studies legacy collection, and it’s full of printed material produced by scholars from Africa, Asia, Latin America in the sixties, seventies, eighties. A lot of it doesn’t exist in digital form. Right? It hasn’t been published as an eBook. You might find a scan somewhere. Right? Some of that knowledge doesn’t even turn up in the Sussex Library Search. Right? But it’s there in the stacks or in the folders, and, you know, there’s a huge amount of insight and history captured in there that the Internet doesn’t know about. Right? Which means ChatGPT will never know about it. And that reminds us that there are limitations even to Google searches and Google Scholar and everything, but, you know, the problems of LLMs are that on steroids. Right? So I use that as a chance to show people why offline materials, stuff that isn’t widely disseminated or openly available online, aren’t to be discounted, and that there’s a lot of valuable knowledge in there.

Heather Taylor (08:07): So same question to you then, Jacqueline. Could you start by telling us a bit about what you teach and how, sorry, how you see generative AI changing the way students approach your teaching? 

Jacqueline De Beaudrap: Alright. Well, I teach a couple of modules in Informatics. And informatics, for those of you who don’t know this term, is another term for computer science. I teach the introductory mathematics module for computer science, where we try to introduce all the basic mathematical concepts, that’s the name of the module, Mathematical Concepts, that they might need throughout their career. And we’re not going to cover absolutely everything when we do that, but we try to cover a lot of ground, with a lot of different ideas and how they connect to one another. And I also teach a master’s module in my own research specialty, which is quantum computation. And while these two things might seem a little bit different from one another, quantum computation, setting aside sort of any excitement that might come with it, is something which you can only really come to grips with if you have a handle on a large collection of mathematical concepts. So, similarly to Paul, I don’t really know precisely how it would affect how they engage with either of these modules. I see some possible signs in my modules’ assessments: the strange inconsistency with which the students sometimes answer questions, whether they are able to answer a question well or not even when the questions are very similar, sometimes has me scratching my head, but I don’t try to infer anything from that.

But it’s sometimes signs about, you know, basically, when I wonder whether or not they have used a large language model in order to generate answers, it’s a question of how they are trying to engage with the subject in general. And, you know, whether they’re relying on other tools to try to, basically, learn about the subject matter. It’s very good if students try to use other materials, other sources from which to learn about a subject. But there’s the question of, you know, how well they can sort of judge the materials that they choose to learn from.

I try to curate the approach, as, you know, all teachers do, all lecturers do: a particular perspective, a particular approach from which they can learn about a subject. And if they use alternative sources, this is good. You know, a very good student can do that, or somebody who might happen to find my explanation a bit puzzling. It’s much better that they look for other solutions, other approaches to learn from, than just sort of struggle along with my strange way of looking at things. But, ultimately, it does require that they be able to sort of critically evaluate what it is that they’ve been shown. You know, if they do use AI, if they do use a large language model and its explanations, first of all, they’re going to come across basically the popular presentation, which not only might be biased, but for a complicated subject might simply be wrong. It might lean very heavily on very, very reductive, very simplified presentations that you might see, for instance, in a popular science magazine, which is the bane of quite a few of my colleagues, and me in particular. The fact that these things get collapsed, the fact that they get just flattened down into something where you have a plausible string of words, a plausible thing that somebody can say, but which actually has no explanatory power. Not only is it sort of not the full answer, it’s that you can’t use it in order to understand the subject, and whether or not the students can recognize something like that is something that I worry about for Mathematical Concepts, the more elementary module that I teach. So it’s possible that they might use a large language model to try to generate examples of something or another, but there’s the question of how reliably the large language model is generating things.
Like, I actually haven’t kept track of just how well language models are doing at arithmetic, but if they can’t reliably sort of do any mathematics, then how are they going to learn anything from the large language model, let alone anything which has got more complicated structure, from which, you know, they’re trying to learn this slightly nuanced thing, even if it’s at the most elementary levels. The only way that you really get to grips with these techniques is by encountering them yourself, and, you know, a large language model maybe is going to be a little bit more successful at generating elementary examples of things, because there are certainly mathematics textbooks aplenty, many examples from which you can draw some sort of corpus of how you explain a basic concept, but if they’re being sort of chopped and changed, how do you know that you’re going to get some sort of a consistent answer? This is the thing that I’m concerned with. I don’t know precisely how often it comes up for my students, but I do hear them just sort of casually mention that they will look something up by asking ChatGPT. It makes me wonder what the quality of the information that they’re getting out of it is.

Wendy Garnham (13:11): I suppose that’s similar to the coding in R that we see in psychology students: they will often resort to using ChatGPT, and quite often it gets it wrong.

Heather Taylor: And then they get flagged. The research methods team are very good at identifying when AI did it, basically. But, yeah, I’m assuming that’s because it just does things in a weird way. You know? Yeah.

Paul Gilbert: LLMs are astonishingly bad at arithmetic. Like, almost amusingly bad. I think they can kind of cope up to, like, two or three figures, and then you start to see errors, because again, it’s predictive text. It can’t do maths. And I think one of the, like, really important things about bringing this into the classroom is, you know, part of what we mentioned earlier about, you know, students might lose confidence or lose motivation because they think, oh, ChatGPT can do it, I’ll just look it up on there. But there’s a lot of things that it’s super dumb at. Right? And I don’t think we should shy away from saying it’s really dumb at a lot of things, and we shouldn’t rely on it, and we should think about why that is. And it’s because something that is good at predicting the likely next word based on the specific set of words that it was trained on can’t do a novel mathematical problem, and there’s no reason why you would think it should. And there’s so much hype that equates large language models with human cognition that people seem willing to accept it can do a whole bunch of things that it can’t. Right? Even down to really trivial sort of slips, like assuming it’s a search engine. Right? But it’s stuck in time. It can only answer things in relation to its training data. It’s not something that can actually search current affairs. Right? And it is kind of surprising how both some students and some colleagues aren’t fully aware of that, which I think needs to be, like, the super basic minimal starting point: people need to understand the data infrastructures that they’re messing with and what they can actually do and not do.
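Paul’s point that a next-word predictor can’t do maths can be caricatured in a few lines of Python. This is a deliberate oversimplification, not a model of any actual LLM: it contrasts answering arithmetic by recalling previously seen examples, which only works inside the training distribution, with actually computing, which generalises to any operands.

```python
# A caricature of "arithmetic by pattern recall" versus real computation.
# The recall-based answerer only knows sums it happened to see in training,
# so it has no answer outside that range. Real LLMs are far more
# sophisticated, but the underlying contrast is the point being made.
training_sums = {(a, b): a + b for a in range(100) for b in range(100)}

def recall_add(a, b):
    """Answer from memorised training examples; None if never seen."""
    return training_sums.get((a, b))

def compute_add(a, b):
    """Actual arithmetic generalises to any operands."""
    return a + b

print(recall_add(7, 5))         # 12: inside the training range
print(recall_add(1234, 5678))   # None: never seen, no way to answer
print(compute_add(1234, 5678))  # 6912: computation doesn't care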

Heather Taylor (15:18): You know, it’s really agreeable as well. So it depends on how students phrase questions, or if they phrase it almost as, is this right? Because, and this shows my boredom, I asked it to tell me what the date was. Let’s say, for example, it was June 19, and it told me, and I said, no, it’s not, it’s June 20. And it said, oh, I’m so sorry. You’re right, it’s June 20. I said, no. It’s not. It’s June 19. Why are you lying to me? And I asked it, why is it so agreeable? And it was like, this is not my intention. You know? And it’s like, it’s really, this is a thing. You can feed it nonsense, and then it will go, yes, that nonsense is perfect. Thank you. You know?

Paul Gilbert: Yeah. But this, sorry. Like, to go back to Emily Bender, who I mentioned earlier, the linguist who writes a lot about large language models. Something that she says that I think is really important is: they don’t create meaning. If you impute meaning from what they create, that’s on you. Right? But these are synthetic text extruders that produce statistically likely strings of words.

Heather Taylor: So if I’m saying something, you go, no, that’s probable then.  

Jacqueline De Beaudrap: It doesn’t sound obviously wrong.  

Paul Gilbert: But if you take that meaning as meaning that it can think, that’s on you. Right? It’s just a thing that responds to prompts and spews out words and it looks like it can think but it can’t. 

Jacqueline De Beaudrap: That’s been going on for ages as well, from the earliest chatbots, like ELIZA back in the 1960s, where a lot of people were convinced right from the very beginning that they were talking to somebody who was real, or to something that was actually thinking. This might have been partly because of the mystique of computers in general. You know, the mystique of computers changes, but people still remain impressed by computers in a way which is slightly unfortunate, I think. Like, they’re very useful devices. Speaking as a computer scientist, I mean, I enjoy thinking about what a computer can help me to do, but it’s good to have an idea of what is realistic to expect from them. And people have often sort of been looking for something with which they can talk, and even something that was just responding according to a sort of formula, something that in the 1980s everybody could have on their personal computer, this is not something which is very deep.

People are constantly looking for something to relate back to as a person. So you always have that sort of risk. But yeah, if it’s not actually accessing any information, if it’s not using the information in any particular way that could actually produce novelty, and for which you can have confidence that that novelty might mean something because it’s in some sort of correspondence with a model informed by the actual world around it, you always have that risk of people imputing upon it a depth, a meaning, and a usefulness that is far beyond what it actually has.

Wendy Garnham (18:10): I think we’ve touched on this a little bit, but Paul, do you think generative AI is affecting how students value subject expertise? And if so, in what way and what impact does it have? 

Paul Gilbert: It’s a really good question and it’s quite a hard one to answer. I think, you know, you can imagine a risk where people think, oh, what’s the point in having a specialist because I can just ask – it can tell me everything. Right? But we know it can’t, and I think a lot of students are pretty switched on to that. And, again, I think this is why it’s important to embed some of that, like, critical data literacy and critical AI literacy into the classroom. Just to pick up on something Jacqueline said about whether or not these large language models can produce novelty, produce new knowledge that is meaningful, I think it’s also worth thinking a bit more deeply about what we mean by subject expertise, which isn’t just having access to loads of references and being able to regurgitate stuff. Right? Leaving aside the fact that, large language models often get references wrong and make them up, right? Let’s just pretend they can do that. 

That’s still not what subject expertise is. Right? And a lot of it is about developing certain styles of thinking, certain critical capacities, abilities to see connections, right, in the social sciences. In one way or another, a lot of disciplines talk about the sociological imagination or the ethnographic imagination or the geographical imagination. And it’s about a certain way of thinking and making connections. And having a kind of imaginative capacity that comes along with subject expertise, I think, is really important. There’s a bunch of work that’s been done by some people in Brisbane, in Australia, where they have queried a whole bunch of different chatbots and different generations of the same chatbot and so on, to ask about ecological restoration and climate change. And aside from the stuff that by now, I think, hopefully, most of us know about these large language models, that, like, 80% of the results it returns are written by Americans and white Europeans, that it ignores results from a lot of countries in the global South that do have a lot of work on ecological restoration, all these kinds of things which we call biases, but which I see as sort of structural inequities in the way these models are trained. I think one of the most interesting things they found, and again, it makes sense based on what we know about these models, is that they never tell you about anything radical. Because again, it’s a backwards-looking probabilistic thing. Right? What’s the best way to deal with this problem? Okay. Well, it’s going to give you something from its training corpus, which is probably based on the policies that have been done in the past. Except, you know, we are facing a world of runaway climate change, and the things that have been done in the past are not the things we need to keep doing. Right?
And it just would not answer about things like degrowth or agroforestry or anything that, you know, I wouldn’t even think of agroforestry as particularly radical, but, you know, it’s not mainstream. Right? It’s not been done a lot before. And so they just don’t want to talk about it. Right? And having the capacity to look at a problem, know something about what happened in the past, think about what the world needs, and be creative, be innovative, and have an imagination, that is a capacity that a lot of our students at Sussex really have, and that absolutely cannot be replaced by a large language model. Right? And I think, as well as emphasizing that ourselves, we need to encourage them to become aware of that capacity in themselves. Right? That, you know, you might feel overwhelmed or demotivated, or, like, you want to ask ChatGPT or Claude or whoever, but, like, both what we are trying to get across as subject expertise and what we want them to leave with massively exceeds anything that this kind of very poor approximation of intelligence can offer.

Wendy Garnham: Yeah. It sounds as though there’s a big role to play for like active learning, innovation, creativity in terms of how we’re assessing students and how we’re getting them to engage with this subject material I guess. So that’s music to my ears. 

Heather Taylor (22:22): Also, in the same respect as that, we’re not meant to be information machines, you know? And I think if a student came to uni hoping to meet a bunch of information machines, well, they’d be wrong. Hopefully not disappointed, because it’s better than that. But, you know, I think also that teachers have the ability to say when they don’t know something, and to present questions that can help them and the students try and start to figure out an answer to something. And, you know, I really love it actually when I’ll make a point or an argument in a workshop.

I had this last year with one of my, well, last time, one of my foundation students, and I was very pleased with the argument I put forward. I was showing them how you make a novel but evidence-based argument, you know, where you take pieces of information, evidence from all over the place, to come up with a new conclusion. And I was very pleased with this. Anyway, this student of mine, she was brilliant. She rebutted my argument, and it was so much better than mine. Right? It really was, and it was great. And as a teacher, that’s what you want to see happen. And I think with things like ChatGPT and any of these AI things, they’re not going to do that. They’re not going to encourage that, and they’re not going to know how. You know? They aren’t asking questions. They ask you questions to clarify what you’re asking them to do, but that’s it. And I think, yeah, I completely agree with you. Students can get so much more out of their education, you know, by recognising that they’re so much more than information holders. You know?

Wendy Garnham: So same question to you, Jacqueline. Do you think generative AI is affecting how students value subject expertise? And if so, in what way and what impact does it have? 

Jacqueline De Beaudrap (24:15): I think it does affect how students value subject expertise. This is something that I see when assessing final year projects, or even just seeing what sort of project students propose for their final year project, where in computer science, as you can imagine, a lot of students are proposing things that involve creating some sort of model, not a large language model, but, you know, something that’s going to use AI to solve this and that, where they seem to have a slightly unrealistic expectation of how far they’d be able to get using their own AI model, and in particular, where they’re contrasting this to the effort that’s required in order to solve things with subject expertise. They seem to think that this is something which is easily going to at least be comparable to, if not match or surpass, the things that can come from people who’ve spent a lot of time thinking about a particular situation, a particular subject matter. They think that a machine, just by crunching through numbers quickly enough, is going to be able to surpass that. And, you know, they, of course, learn otherwise to a greater or lesser extent, basically to the extent that they notice that they have not actually met their objectives, what they hoped to be able to achieve. It’s more the fact that they have that aspiration in the first place. Now, I mean, part of it, of course, again in computer science, there’s going to be a degree of neophilia. They’re enthusiastic about computers, and why not. They’re enthusiastic about new things that are coming about with computers, and why not. It’ll be just a matter of the learning process itself that maybe some of these things aren’t quite all that they’re hyped up to be. But that’s sort of where they’re starting from, this idea that sheer technology can somehow surpass careful consideration. I find that a little bit worrying.

Heather Taylor: Do you think there are benefits of generative AI for student learning? If so, how can universities help students use these tools in ways that are ethical and supportive of their development as learners?

Jacqueline De Beaudrap: I’m not actually an education expert, so I can’t really say whether it’s likely that there are ways that generative AI can help student learning. I have seen examples where a native Chinese speaker was trying to use it to translate my notes into Chinese. So, okay, there might be some application along those lines, just trying to find ways of translating natural language; whether, you know, a generative AI such as we’ve been thinking about can do that well, I’m not in a position to say. Conceivably, as I sort of thought about before, maybe they can be used to come up with examples of toy problems, or simple examples, I guess you could say supplementing the course materials, in order to try to come to grips with some particular subject.

 
That’s something that I can imagine one might try to use generative AI for, where it’s possible that things won’t go very badly wrong. Apart from that, I guess I’m sort of viewing things through the framing of the subjects that I teach and my own particular interests, which basically involve the interconnectedness of a lot of technical ideas, and ones that I find fascinating, so I’m going to be extra biased about that sort of thing. About, you know, learning about various ways that you can understand and measure things and structure them in order to get an idea of a bigger whole, of how you can solve problems. And you can’t solve problems without having the tools at hand to solve the problems. Okay, some people might say, maybe an AI can be such a tool, but before you can rely on a tool, you have to know how to use it well. You have to know how it’s working. You have to know what the tool is good at. So even if you want to say, shouldn’t this just be one tool among others, I don’t see a lot of evidence, first of all, of where it is a reliable tool. The thing that you absolutely have to have in a tool is that you can rely on it to do the job that you would like it to do. Otherwise, I mean, you can use a piece of rope as a cane, but it’s not going to be very helpful to you. Maybe you just need to add enough starch, or maybe you should look for something other than a piece of rope. And this just builds upon itself. The way that you build expertise, the way that you can become particularly good at something, is by spending a lot of time thinking about the connections between things, by asking yourself questions about, you know, what does this have to do with that? You know, is that thing that people often say really true? Only by sort of really engaging yourself in something like this can you really make progress and really become particularly good at something. And by sort of devolving a certain amount of that process... okay.
Again, there’s a reason why I’m in computer science. There are some things, like summing large columns of numbers. Is it possible that we have lost something by not asking people to systematically sum large columns of numbers? 

Have we lost something important about the human experience, or maybe just the management experience, by devolving that to computers? Well, there will be some tasks, maybe, where it is useful to have a computer, not speaking of LLMs, but computers in general, having them solve problems rather than having us spending every waking moment doing lots of sums or doing something more or less tedious. There will be some point at which the trade-off is no longer worth paying. And I believe that trade-off, you know, happens well below the point where you are trying to really come to grips with a subject and with learning. So the only thing that one really wants to have a lot of for learning is a lot of different ways of trying to see something, a lot of different examples, a lot of ways of trying to approach a difficult topic. And beyond that, cooperating with others, that’s a different form of engagement. It’s a way of swapping information with somebody, ideally people who are also similarly engaged with the subject. It doesn’t help, of course, for you to ask somebody who is the best in your class and then just take their answer. That’s the same sort of problem that one has with LLMs: relying on something else without engaging with the subject yourself. The most important thing is to sort of try to draw the line at a point where the students are consistently engaging with the learning themselves, with the difficult subjects. And if the resources that they usually have to hand aren’t quite enough, well, that’s not necessarily something that you solve with LLMs. You can solve that problem by providing more resources generally, and that’s a larger structural problem in society, I think, but that’s not quite what LLMs are about.

Wendy Garnham (31:07): It sort of sounds a little bit like we’re saying that the purpose of education is changing so that it’s more about encouraging or supporting students to be creative problem solvers or innovative problem solvers. Would you say that is where we’re heading? 

Jacqueline De Beaudrap: I don’t know if I can say where we are actually heading. Obviously, it’s good to be a creative problem solver. And there has always been the question of whether, originally, we spent more time doing things by rote: you solve an integral and you don’t ask why you’re solving the integral. We’ve had tools to help people with things like complex calculations, the slightly annoying, picky, fiddly details, for a long time. For example, a very miniature version of the same thing that we have with LLMs is, in mathematics, the graphing calculator, which was a miniature computer. It wouldn’t break the bank, although it’s not as though everybody could afford one. But you could punch into it and ask it to solve a derivative, to solve an integral, to plot a graph: all sorts of things that were once solely the domain of people. Before the 1950s, how to sketch a graph was something that was actually taught. Here are the skills, here are the ways that you can, not precisely draw what the graph is like, but get a good idea of what it’s like – a good qualitative understanding. And even now, even though I do not draw very many graphs myself, the fact that I was taught that means I have certain intuitions about functions that somebody who hasn’t been taught how to do that wouldn’t have. Now, does that mean I think everybody should always be doing everything with pencil and paper? No. There will be some sort of trade-offs. 
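[Editor’s note: the graphing-calculator point can be made concrete with a short, purely illustrative sketch – this code is not from the episode, and the function chosen is hypothetical. It does the mechanical part a calculator automates, numerically differentiating f(x) = x³ − 3x and locating its turning points, while the qualitative reading of the result (where the function rises, falls, and turns) is exactly the hand-sketching intuition Jacqueline describes being taught.]

```python
# A miniature "graphing calculator": numerically differentiate
# f(x) = x^3 - 3x and find where the derivative changes sign --
# the turning points a graph-sketcher would look for by hand.

def f(x):
    return x**3 - 3*x

def derivative(g, x, h=1e-6):
    # Central finite-difference approximation of g'(x)
    return (g(x + h) - g(x - h)) / (2 * h)

def turning_points(g, lo, hi, steps=1000):
    # Scan [lo, hi] for sign changes in g', then refine each
    # crossing by bisection on the derivative.
    points = []
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    for a, b in zip(xs, xs[1:]):
        if derivative(g, a) * derivative(g, b) < 0:
            for _ in range(60):
                m = (a + b) / 2
                if derivative(g, a) * derivative(g, m) <= 0:
                    b = m
                else:
                    a = m
            points.append(round((a + b) / 2, 6))
    return points

print(turning_points(f, -3, 3))  # turning points near x = -1 and x = 1
```

Finite differences and bisection are used here only because they are self-contained; a real calculator would do the same job symbolically or graphically. The code finds *where* the graph turns, but deciding what that means for the shape of the curve is still the human’s qualitative understanding.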

It’s a question, you know, with the creative problem solving. It’s good to have people spend more of their time and energy trying to solve problems creatively, to think about things and engage with them, rather than being constantly caught up in doing things by rote. Now, as far as the creative part goes: if you put students in a situation where they are tempted to put the creative part into the hands – the metaphorical, digital hands – of a text generator, then it’s not going to teach them how to engage with their subject in a way that lets them deal with it creatively. Like most things, you have to know what the rules of the game are before you can interpret them well, or before you can know where you can break them. True creation is coming up with something which hasn’t been seen before, where you realise that actually we can do things differently – and, of course, in mathematics and computer science, you’re concerned also with whether it’s technically correct. 

The rules that we have in place aren’t in place because they are the only correct way to do things. They are in place because they are ways of doing things that we are confident will work out. That doesn’t mean they are the only way things can be done. But you can only see that if you know how the system was set up in the first place. Only if you know how to work with the existing tools we have, through mastery of them, can you figure out how to do things differently in a way that will work well. And if you always devolve all the hard stuff, the so-called hard stuff, to a computer, that’s of course not going to tell you anything new. Certainly, if it accidentally spits out something at random, some novel random combination of symbols, it won’t be able to tell you why it should work in any way that you can be confident of. You need to have the engagement with the subject itself in order to even recognise anything that could be an accidental novelty. 

Heather Taylor (34:59): Same question to you then, Paul. Do you think there are benefits of generative AI for student learning? If so, how can universities help students use these tools in ways that are ethical and supportive of their development as learners? 

Paul Gilbert: So my instinctive answer is, honestly, no. I don’t think there are benefits. Right? Maybe there are, but I haven’t been shown them yet. Right? 

And I think the reason I want to say that is because there is so much hype, and so much of a framing of inevitability about these discussions, that frequently people are saying, well, if we haven’t figured out how they’re going to improve student learning, don’t worry, it’ll come. Right? And we’ll know. If you’re going to change pedagogy and change the curriculum and change how we teach, show us first how it’s better. Right? I think that’s the reasonable way round to do it. A tech journalist – I think it’s Edward Ongweso, or possibly another journalist – talks about the way people tend to treat LLMs as stillborn gods. In other words, these are perfect, all-powerful things that just aren’t quite there yet, right? So if you just hang on, they’ll improve our learning. Right? And we saw in the discussions we’ve had at the university that the Russell Group principles on AI make a whole series of claims about how AI can improve student learning without a single example or footnote or reference. Right? And yet we can find pedagogical research showing that it can actually undermine confidence, undermine motivation, all these kinds of things. Right? So, until people can show me that it’s good, my instinct is to say no. 

And I know, to dig into that a little bit deeper, there are claims made about how it can be important for assistive technologies. And it might be that that’s true in certain cases. But there’s equally evidence that it can be disabling. And there are some colleagues in Global, Lara Coleman and Stephanie Orman, working on this. And part of this is also about what it means to learn.  

So something I sometimes go through with my students, when I’m trying to explain what we want from them, is why giving example after example after example won’t get you a first. And I show them the Bloom’s Taxonomy triangle. I don’t know if you came across that in teacher training, where the bottom layer is recall: you evidence this learning by just reproducing something you’ve memorised. It goes on to understanding: you show what it means, how it works. And then you apply your knowledge to new situations, draw connections between things, evaluate it, create something new. Right? And so this is a useful tool to show that if you keep just building loads of the bottom layer with more examples, you’re not going to get higher than a 2:2. Right?  

Heather Taylor: That’s very good to know, by the way.  

Paul Gilbert: And equally, if you go straight to, like, I’m going to create something new at the top, but you haven’t built a foundation of knowledge, then you’ve got a rubbish pyramid that will fall over. 

Heather Taylor: What’s that called again? 

Paul Gilbert: Bloom’s Taxonomy. It’s a now quite old example of constructivist pedagogy. 

Heather Taylor: It’s very useful, though. 

Wendy Garnham: Put it into ChatGPT. 

Heather Taylor: Yeah. I like it –  ‘You’re just at the bottom, mate’. 

Paul Gilbert (38:16): But it’s also useful to show that just learning things and regurgitating them is not what we’re looking for. That’s not the kind of higher process of learning. Right? And I’ve been a bit disappointed to see people embrace the idea that LLMs can be really useful in creating study guides or creating summaries of articles and so on. Because what you’re doing there is outsourcing the process of creating the understanding, creating the knowledge object that moves you up the pyramid. If, instead of understanding a text, you ask ChatGPT to summarise it for you, you’ve essentially just stuck yourself in the remember-and-regurgitate bit lower down. You’ve made it shorter, but you’ve not done the work to generate the understanding and the applications, and to make those connections with other readings you’ve had – the kind of things Jacqueline was talking about. And so I’m really cautious of a lot of the hype and inevitability framing around the idea that these are going to be great as assistive technologies, that they’re going to improve people’s learning and so on. If that’s the case, evidence it before spending loads of money and reshaping higher education around it. And something else I wanted to add on to that: you were also talking a bit about cases where LLMs could potentially be useful. Right? You could create little problems, or, you know, okay, sure. And there was an article recently in the Teaching and Learning Anthropology journal that sort of made this case. And superficially, when you start reading it, you think, okay, they’re talking about something similar to what we’re talking about here. Right? They’re saying, we want to reject this model of education that’s just regurgitating facts. Right? I think we’re all on that page. 
And ChatGPT is an opportunity to get students to interrogate ideas, to ask what’s wrong with things, to engage in dialogue. They even use the language of the Brazilian educator Paulo Freire to criticise the banking model, right, where you just pour ideas into the passive student’s head and they vomit it out. Right? None of us want to be doing that. But what got me a bit frustrated with that paper is the other half of what Paulo Freire was talking about – banking education is bad; what you want is critical pedagogy, where people come to an understanding of their place in the world and the structures that create oppression and unfreedom, so they can use that knowledge to make a better world, to achieve liberation. Right? And it genuinely boggles my mind that people will invoke Freire in a discussion of ChatGPT and not mention the political economy of AI. 

Right? It’s deeply frustrating that there’s this sense that, oh well, it’s incidental to this whole discussion about education that we’re talking about some of the greatest concentrations of wealth and power in history. Right? That is also premised on an absolutely insane expansion of energy and water usage. And it’s hardwired into the model. Because of this understanding that these models are, you know, stillborn gods, and they’re going to be perfect when we get them right, that’s the justification for Zuckerberg, Sam Altman, all of them. Their model is all about scale: more and more, bigger and bigger, more data centres, more NVIDIA processing units. And that means more energy usage, and it means more water to cool those.  

And so I think last year there was a study that showed worldwide AI data centre usage emitted the same amount of carbon as Brazil, right, which is a big agro-industry emitter. There have also been studies from UC Riverside suggesting that within two years, worldwide data centre freshwater withdrawal will be about half of UK water usage. Right? And that’s concentrated in certain areas, typically in low-income areas. And all of those studies were done before, over the last few weeks, Zuckerberg announced a $23 billion investment to expand data centres, and OpenAI set out to spend $500 billion building data centres, mostly running off fossil fuels. Right? Mostly located in low-income, often water-stressed environments. Right? So if that is the pathway to finally getting it right, so that this model works – oh yeah, we’ll iron out those hallucinations and it won’t give you fake references anymore – genuinely, what is wrong with you that you think that is a good pathway to an educational future? Right? Like, I don’t understand it. And Sussex has these ambitions to be one of the world’s most sustainable universities, and you can’t bracket that and pretend it’s not applicable to engagement with technologies that have this political economy, this political ecology. 

And that political economy follows exactly from the claims about what they can do. Right? All of the claims about their magical power are based on scale. Right? These things are powerful because we’ve trained them on more data than you can possibly imagine, and we have more data centres powering the models, responding to more queries than ever – you couldn’t even imagine the scale of it. Right. Great. That pathway to refining these models is essentially ecocidal. Right? And this is before you even get into the labour stuff, and the fact that, you know, I really dislike the language of the Cloud, because it implies a virtualism. Oh, the Cloud. Right? The Cloud is made of copper and plastic and lithium and silicon – stuff that is ripped out of the Earth.  

Jacqueline De Beaudrap (44:02): So it invites you to think about it in an extremely vague way. 

Paul Gilbert: Yeah. Right. Whereas, actually, what it is, is data centres, largely in low-income, water-stressed communities. Right? Last week there was an FT leader about Zuckerberg seeking $23 billion from private equity to expand his data centres. And there was a story on the tech journalism website 404 Media on the same day about a community in Louisiana, one of the poorest towns in the state, that is going to have their utility bills go through the roof because a bunch of data centres are being built by OpenAI, which require the construction of new gas plants, and all those costs have to be paid for by someone, and it’s probably not going to be OpenAI. Right? Yeah. That is the sort of backstage on which these magical LLMs are unfolding. Right? And I think if you’re willing to have a discussion about the educational benefit of these tools without situating them in that political economy, that political ecology, then certainly, as someone who works in a development studies department, that’s a massive intellectual and moral failing.  

Heather Taylor: So, essentially, like you were saying, it’s all hypotheticals about whether eventually they can get AI to be magic. And even before you go thinking about the environmental implications, there are lots of implications of making AI that perfect: what’s going to happen to people’s jobs, all that side of things as well. But essentially – this is a question – even if AI were to be magic eventually, and do everything that they want it to do, or that we theoretically want it to do (I have no idea what I want it to do), the cost would be the world burning. 

Paul Gilbert: Yeah, some people don’t want to have serious discussions about that. That’s fine. But then just, you know, you’re not a serious person, I guess. 

Heather Taylor: I didn’t – I knew that you had environmental – honestly, this shows my ignorance, really. But I knew that there were environmental consequences to AI. I did not know it was this deep, or where the consequences were being worst felt, which is horrible. 

Paul Gilbert: Yeah. And, you know, it’s utterly bizarre that a lot of data centres are being built in some of the most arid parts of the US, which are already water-stressed. So aside from the massive ecological consequences of drawing down further freshwater and diverting it, what has happened historically in the US when you’ve had water diversion for industry, especially agro-industry, is massive fire risk. Right? We’ve all seen what happens to California in the summer now. And now loads of data centres are being built across the arid parts of California, and they will not work – they will catch fire and shut down, and the models will stop working – if they’re not cooled with vast quantities of fresh water. Right? So the guys building them have every intention of cooling them down with vast quantities of fresh water. 

You don’t spend $500 billion on a data centre package if you are comfortable with it melting straight away. Right? So there is a genuinely huge ecological threat, and a livelihood threat associated with that, which almost always lands on the most marginalised communities, because when people dump massively polluting and water-stress-creating industries, they don’t usually do it in the most affluent neighbourhoods, right, because those people are well organised and well networked. And, yeah, this is a serious part of it. And I think that with the rush to scale, everyone just seems to have accepted that the only AI future is the one we’re allowing Zuckerberg and Altman and people to lay out for us, in which three or four giant companies compete to buy up all of the world’s NVIDIA chips and create more and more data centres. Right? Maybe there are things that AI can do differently that don’t require this more-and-more, bigger-and-bigger scale of operation. But if that’s the path we’re going down, right – 

Heather Taylor: And there’s already an energy use problem, though, isn’t there? There’s already an energy use problem, and we’re using energy for something we don’t need, because we didn’t have it a little while ago. It’s not something that we need. You know? So even before we think about making it bigger, the fact that it’s even in existence now is quite a concern. 

Paul Gilbert: And this is also why I find this inevitability frame that people use to talk about this so troubling. Right? Whenever someone uses the framing of inevitability to talk about a new technology, they’ve got an interest in it. Right? Because it hurries things up, and it presents people who have questions as just getting in the way, because this is going to happen, right? So get on board. And this is explicitly what we’re hearing from our local MP, our Science and Technology Secretary. 

Heather Taylor: That’s what I thought, and I’ve not got any stakes in it.  

Paul Gilbert: Things are made to be inevitable because powerful actors tell you they’re inevitable. There’s nothing inherently inevitable about this. Most people don’t even know what it can do when it works, and yet we’re accepting that it’s inevitable. Right? I think that’s – 

Wendy Garnham: Is there another side to the inevitability, though, which is that it’ll eventually fold in on itself, because eventually you’ll be feeding the machines with information that they themselves have generated? I mean, is that a possibility? 

Paul Gilbert: Possibly. I mean, the Internet is already so full of slop. Right? Just AI-generated garbage. And there are examples of that. I can’t remember who it was, but earlier in the discussion we were talking about translation. Right? Some of the folks at the Distributed AI Research Institute, Timnit Gebru and colleagues, looked at these claims that Meta and others have made about massive natural language processing models that could translate 200 languages. Right? 

And they looked at a subset of African languages which the research team spoke, and they found that it was really bad. Right? But it was also bad because it had been trained on websites that were translated by Google Translate. Right? So when you gave it vernacular Twi, absolute rubbish came out. Right? So that’s already happening. And then you think, well, what is it for? It’s the kind of thing you were talking about: neophilia. Right? People want new things. People want to move fast and break things – but why? What benefit is this going to bring us, and is it worth it? 

Jacqueline De Beaudrap (50:40): Yeah. Yeah. It’s the slogan of a particular company to move fast and break things, and they had reasons for wanting to do that. It’s because it made them money. 

Paul Gilbert: Yeah. Yeah. And speaking about those companies: you know, you’re saying this is a recent thing. A few years ago, the Silicon Valley giants were presenting themselves as green. Right? Go back a couple of decades: Google’s slogan was Don’t Be Evil, which I think just became funny after a while. But a few years ago, Microsoft was promising they weren’t only going to be carbon neutral, they were going to become carbon negative. Right? Really leaning into renewable energy, carbon capture and storage – which is a whole other story about how it may not actually work. But, you know, there was this sort of green vibe they were going for. After the boom in LLMs from the ChatGPT-3 launch, they’ve all just chucked their carbon neutral policies straight out the window, and it’s back to coal, back to gas, because we need those data centres. Right? Is this a good time to be doing that? When this is of unclear utility to us, and potentially poses a threat to jobs, and can’t do half of the things – 

Heather Taylor: By the way, it’s boiling in here. 

Paul Gilbert: Yeah. Yeah. So, you know, can we help students use these tools in ways that are ethical? Well, we’ve got to ask, are these good for our students? Right?  Can we actually evidence that, not assume it because someone has told us it’s inevitable? And once we’ve figured that out, is it worth it? Right? There’s a whole bunch of things we can all do that make our lives easier. Are they all worth it? 

Wendy Garnham (52:15): 
So that brings us to our last question. So, Jacqueline, I’m going to direct this to you first. Obviously, educators will have varying views on AI in higher education. But for now, it is something that we all must contend with. So with that in mind, what advice would you give to colleagues in terms of AI in higher education? 

Jacqueline De Beaudrap: Apart from acknowledging that it’s there, and encouraging colleagues to talk to students about AI, the things that it purports to offer, and how it might fall short, the main thing that I would do is basically to not use it yourself, to not feed into it. The more we use these tools to cut corners in our own work, the more we provide a bad model for the students. I think that students can tell when you put some effort into anything, from designing a script for your lectures to your slide deck or a diagram. If they see it modelled for them that it is quite normal to just ask a computer to spit something out according to some spec, they’re going to think of it as a thing that one should do. It’s a thing that I systematically don’t do. I’ve never used any of these tools, because I’m not interested in feeding the machine that I think is going to undermine the things that I care about. I would encourage colleagues to ask themselves if they actually want to be using a machine that’s going to have this sort of effect. 

Wendy Garnham: 
Right. Same question to you, Paul. What advice would you give to colleagues in terms of AI in higher education? 

Paul Gilbert: 
I think my answer is quite similar to Jacqueline’s. I fully agree. You know, don’t use it and then expect your students not to. Yeah. Come on. I have used it twice, to show my second years how terrible the answer to one of the assignments would be if they asked ChatGPT, and why they are better than it. Right? And I think what we can do, as well as getting our students to think about the data infrastructures behind this, its limitations, what it can’t tell them, how not-smart it is, how destructive it can be, is put more work into highlighting for our students how much better they are than these models. Right? There’s all this discussion of, like, oh, we’re going to replace lawyers and radiologists and so on. And I’m not sure how true that all is: if you replace all the junior lawyers so that no one has to read through court documents, who, in five years, is going to be a senior lawyer? Right? You’ve got to have the foundations. Right? 

So, again, there’s a lot of inevitability framing and hype, which I think we need to cut through. But also, we have a lot to offer our students in higher education, and they have a lot to offer us, that cannot be replicated, reproduced, or displaced by ChatGPT, and we should find space in the classroom to emphasise that. Right? Even if that is explicitly saying: look, this is garbage. It’s powerful and it’s big and it’s fast and it looks shiny, but you guys are smarter than it. Right? And I’m not just saying that. Genuinely, my students produce better stuff than AI could, and I think that’s true for a lot of us, right? And we need to give them that trust and meet them on that terrain, rather than assuming they’re all just itching to fake their essays. Some people will, and that’s always been the case, and it will ever be thus, right? There’s always been plagiarism and personation. We can’t stop it, right? But we can highlight the things that AI can never do, and encourage our students to value that in themselves. 

Jacqueline De Beaudrap: There’s something that I could add, by the way. It feels a bit weird to ‘yes, and’ what Paul was saying about the ecological and the sociological impact, which, as far as I’m concerned, should be the conversation-stopper in terms of the ethical use of AI. But about the notion that these are stillborn gods, that we can just work to improve them: well, this is part of the technophilia, the technophilic impulse, in the computer industry generally. We can look at other things that happened at scale. For example, Moore’s Law, where computing power became larger and larger and computers became faster and faster. This didn’t always make our software better. In fact, it made it worse in a lot of respects, because people stopped valuing writing code well. So even as things scale up, this isn’t going to be a guarantee of an improvement in quality. In fact, if the past is any indication, things will get worse. So even after burning the world, we may not have anything particularly nice to show for it. 

Wendy Garnham: Simon, as a learning technologist, do you want to add anything on the role of AI in higher education before we close our podcast? 

Simon Overton (57:34): I think the only thing that I would say, which I feel is perhaps a little bit hopeful, and it’s not just limited to education, is that I believe the use of AI and the proliferation of slop (great band name, by the way) is going to lead us to value things that are real a lot more. When I was at university, I really loved the essay ‘The Work of Art in the Age of Mechanical Reproduction’ – I think it’s by Walter Benjamin – which was about how people were worried that if we could produce posters of, you know, the Van Gogh Sunflowers, we wouldn’t value the original anymore. But that’s not what happened. It actually became more and more valuable. So I think, and I hope, and I believe, that it’s all quite new and quite scary for us now, but it will encourage us to value the things that Paul mentioned just now, the things that can come out of the time and the relationships that we establish in higher education. And I think that ultimately that’s probably a good thing, even though it looks kind of scary. 

Heather Taylor: I would like to thank our guests, Jacqueline and Paul. 

Jacqueline De Beaudrap: Thank you. 

Paul Gilbert: Thank you. 

Heather Taylor: And thank you for listening. Goodbye. This has been the Learning Matters podcast from the University of Sussex created by Sarah Watson, Wendy Garnham, and Heather Taylor, and produced by Simon Overton. For more episodes, as well as articles, blogs, case studies, and infographics, please visit blogs.sussex.ac.uk/learning-matters

Posted in Podcast

Call for Participation – Sussex Education Festival 2026


Two people stand in conversation at the 2025 Sussex Education Festival, in front of a research poster titled “Understanding the student experience of Black, Asian and Minority Ethnic students on campus.”

*Deadline extended until Friday 6 March*

It’s that time of year again!  

On Friday 8 May 2026, the Sussex Education Festival will return for its fourth year. The Festival provides a supportive and collaborative space to celebrate and share our experiences, research and reflections on teaching, learning and assessment here at Sussex.  

We encourage all colleagues involved in education (in any capacity) to consider submitting a proposal. We also welcome co-presenting with students and have some student participation vouchers available.  

This year’s festival has three themes and a variety of ways you can participate. 

Themes 

The three themes for this year’s festival can be interpreted broadly. We’ve suggested some topics below to get you started, but please see these as suggestions rather than limitations: 

Education for Progressive Futures 

  • Interdisciplinary teaching and learning 
  • Reimagining the ways we teach in changing educational landscapes 
  • Equipping students to make a difference 
  • Lifelong learning, employability and citizenship 
  • Digital literacy and skills 

Impacting Student Experience 

  • Building engaged learning communities 
  • Encouraging and listening to student voice 
  • Belonging and community building for diverse student populations 
  • Student mental health and wellbeing  
  • Designing and implementing impactful scholarship projects 

Transforming Assessment at Sussex  

  • Authentic assessment and assessment for learning 
  • Involving students in curriculum and assessment design 
  • Responding to, and evaluating generative AI 
  • Inclusive and accessible assessment for all learners 
  • Supporting students to develop agency in assessments and feedback 

Presentation formats  

There are two presentation formats to choose from: 

1: Work-in-progress lightning talks (7 minutes) providing short reflections on current practice or projects. 

2: Longer presentations reflecting on outcomes of pedagogic developments, scholarship or research in your chosen area (15 minutes).  

No plans to present? Keep an eye on Learning Matters for details of our innovation showcase – there may be other ways for you to get involved in Sussex Education Festival 2026.  

How to submit your proposal 

Please submit your proposal to our Call for Participation form.  

The deadline for submitting is 17:00 on Friday 6th March. 

You should receive a response from the Education Festival Steering Group by 20th March. 

If we haven’t convinced you to submit a proposal yet, here is some feedback from presenters in previous years: 

‘I wanted to write to say what a brilliant day I had on Friday! So many of the talks were inspiring, and the general atmosphere was great all day.’ 

‘It was a really rich and stimulating day of ideas and fascinating discussions. It has helped to spur several ideas of things to try to improve my own teaching.’ 

‘I just wanted to say how much I enjoyed the whole day – such a rich and fabulous programme, sparking so much fascinating discussion… and helped to launch a few new collaborations and friendships with colleagues across Sussex.’ 

Posted in Events

How can a process approach to assessment help address the impacts of AI?

by Dr Sarah Watson and Kamila Bateman, Academic Developers in Educational Enhancement

A process approach in teaching is not a new concept. It was first introduced by Stenhouse in 1975 as an alternative to the product model. It concentrates on teacher activities, learner activities and the conditions in which learning takes place. In focusing on the nature of learning experiences, rather than specific learning outcomes, the process model emphasises means rather than ends.

Although it predates the age of artificial intelligence, can Stenhouse’s approach offer a fresh perspective on AI in education?

This post highlights how academics at the University of Sussex and beyond are adopting process-oriented approaches to assessment, not only to reduce over-reliance on AI, but, more importantly, to strengthen the pedagogical purpose and value of assessment for students.

Shifting from ‘product’ to ‘process’

Shifting the focus from product to process allows us to foreground thinking, development, and learning. It also provides room to explore how AI might support students during this process, rather than replace it.

One effective approach is embedding writing practice directly into teaching. At Sussex Law School, Verona Ní Drisceoil (2023) describes dedicating ten minutes of each seminar to structured writing activities. This helped students develop their writing skills incrementally and prepare for end-of-term essays. Student feedback was overwhelmingly positive, with many reporting that the practice demystified academic writing.

Teaching on the foundation year at the University of Sussex, Sue Robbins (2023) similarly argues that when students understand the writing process, the perceived threat of generative AI diminishes significantly. Embedding academic skills into teaching therefore acts as a powerful deterrent to academic misconduct. As Robbins notes, our choices in response to AI are to avoid it, outrun it, or adapt to it. Given the rapid development of generative AI, we have a responsibility to support students in learning how to use these tools responsibly, both during their studies and beyond.

Another productive strategy is encouraging students to treat AI as a writing coach rather than a content generator. Alicja Syska (2025) suggests using AI as a tutor that prompts critical thinking and supports students in producing their best work, without doing the work for them. She advocates collaborative writing in the classroom and rethinking assessment criteria to emphasise original thinking, writing development, and opportunities for peer review.

Fostering engagement with the learning and assessment process

One approach to help students engage with process is to design marking rubrics that explicitly reward idea development and provide opportunities for peer review. Bianca A. Simonsmeier et al. (2020) note that such approaches support active, self-directed learning and encourage social interaction and reciprocal teaching, whether through online discussion forums or structured peer assessment. 

We can also diversify assessment formats beyond the traditional essay. Introducing reflective components, small-group critical evaluation, collaborative planning, or playful elements can increase engagement and ownership. Denise Wilkinson (2024) suggests using “flipped assignment” techniques, interactive engagement tasks, and collaborative reflection to help students feel more invested in their work. This emphasis on ownership is echoed by Helen Foster (2024), who highlights the role of formative assessment in supporting self-regulated learning and creating more inclusive learning environments.

Another opportunity lies in building on what students already know about AI and how they use it. Tim Requarth (2025) advocates assignment-specific guidance that supports a balanced approach to AI use, neither punitive nor overly permissive.

In the Economics department at the University of Sussex, Gabriella Cagliesi (2025) and Carol Alexander (2025) have taken this further by developing customised ChatGPT tools that store module content and are fully integrated into the learning process. These tools function as trusted study aids, enabling students to engage critically with course material.

Cagliesi and Alexander found that their custom GPTs allowed students to explore, question, and critique content outside of class, creating more space during teaching sessions for relationship-building, personalised support, and meaningful discussion.

The rise of generative AI reinforces something we often overlook: meaningful learning happens through human connection and collaboration. As Syska suggests, thoughtfully integrating AI may allow us to reclaim time and space for deeper engagement with learning and for valuing what we bring as human educators and learners in a digitally dominant world. 

Posted in Blog

Making (class)room for AI: Integrating custom GPT into teaching and assessments

A photograph of Gabriella Cagliesi, smiling in front of a window.
Professor Gabriella Cagliesi (Economics)

Gabriella Cagliesi is a Professor in the Department of Economics and Teaching and Learning Lead at the University of Sussex Business School. Since joining Sussex in 2019, she has championed innovative and inclusive teaching, earning recognition for initiatives that close attainment gaps and enhance student experience. 

With a PhD in Economics from the University of Pennsylvania and over thirty years of teaching experience, Gabriella is research-active in applied international macro-finance, applied behavioural economics, and empirical studies on labour markets and educational choices and policies. She also collaborates on projects that support widening participation and enhance student outcomes.  

This case study illustrates how Gabriella integrates generative AI into her teaching, encouraging students to use AI as a study tool and engage with it critically and creatively.  

1. How do you bring generative AI into your teaching? 

I integrate generative AI through a customised ChatGPT environment, rather than having students use open platforms. This involves creating a closed system where I upload teaching materials, define the AI’s role, and set clear boundaries on what it can and cannot do. In seminars, students work in groups using this custom AI tool, and I emphasise ethical use and the risk of hallucinations even on a closed and bespoke AI platform. AI plays different roles across the teaching sessions, such as a teammate (acting as a devil’s advocate), tutor, Socratic teacher, simulator, or podcasting assistant. After each activity, students reflect on AI’s role and submit their interaction logs, which I review and provide feedback on. 

2. What approaches do you use to integrate AI into your teaching, and why have you chosen them? 

I use three main approaches: 

  • Interactive learning design: AI is embedded in classroom activities to encourage experimentation, scenario testing, and critical thinking. 
  • Teaching material development: AI helps create study guides and summaries of teaching, accelerating routine tasks while maintaining transparency with students. 
  • Assessment integration: AI is incorporated into assessments through synthetic datasets and reflective tasks, requiring students to critique prompts and evaluate AI’s limitations. 

These approaches were chosen because they align with my discipline (economics), promote higher-order thinking skills, and prepare students for real-world applications of AI. 

3. What impact has AI had on student learning, curriculum design, or academic practice? 

AI has significantly enhanced student engagement and understanding. Students report that simulations and visualisations help them grasp complex concepts better than formulas alone. It has improved critical thinking, as students learn to question assumptions and evaluate AI outputs. Curriculum-wise, I redesigned assessments to include AI-enabled tasks and reflection components, shifting focus toward interpretation, reasoning, and AI literacy. For me professionally, this work has led to invitations to present at conferences and collaborate across departments, fostering broader pedagogical discussions. 

4. Looking back, what would you do differently? 

Initially, I introduced AI only during seminars, but I now realise the value of pre-class integration. Allowing students to explore AI before sessions would have deepened engagement. I would also have focused earlier on student learning through AI, rather than policing its use. Designing prompts that encourage reflection and reasoning has proven more effective than simply controlling access. 

5. Three practical tips for fellow academics: 

  1. Build your own AI literacy and confidence: Experiment with tools to understand their capabilities and limitations. Confidence in using AI translates into better student experiences. 
  2. Shift from content delivery to challenge design: Create tasks where AI supports but does not replace human judgment. Clearly define acceptable uses and disclosure requirements. 
  3. Use AI to deepen reflection, not replace it: Incorporate reflective activities where students critique their own reasoning and AI’s output. This fosters metacognition and critical engagement. 

Posted in Case Studies

Episode 12: Talking to Students about Generative AI

A photograph of our two guests, Carol and Tom, our producer Simon, and our presenters Heather and Wendy.
Our guests, Prof Thomas Ormerod and Prof Carol Alexander, our producer Simon Overton, and our presenters Dr Heather Taylor and Prof Wendy Garnham.

The Learning Matters Podcast captures insights into, experiences of, and conversations around education at the University of Sussex. The podcast is hosted by Prof Wendy Garnham and Dr Heather Taylor. It is recorded monthly, and each month is centred around a particular theme. The theme of our twelfth episode is ‘talking to students about AI’ and we hear from Professor Thomas Ormerod and Professor Carol Alexander.

Recording

Listen to the recording of Episode 12 on Spotify

Transcript

Wendy Garnham & Heather Taylor:  

Welcome to the Learning Matters podcast from the University of Sussex, where we capture insights, experiences, and conversations around education at our institution and beyond. Our theme for this episode is Talking to Students about Generative AI. And our guests are Professor Carol Alexander, Professor of Finance, and Professor Thomas Ormerod, Professor of Psychology. Our names are Wendy Garnham and Heather Taylor, and we are your presenters today. Welcome everyone. 

Heather Taylor:  

Carol, how are you currently talking to students about generative AI, and what kinds of support or guidance do you provide to help them use it responsibly? 

Carol Alexander:  

Well, I teach two modules this term, one graduate and one 3rd year undergraduate. In the 3rd year undergraduate module, using generative AI is one of the learning outcomes. I teach them how to use ChatGPT or Claude. I prefer not to use Claude; it’s not great for finance. For finance, ChatGPT is clearly the best. I teach them ChatGPT in workshops, or at least the chap who’s doing the workshops does. We design the workshops so that, at the beginning, they understand the importance of context engineering: setting your project instructions well, or at least setting your basic personalisation and then project instructions, keeping things tidy with projects, managing so-called memory. It’s not actually memory, because every time you send a prompt, the entire previous conversation is attached to it. So it’s about realising that stuff early in the conversation can get lost, and the importance of recapping it, or telling it to forget certain things, because this context window has a certain size – a certain number of tokens, tokens here meaning bits of words and so forth. You might put in a very inefficient prompt, which is a few words long, and then it responds with 10 pages because it doesn’t really know what you want. And that is what you don’t want to do. You want to engineer each prompt to get an efficient output – not just efficient from the point of view of the students’ learning, but from the point of view of energy consumption as well. We go through all that, and then in the context of the financial risk management module, they do their workshops – not always using ChatGPT, because there are specialist GPTs, there’s Excel AI, and various other things. I’ve built a GPT called a Socratic learning scaffold, which helps them to learn anything new. And then I just set the project, which is 70% of the assessment for this. 
You know, what used to take days now takes hours because I use my Socratic learning scaffold – and it doesn’t have to be Socratic, but that’s just, you know, getting them to understand a bit of philosophy. Anyway, so they must think out of the box, whereas in the past I would not dare to set a project exercise they hadn’t already experienced something very similar to in workshops or lectures. But now I’m setting them something that they’ve not actually done before. It’s related, but it’s a little bit more advanced than what we’ve done, because they can teach themselves to do that, and that’s the first step of this project. 
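Carol’s point that chat “memory” is really the whole conversation being re-sent inside a fixed context window can be illustrated with a small sketch. The token budget and the word-count stand-in for a real tokeniser below are deliberate simplifications, not how any particular model actually counts tokens.

```python
# Sketch of why early turns "get lost": every prompt carries the whole
# conversation, and whatever exceeds the context budget is dropped.
# Word count stands in for a real tokeniser; the budget is arbitrary.

CONTEXT_BUDGET = 50  # pretend context window, in "tokens"


def approx_tokens(text: str) -> int:
    """Crude token estimate: one word ~ one token (illustration only)."""
    return len(text.split())


def fit_to_window(turns: list[str], budget: int = CONTEXT_BUDGET) -> list[str]:
    """Keep the most recent turns that fit the budget; older turns fall out.

    This is why Carol recaps important points: once an early turn is
    trimmed, the model simply never sees it again.
    """
    kept: list[str] = []
    used = 0
    for turn in reversed(turns):  # walk newest-first
        cost = approx_tokens(turn)
        if used + cost > budget:
            break
        kept.append(turn)
        used += cost
    return list(reversed(kept))


conversation = [
    "Set-up: you are a risk-management tutor.",                 # early instruction
    "Long digression " + "blah " * 40,                          # bloated middle turn
    "Recap: you are a risk-management tutor. Now explain VaR.", # the recap survives
]
window = fit_to_window(conversation)
```

Here the bloated middle turn crowds the original instruction out of the window, so only the recap survives – which is exactly the case for recapping, and for keeping each prompt efficient rather than letting the model pad the conversation with ten unwanted pages.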

Wendy Garnham:  

Sounds as though it’s like fostering creativity. And the idea of sort of getting them to take what they’ve learned before, but to build on it and do new things with some of that learning.  

Carol Alexander:  

Well, I mean, the project starts off with a paragraph: are you interview ready? Because they’re third years. And the job market out there – of course, we can get on to talking about that later if you like – is awful for white collar jobs, particularly, you know, finance in the City. Graduate roles are being replaced with AI; one person can do the output of five now. But if they are competent with playing this piano – not just having the piano, but making a very nice melody from it and knowing how to tune it for whatever they’re going to play – they stand a much better chance of getting hired. And this project itself, I hope, can be attached to their CV in some way when they apply for jobs. 

Heather Taylor:  

Yeah. Amazing. Do they like doing it? Have you had good feedback? 

Carol Alexander:  

I record all the lectures, for my sins, and I’m going to upload them to update my YouTube channel, because that’s very out of date – I did that in COVID, for the same module, so it’s now about five years out of date. So now, for every two-hour lecture, there are six prerecorded videos. And I go in there after Tom on a Monday morning – we share the same lecture room – and I just stand there, put up a Word Cloud on PollEv and say, what do you want me to talk about? And everything is totally interactive. I may look at lecture notes, but I very often just scribble. It’s like the old-fashioned way of writing on the blackboard is back, although I do it on the overhead projector. And I’m gauging the engagement this year, and I haven’t seen anything like it for years, particularly with Gen Z. You know? There’s eye-to-eye contact. Very few people are looking at their phones, because it’s interactive and they’re getting what they want. They’re asking the questions. 

Wendy Garnham:  

Music to my ears as an active learning fan.  

Heather Taylor (05:54): 

So, Tom, same question to you. How are you currently talking to students about generative AI, and what kinds of support or guidance do you provide to help them use it responsibly? 

Thomas Ormerod:  

So, we’re in a very different place today. A very different place indeed. I would say to many of my colleagues – and I apologise if I’m speaking out of place – it’s still a threat, not an opportunity. I see it as a huge opportunity, and I’m just trying to work with the students so that they feel that too. So, I teach a final year module as well, just before Carol comes on with hers. And I am in a position where I’m talking to our students about the positives of using generative AI. And I say to them, it would be irresponsible of us to let you leave university without skilling you to some extent in the use of what must be the most powerful intellectual tool since the invention of the computer.  

As a school, though, I think we are incredibly slow to pick up on this. We don’t teach them how to use generative AI. I teach them what not to do. I teach them that it’s really good for formatting – really good for all those boring things like, you know, getting the right APA formats, which we seem obsessed about in psychology. It’s really good for data cleaning, data processing, data handling generally. It’s really good for putting in an essay and saying, tell me what you think about this. It’s really good for things like ‘I’ve written something, can you bullet point it so I can see my own structure.’ It’s really good for having a conversation with, and that’s where we need to teach them how to have a conversation. And the one thing I say that you don’t want to do is ask it to generate your content for you, because we’re very much still essay based, text based: produce a reasoned argument about something. So I’m setting ground rules in my teaching, saying you really should use generative AI, these are the good things to do, and here’s the one thing you just don’t do. What I don’t do is say ‘because this is academic malpractice’. I mean, it is if you look at the regulations, but frankly, we’re not going to be able to tell, or at least prove, that people have generated material using generative AI. We can often tell, but we can’t prove it. So what I say to my students is two things. First, if you use generative AI to generate your essays, you’ll probably get a 2:2, because it will be alright, but it won’t be great. More importantly, when you are a forensic psychologist sitting in a prison on your first day under supervision, you aren’t allowed a computer in a prison. You must know the stuff. You must think about the stuff. That’s the moment you’ll wish you’d used generative AI in the right way. One of the things I’ve started to do is put in assessments that encourage them to think intellectually about the material.  

I have this thing where I get them to write a monograph chapter: they get four papers and they have to write a chapter that would go in a monograph, integrating across those four themes. And the one thing I put into this, you see, is that I say you must have an illustrative figure – some picture that shows the point you’re trying to make about those four papers. ChatGPT, even at its best, is bad at that, and the students really struggle with it as well, because it’s one of the first times they’ve been challenged to be truly creative intellectually in an essay format. I think we’re finding ways we can use generative AI positively that free us from the traditional essay format. And I think that’s my takeaway message to myself: rethink the assessment to fit the new tools we have. 

Heather Taylor:  

Yeah. I love that. I love the idea of that assessment, and the point you made about encouraging them not to use it to generate their content. And I think it’s possible they could get a 2:2, you know, but it depends on the topic, I guess. I sometimes ask AI questions when I’m bored in the morning – no-one else gets up when I get up, so it’s just me in the lonely world at 5am – and I ask it about things I know a lot about, and it gives me some nonsense back, like some real nonsense that would go against anything I would teach my students. 

Thomas Ormerod:  

Yes. I have a lovely example of this. You may have come across it before, and I don’t know if it’s still true, but it certainly was a couple of months ago. If you type into ChatGPT, draw me a picture of a beautiful room without an elephant in it, it draws you a picture of a room with an elephant in it. And it does that because it’s being clever. It knows the expression ‘elephant in the room’, so it thinks the elephant should be in the room – why else would you say to do it without an elephant in it? If you say draw a room without a pig in it, there’s no pig in it. So that’s the way you can test it. Now I use this as a teaching aid with my students, because I say to them, you can all see that that’s quite funny, right? It’s not doing what you asked. But what if you didn’t know what an elephant was? When you generate content, you don’t know that content. So how do you know it’s right? How do you know it hasn’t put in material that’s incorrect? How do you know it hasn’t put an elephant in your essay? I don’t know how powerful that is as an illustration, but they all go ooh. 

Heather Taylor (11:32):  

You’re right. And I’ve asked it to do a reconfiguration floor plan of my house before, where I moved the stairs so I could fit a bathroom in upstairs, and I told it all of this. I could probably get some tips from you on how to talk to AI more efficiently, to be fair. But it gave me a floor plan. It was 3D, which isn’t what I wanted. But anyway, the stairs were there, the bathroom was there – but the bathroom was downstairs, and the stairs ended in the bath. Yeah. And I was like, I won’t try this. 

Thomas Ormerod:  

And did you do it?  

Heather Taylor: 

Exactly. It was great. Yeah. But, yeah, it just decided, how can I fit what she’s asking for into this tiny space, which is my house? And it did do it, but in a completely unliveable way, because it doesn’t live in a house, does it? It doesn’t know. So yeah. 

Wendy Garnham: 

But it does it with papers as well, because I’ve done it with the students where we show them a paper. We ask it to summarise the paper, and it tells you completely the wrong information. It pulls together ideas from other papers and creates its own story. 

Carol Alexander:  

Yes. But you can control that hallucination. You can set up a project where the domain that it looks at is only the lecture notes. And then you can do your prompt more efficiently. You can use the code words ‘step by step’, for example. And also my personalisation, which sets the base-level instructions, is to always ask a clarifying question at the beginning of a conversation. 

Wendy Garnham:  

Yeah. I think without that input, though, students – certainly in foundation year – will just go to ChatGPT, put in the essay question or, you know, the blog post titles, whatever, and see what comes up. And they do tend to think that’s a useful summary, or, you know, that’s really saved me ploughing through. We use that as a starting point to say: don’t be tempted just to go away and put this question into ChatGPT, because you cannot guarantee that what comes out is going to be useful, helpful, or accurate. So that’s how we start that whole story. But then, of course, we do our active essay writing, which we developed with Tom, where we get them to start with their own ideas first. So, you know, what occurs to you when you see this as a title? What are the ideas you’ve got? What questions do you have about it? Try to have confidence in your own ideas first before you turn to anything else. 

Carol Alexander:  

And I was just thinking, you know, we’re in a transition period. Soon, the students coming into university will be better than we are at prompt engineering. You know, as I said, it’s like a piano or like a car: just because you’ve got one doesn’t mean you’re not going to drive it and have a crash. But this young generation are going to grow up with it, and they’re going to be absolute masters at playing it, like they are at video games. So we’ve only got the next few years to educate at university level; by that time, we’ve just got to cope with the few students that actually want to come to university being better at using ChatGPT than us. What value have we got to offer? Because you can teach yourself so quickly with ChatGPT or any LLM – particularly ChatGPT, which is so much better than the others. 

Wendy Garnham (15:06):  

That brings us nicely to our next question. I’m going to ask this to you, Tom, first. In your discipline, where do you see generative AI offering the greatest value and what are the key risks or limitations? 

Thomas Ormerod:  

I think it could, if done correctly, be quite democratising, because in the last 10 to 15 years there has been a huge change in the skills requirement for the behavioural sciences. In my day, you just went to the pub, recruited a few people to do little puzzles, wrote down what they said, and then published it. It was easy. These days, with the Open Science movement – which has basically responded to the fact that so much of psychological research just doesn’t replicate, because it was done by people like me – we have a much stricter approach to methodology. So there’s reams of stuff around pre-registration, there’s the use of the R environment for constructing your analyses, there’s a whole new range of statistical techniques, and a whole new range of qualitative techniques. I mean, wow, if you open up the world of thematic analysis, you find there’s a new thematic analysis method for every researcher in the world, as far as I can tell.  

And it actually means that a small subset of researchers are gaining access to journals and grants, because the rest of us can’t reskill ourselves quickly enough. So one of the promises, I think, of generative AI is that it can take an awful lot of the legwork out of that and allow us to do the interesting hard thinking that we want to do, properly supported by tools that say: hey, you don’t actually have to learn all these funny little commands in R that you always get wrong because you don’t put a full stop or a capital in the right place – we’ll do that. You think great things about how the mind works. That’s my fantasy for what we should be doing in psychology as a discipline: let’s use generative AI to help us think deeper thoughts, and to help us tackle questions like, how do you do belief revision? We’re so obsessed with the methodology of our experiments that we don’t spend a lot of time thinking about how people change their minds. And that’s the level we ought to be working at as scientists. But instead, we’re working at the level of how can I get my code to work? 

Carol Alexander:  

Yeah. Well, coding is a big plus in finance. You know, if we’re training students to go and work in the industry, the job of a developer – writing Python code and things like that – is more or less non-existent now, because LLMs write perfect Python code. You just need to know how to ask the right questions to get the right code out. But with the advantages in finance, I think you have to distinguish between the advantages for educational purposes and the advantages for research. On the research side, you know, it can do mathematical proofs. Basically, what we do is we get mathematical models of financial markets or financial institutions, and we get data, and then we apply the mathematical model to the data, and then we see whether our hypotheses about what should happen are true, with some statistical analysis on the data. The models that we run the statistical tests on are derived from our basic formulation, and sometimes it’s quite difficult to get from that basic formulation to a form of the model that you can actually test. But now, I mean, it’ll even write the basic formulation for me. So, you know, deriving an economic model of behaviour in financial markets is easy. Anyway, that’s the research side. But on the student side, it’s still the same, because of course we’re educating them to do what we do in research and in the profession, which is build mathematical models with some data.  

So, you know, as I said, there are these specialist GPTs – generative pretrained transformers, a particular type of LLM. There’s a whole library of these in the OpenAI product. There are also libraries of similar things in Claude or Perplexity, but the OpenAI ones are much, much more advanced. These things have been produced for years, and there are now, I don’t know, 20, 120, 200,000 of them. As I said, I made my own, so if I can do it, I’m sure loads of other people are. The most popular one is for generating your own horoscope, and the second is Semantic Scholar, which helps you write essays and things like that. But there’s a lot of Excel AI, for example. You upload some data, and you just – I don’t even write now, because the dictation is so good on ChatGPT. All my ums and ahs, and ‘hang on, I got that wrong’ – I just have a stream of consciousness, and even before I press send on the prompt, all the ums and ahs and the corrections I made magically disappear from the transcription. It’s extraordinary. I use that for writing emails; I just copy and paste it and put it in a prompt. Anyway, so I just have a bit of a mind dump and then put it into one of these Excel AIs. At first they used Python, then they just put it in an Excel spreadsheet, but now you click to view the formula in Excel. 

Thomas Ormerod (20:49):  

Yes. I did a similar thing just yesterday morning. I was prompted by Claude – it said, you’re doing this inefficiently, why don’t I build you an app? And it built me an app to do it. I had to process 40 different spreadsheets. It just said, I could do this one by one for you, as you seem to be thinking, but why don’t I build you an app? And that took the task down from a day to half an hour. 

Carol Alexander:  

The thing about Claude is it just creates these artifacts all over the place, and you can’t relabel your chats, and you can’t delete them. What you have to do is put them all in a project and delete the whole project if you want to tidy up those chats. But you can’t relabel them, so when you’re looking back, you’ve got no proper index of what you’ve done in the past. For me, it was just hundreds of artifacts being produced, most of which were useless. This is one example of an artifact that was useful. But had you gone direct to the artifact builder rather than letting it do it in a chat, you could have had a little bit more control over that artifact. You could have named it. 

Thomas Ormerod:  

I shall do that in future.  

Wendy Garnham:  

So that seems to relate more to sort of the value. How about the risks or the limitations? I’ll ask Carol first. What do you see as being the risks or limitations? 

Carol Alexander:  

Well, I mean, the singularity. You know? Artificial general intelligence, where the point where they get more intelligence than us in, whatever use, more powerful than us. I mean, I was just watching this podcast by what’s his name? I’ll just find his name – the diary of a CEO? He’s just done one with Roman Yampolskiy, I think. And he wrote lots of books. He’s completely mad as a hatter, I have to say. He believes, 100% that we are living in a simulation run by superior agents, you know. He sort of takes this idea of simulations to the extreme. And I mean, it’s true that there are some wonderful simulations. Apparently, Google 3, Google Earth 3, you can put yourself in the Grand Canyon and steer yourself or up the Amazon River. Steer your own route. And you can just imagine, when that becomes a 2-player game, you can be steering yourself up your Amazon river, and then your friend decides to put a Zulu tribe there or whatever. Not a Zulu, but you know what I mean. So you can sort of if you take that to the extreme, then yeah this simulation theory has some credibility. But no. I mean, I think what’s going to be happening is just as we become cyborgs more and more, right, I’m attached to my phone, and I do worry I lose it. I would much rather have it embedded in my arm, thank you very much. And other stuff like ChatGPT. I wouldn’t mind having that, as a sort of earpiece. You know, the Google glasses as well. We’re getting more and more like cyborgs. And then on the bioengineering side, you can imagine that, you know, it’s progress so rapid that human DNA or some brain cells could be put into a humanized robot. They’re getting quite agile now, although the Tesla one’s completely empty. But in Japan, you know, they’re getting some good robots. And so, you know, what we’re looking at is very high unemployment apart from plumbers, but maybe these robots could even be plumbers. 
But, you know, we’ve got a huge demand for construction, particularly if we’re going to go to war – the usual thing, that the economy rebounds because of building things for war.  

And we’ve been struggling since 2008 because of the bankers, and the bankers aren’t going away, although that’s another story. Anyway, so, yeah, there’s quite a lot of risk. I wouldn’t take it to that extreme, but I do see large unemployment for a while as jobs change. So white-collar jobs are going to be few and far between. Certain jobs, like hairdressers, will remain. You know, plumbers cost so much, and builders in Brighton are awful. I mean, we should start offering proper apprentice degrees in some universities – not all our degrees, but some universities like Sussex, I think, would be ideal for that, particularly since Brighton University used to be one of these wonderful technical colleges until Thatcher shut them all down and called them universities. We need to go back to that world where we are properly training construction people, and then it wouldn’t cost an arm and a leg because there’d be a greater supply. 

Thomas Ormerod (25:58):  

I think Carol’s right to point to a potentially dystopian future. If you think about it from an educational context, I think the big takeaway message there is fear. Because we’re learning the implications of the new technology, there are likely to be levels of fear amongst faculty and students around what’s going to happen. What will my degree mean? How am I going to assess people anymore if I use essays that they can write on computers? And I think that will be a transitionary phase, we hope. But I think we have to recognise it, because the anxiety that it’ll raise will be quite considerable.  

And I think that’s why this is a useful session, because the takeaway message from both Carol and me would be that there is more to be discovered that is useful than there is to be worrying about. Yes, we should worry, but we should worry positively. Okay, if we’re going to change the world of work, what are we going to change it to? Carol’s example is: let’s look at the construction trade. When I was doing AI research back in the 1970s – I was only about 5 at the time; that’s not true – the big worry was that robots would take over all the blue-collar jobs, because we could have a vacuum cleaner that could sweep a house, a vacuum cleaner that could sweep the roads. You could do all the blue-collar jobs. Exactly the opposite has happened, and it’s the white-collar jobs that are under threat. We must reconceive the world of work. We must think: what do we get people to do? In the same way that when the typing pool disappeared with the word processor, we had to reconceive what a PA does now – apart from, you know, buying the chairman’s birthday present for them – we had to rethink what those roles are about. And I think that if universities have a place, it’s to do that job. We need to rethink the future. And as part of our curriculum, we need to be sending our students away realising that their job is to rethink the futures of the trades they’ll be going into, not just come away with a bunch of skills that allow them to do what was good in 1973. 

Wendy Garnham:  

I think that also changes that idea of anxiety about ChatGPT and AI and where it’s heading, because I think you have to have a level of anxiety in order to be able to think about how it’s changing and how you adapt and so on. For me, the danger is if we don’t have that anxiety and we just ignore it or park it, and we think that we can just carry on as we’ve always done because that’s just how we do things. Sometimes having that level of anxiety means actually we are thinking ahead and we are planning ahead, and I think that can be seen. 

Carol Alexander:  

It’s the right level, though. You don’t want too much. 

Wendy Garnham:  

But you need some. Otherwise, you’re just stuck in a rut of doing things the way you’ve always done, while the advances are happening alongside and the two never connect. And for me that’s the danger: this sort of disconnect between ‘I know it’s happening, but I’m not going to change anything I’m doing’, whereas that level of anxiety sometimes can be a push towards thinking ahead. 

Heather Taylor: 

I was thinking about what both of you were saying, but also something you said earlier, Tom, about researchers – if we didn’t have to spend ages figuring out our code or whatever it happened to be, the researchers could do the more interesting thinking. And when I’ve spoken to people before about ChatGPT, I’ve said we should be teaching students to do the things that it can’t do. But really, I think of that as sort of philosophical questions that have got nuance. I don’t really think ChatGPT thinks. It tells me stuff. I don’t really view it as thinking in the way a human thinks. 

Thomas Ormerod:  

I think that what Carol does is a good example of what we should be doing, which is not expecting ChatGPT to think, but letting ChatGPT help us to think. When I’ve used it most successfully, I’ve had a dialogue, though we do seem to get kind of ratty with each other sometimes – we can’t help but anthropomorphise when we use it. If you get into that dialogue state, where you’re asking it to critique you, then you can get some really positive thoughts coming out of it, I think. 

Carol Alexander (30:52):  

I’m very interested in my emotional reactions to it, because I iterate all the time. You know, I start off with a prompt and it gives something, and then I look at it, and then I read out what I want to change, or what I don’t like – I dictate and press play – and then it goes on. And if it doesn’t progress after about three iterations, I get cross with it. And I say, you’re rubbish. I don’t like 5. Give me 4.1. You know? Thank goodness I’ve still got access to 4.1. And so I do choose a model. But I have to say that ChatGPT 5 does whir too much. You know, it’s slow. It thinks too much, even the Instant and the Thinking mini. But, anyway, I do like 4o. I like 4.1. It’s more of a workhorse, and I praise it. I say, ‘oh, gosh. I love you, 4o. You’re so good. Do you know ChatGPT 5 is awful? I wish they were all like you.’ 

Do you know that over 60% of the prompts to LLMs worldwide are on the human relationship side, i.e. asking for companionship, just basic chat, or therapy? Over 60%. I got that figure a few weeks ago from one of my Google feeds, which, of course, only feeds me what I want. 

Heather Taylor:  

That’s quite a concern. Sometimes I’ll quiz ChatGPT on obsessive-compulsive disorder, because I know loads about it, to see how good it’s getting and if it’s learning from me. You know? And it used to be rubbish. It’s a bit better now. But my concern with people using it as therapy is that there are specific models that are used to treat specific disorders, even specific sub-symptoms or themes within disorders. And using a different therapy for a slightly different disorder, or whatever it might happen to be, could cause a lot of problems. It concerns me when people use ChatGPT for therapeutic reasons, because there’s advice even with self-help books that, if you need extra support, you make sure you reach out for it, or you stop doing it. Uncomfortable feelings can come up. ChatGPT could just be talking pure nonsense to someone, which could be, you know, making their problems worse. 

Carol Alexander:  

There was a big law case against OpenAI. A young American chap killed himself. And then, around the time of the GPT-5 update, they put a patch on, so that now it’s so much more conservative. So, you know, it’s a bit like HR – HR finds something goes wrong, and then they issue all these rules that you’ve got to adhere to in your behaviour in the company. But then there’s going to be some other thing that somebody does outside those rules. So they haven’t changed the basic way that ChatGPT operates. But the good thing about Claude – and the reason I got a subscription for a month, which I used for about a week – is that Claude is different. It’s not a GPT. It’s an LLM that’s built on a code of ethics, sort of governance principles. It’s almost like a constitution: ‘thou shalt not do anything that would help people build chemical weapons’, for example, was a recent addition to its constitution. I thought it was a better way of building an LLM, and less prone to having to have these bits of string and elastoplast to keep it functioning. But then there was all this other stuff, so I cancelled. 

Thomas Ormerod:  

I’m just terrified about the idea of generative AI becoming like HR. It’s not the way I want to see it going.  

Simon Overton (34:53): 

I’ve got a slightly more positive take on AI, and it relates, if I may, to ‘The Work of Art in the Age of Mechanical Reproduction’, which was, I think, an essay by Benjamin? Walter Benjamin. Anyway, the point of that was that everybody was worrying that with mechanical reproduction, works of art like Van Gogh’s Sunflowers would become devalued, because everybody could get a copy and stick it on their, you know, bedroom wall. But the opposite happened. It didn’t decrease the value of the Sunflowers; it increased it. And I feel that a similar thing is happening. A slightly more up-to-date example would be the production of music and how, these days, people really value, for example, the experience of having their music on vinyl. And – I do a lot of YouTube – when you look at music producers and all sorts of other content creators on YouTube, they’re very keen to state that what they do does not use AI. I wonder, and I hope, that rather than devaluing the things that we most like about being human, or the things that we value – including perhaps a more traditional form of university education – it would increase the value of them, and people would appreciate them more. And perhaps that is the direction universities need to go in. People could do a Matrix and download the content into their head, like Keanu Reeves. But what can we actually do? What can we provide that is so much more human and so much nicer than that? I think and I hope that is the direction. I wonder if you two have any thoughts about that. 

Thomas Ormerod:  

I’ll start. I think that is an optimistic view, and I think there’s some truth in it. I think the problem you face is the volume issue. It’s a bit like the finest jewellery: it’s very, very expensive, very exclusive, and you know it’s the finest because of the hundreds of hours that have gone into it. You go to Ratners and you’re getting a very different product. One of the implications of what you’re saying is that a lot of the volume work – the soundtracks to adverts, that kind of thing – is going to be done by AI, because why wouldn’t it be? It doesn’t matter what the ownership of the creative act is there. Whereas your Taylor Swift music is still going to be written by humans, because Taylor Swift is a quality product and can charge premium fees. It’s almost, in a sense, the opposite of democratising it. You end up with a situation where an elite can afford human-generated materials, but the rest of us basically gotta go to Ratners. 

Carol Alexander:  

Yeah. I mean, I imagine people might prefer to have human accountants they have a relationship with, or human personal trainers or human therapists. But what LLMs are beginning to offer is that, for people who could not afford that sort of thing, at least a basic level of accounting; or people in prison who don’t get any therapy at all may at least, you know, get some help – or people on the streets.  

Thomas Ormerod: 

To come back to the positive side, though, there are interesting and curious opportunities. In fact, I just bought the Internet address for aint.com – AI, no thanks. Because I was talking to people who were working in recruitment, and they were saying one of the problems they’re getting is that companies want an AI-free product to differentiate themselves, given that so much CV processing, etc., is done by AI systems – which, we know, introduce huge ethnic, racial and gender biases, because they’re taking all their content from the world outside, and the world is biased, so they reflect those biases in what they do. Being able to signal that you are an AI-free product is a bit like being able to say gluten-free or ultra-processed-free or whatever. I think there will be a market, but it’s a niche market. And I think what we have to face is the volume issue: AI does it better at volume. 

Carol Alexander:  

And recruiters are in a terrible situation now, because all CVs are perfect. They’re all written by ChatGPT, and they all look the same. Some of them even use secret words: even though they didn’t go to Oxford or Cambridge, they put ‘Oxford’ in white type in the footer, and then they get through the algorithms that way. 

Heather Taylor (39:56):  

Based on what Simon was saying, I do get your point: you could have a smaller quantity and a higher-value product, and only a few people could afford it. But going back to what you said earlier, Carol, about how you upload these videos and then you go into your lecture and say, what do you want me to talk about? And then they get to decide, and you said that improves engagement and so on. And I can see that it would. Okay? And that’s also very confident and brave of you to do it that way, and it’s great – to just be able to go in and say, I’ll talk about whatever you like. I’d love to be able to do that, but I just waffle, like I’m doing now. But I think maybe that’s the answer. I think maybe Simon’s point is almost that we do need to know how to use AI – and I mean ‘we’. I know you two know. Right? I don’t really know. I just argue with it a bit. 

Carol Alexander:  

You should watch my YouTube channel. It’s Professor Carol Alexander, and the playlist – because there are some other ones there – is called A No-Coder’s Guide to Surviving Generative AI. 

Heather Taylor:  

Okay. I am going to look. I think we need to know it so that we know what it can do, so that we know what to offer students in terms of learning how to do this themselves. But also, once they know how to do it better than us, like you’re saying they will, we need to know what the things are that will be valuable for them – that AI can support but can’t replace. And I think you could have, at volume – it’s trickier, but at volume – more handmade, more bespoke degrees. So that if we didn’t have to teach – Jennifer Mankin will kill me for this, but she’s in our department – if we didn’t have to have whole modules filled up with teaching them how to code in R, and if I didn’t have to teach how they’re going to structure their research report, I could just focus on their thoughts – their nuanced, individual thoughts about the things they’re writing about – or I could get them to come up with a new research idea, or whatever it happened to be. If we could move away from these things because AI took care of them for us, I feel like it would be a more engaging degree for them that could still be taught en masse. Okay? You know, you’d have to have people who are quite specialist in certain areas to be able to do that, like you would be with finance, to be able to walk in and say, ‘what do you want me to say?’ 

Carol Alexander (42:39):  

How are you going to fill up an entire degree, and do you want to? If the jobs are not there, we’re not going to get students applying. 

Heather Taylor: 

I didn’t think about it from that point of view. If the jobs aren’t available, the students aren’t going to come to uni for that reason.  

Carol Alexander: 

But if we append an apprenticeship – it may be as an electrician or, you know, a plumber, as I said before – they can get jobs in that, but they still want to come to university as a transition between home and the real world, where they expand their mind. They expand their understanding.  

When I was at Sussex as an undergraduate, I was a science student. I did maths with experimental psychology, and we all had to take an optional arts subject. I took witchcraft. But the point is that some students in, for example, civil engineering may still have jobs there, but a lot of their degree would be taken up by them just expanding their mind and deciding, okay, I want to take a module in blockchains and crypto assets, or something like that.  

Thomas Ormerod: 

It’s interesting that in the white paper that came out the day before yesterday, the government is moving us towards this kind of cafeteria style of learning, which has strengths and weaknesses – or at least it has threats. It has huge threats to the model we have now, but it also has opportunities for people to equip themselves with the skills that will get them jobs. 

I think even at a finer grain than that, one of the challenges for us in academia is to work out what skills we want people to have. In psychology, the principal skill we’ve thought people should have is to take large amounts of information, crystallise it, pull out a point, critique it, and then design something on the basis of that. Write stuff, to put it simply. Nowadays you could argue that one of the skills we should be sending our students away with is editing. Yes, you’re going to get the stuff out from ChatGPT; your job is to edit that to do the job you want it to do, because it won’t be perfect. It won’t be accurate. It won’t be complete. We should have a course on editing.  

Wendy Garnham:  

I’m just thinking – I went to a conference in the summer where they were talking about how they saw the future of generative AI, and one of their arguments was that eventually it will collapse in on itself, because you’ll be entering information into it that it has already given you, to feed the algorithm, so it just becomes less of a useful tool. I’m interested to hear your views, both of you, on that. 

Thomas Ormerod:  

There is a fix for that, which is that you would have a ChatGPT 6 that is self-reflective. I already know that; I’ve seen how this gets used. In a sense, putting confidence levels on its information and using those as part of its updating, not just updating on the basis that here’s another mention. I think that is a genuine problem with our current understanding of the technologies. But once it becomes a problem, I think it could be fixed. Don’t you? 

Wendy Garnham:  

Looking five to ten years ahead, what psychological or behavioural changes might we expect from the widespread use of generative AI in everyday life? Tom. 

Thomas Ormerod:  

Well, optimistically, we will see people feeling more empowered, because they know they have a resource that can solve a lot of their everyday problems, and they can do bigger things, better things – the technologies will have evolved in ways that make them much more interactive, so that they can start helping you to construct your ideas, construct your arguments, deal with your problems. That’s the optimistic end. The less optimistic side, behaviourally, is that we begin to shut down because we lose confidence that we have anything to offer. And we do less, because other things can do more than we think we can.  

I’ve just finished a project funded by the Educational Enhancement initiative in the University, which wasn’t looking at generative AI; it was looking at plagiarism and what it is that encourages people to plagiarise. We did an intervention where we gave people essays to mark, half of which had been plagiarised and half of which hadn’t. And we found quite quickly that people gave the plagiarised ones better marks, because they looked better. The text was better. Then we showed them what we would mark them as, and how the plagiarised ones did really badly, and there was an immediate effect: we measured the Turnitin scores, and they went from an average of about 15% across the cohort to 2%. And I think there is this issue that people will lose confidence in their own ability to generate good material. As academics, we must counter that. We say, do you know what? These tools put an elephant in the room when you don’t want one. Your writing – even if it’s colloquial, even if it’s full of spelling mistakes or a few naughty words – we like that better. We prefer you to think for yourself and show that you thought for yourself than to get your grammar right, like what I’m doing now. 

Carol Alexander: 

I mean, I just think it’s going to act as a sort of magnifier of inequality. There’ll be those who don’t use it at all, and there’ll be those who just give up, because they use it but don’t feel they can add anything by using it. And then the few – the elite who manage to increase their productivity so that what used to be done in a day is now done in an hour – are the ones who are going to thrive in every way. So, yeah, inequality is just getting worse. 

Heather Taylor: 

And if I watch your YouTube video, I can become one of those elite. 

Carol Alexander:  

That’s the idea. 

Thomas Ormerod: 

The challenge there that I’ll set you, once you’ve watched that, and given that you’re educators: if you’ve been in prison for fifteen years, you don’t really even know what Microsoft Office is. You don’t know what email is. You don’t know what the Internet is. How will we use generative AI to help people rehabilitate themselves into society when they’ve got that huge gulf in their knowledge? I say this because I’m on the parole board, and I see these people coming up once a week when I’m on a parole panel. And they’re fascinated by our use of computers. We say to them, we’ll be looking at two screens, and one of them says to me, what’s a screen? What’s a keyboard? These people have been in prison for years – they’ve gone in at the age of sixteen, and they’re, like, forty-six, thirty years in prison – and they don’t know anything about this. It’d be a really interesting, difficult problem to solve. How can you use generative AI to get somebody back into the community after being out of it for so long? I don’t know the answer; I throw it out there. 

Heather Taylor: 

It’s a great question.  

Wendy Garnham: 

Yeah. How do you start with that one?  

Heather Taylor: 

I would like to thank our guests, Carol and Tom. 

Carol Alexander: 

Thank you very much. 

Heather Taylor: 

And thanks for listening. Goodbye.  

This has been the Learning Matters podcast from the University of Sussex created by Sarah Watson, Wendy Garnham, and Heather Taylor, and produced by Simon Overton. For more episodes as well as articles, blogs, case studies, and infographics, please visit: Podcast | Learning Matters 


The Scholarship Journey: Cultivating Scholarship Through Growth and Connection during Scholarship Leave

A woman (Jo Tregenza) with shoulder-length brown hair wearing a black-and-white checkered shirt, standing in front of a light wooden panel background

Jo Tregenza is a Reader in Primary Education at the University of Sussex, and a Senior Leader with a demonstrated history of working in higher education. She is skilled in Primary Education, Teaching English, E-Learning, Lecturing, Teaching and Higher Education; a Senior Fellow of the Higher Education Academy; a Founding Fellow of the Chartered College of Teaching; and President of the United Kingdom Literacy Association (UKLA). She has received several teaching awards, including the 2018 Innovative Teaching Award for the whole ITE team, the USSU Teaching Award 2016, and an Outstanding and Innovative Postgraduate Teaching Award.

What inspired you to frame scholarship as a journey and use the growth metaphor throughout the presentation?

So I just can’t get away from using plants and nature. I think it’s partly because the organic process is really in my mind. I’m never terribly strategic about it; it sort of grows and develops. I first became a teacher because I wanted to teach children how to see daffodils, and that sort of theme has threaded through.

So then when I started teaching students, I gave them a seed or a bulb every year, and I’ve planted one for every one of the students I’ve ever trained. I’ve got a lot of bulbs in the garden now – it’s very pretty in spring.

And this just keeps coming back. The model I’m coming to with my PhD is of teaching reading as a tree, where it starts with the roots and soil and grows, and so I just think it’s how my brain works.

And I want it to be seen as a cycle, almost. It keeps going – there’s that cyclical approach to it – but it’s also the nourishing, I think: the soil, the sun, the water, everything that comes in to nourish it.

How did you decide which personal experiences to include and what impact do you hope sharing them will have on your audience?

I was trying to aim at the people who, maybe, were on scholarship and thought they would never ever get promotion or anything, and to show that you can – because, as I say, I’ve got to this stage without a PhD. Hopefully, I’ll have one soon, but I wanted to make people feel that there was a route within the university. It might take a long time, and in my case it has been a very long time, but I think the things that have happened more recently are making quite substantial differences.

So you can see this sort of slow journey in what I put. The key thing was the policy that gave us access to study leave, and that’s a massive change for us. But I’m well aware it’s not going to be consistent across the university, and given we’re making faculties, the hope is that we could have that sort of thing consistently for everybody, which would be really good. So that was a key message.

What impact has this had on student learning, curriculum design, or academic practice more broadly?

So it turned out that we were changing all our modules anyway – my English was merging with maths. I wasn’t going to rewrite everything; I thought, oh, I’ll just merge it with maths, that’d be fine. But because of the work I’d done and the research I’d been looking at, in reading particularly, and the connections I made, I ended up completely rewriting everything, which I’m regretting today because it’s an awful lot of work to do.

I think I was always aware of what was going on outside, but now I’m deeply involved with what’s going on in the curriculum and in other pockets. Because of my study leave I’ve become very involved in the anti-racism work, very involved in play, and very involved in AI. All of those I’ve embedded completely into what we’ve been teaching, and disadvantage has become a theme right through the modules as well. So it has affected curriculum design and what we’re teaching the students, and they’re loving it.

They’re buzzing which is really nice. You know, I keep meeting them and they want me to teach them every day and it’s lovely so they’re really liking that it’s very current but it’s also very embedded in practice.  

In terms of academic practice more broadly, that’s where the networks come in, because I’ve been feeding all of that into a network of special interest groups of ITE providers across the country. I’ve already led one session with them, and because of the connections I made, I’m bringing in speakers. So I’m trying to affect, for example, the way universities teach writing for primary schools, and I’m doing that by bringing people in from Ireland and just trying to shape things a little bit.

You emphasized collaboration networks. What strategies work best for building those connections during your scholarship leave?

So LinkedIn was life-changing for me. I had it and didn’t really use it – I thought it was just a bit of a pointless thing. My daughter started nagging me, saying, mum, you have to change your LinkedIn profile, and she started to lecture me on how to use it, and I had to listen in the end. And it really did change things, which meant she was right – very annoying.  

I started to really build those connections far more succinctly, and LinkedIn led to all the connections in play, all the connections with anti-racism – all of that came from it. In fact, I’ve now met with the CEO of First News this week as a result. So all of those have really built.

I had the advantage of having a few very strong networks anyway, like the UKLA, but I was able to put more time into them, so I set aside time in my study leave to plan the conference. I couldn’t have done that without having study leave. I did it the previous year without study leave, but I ran the conference here, and that was manageable – not in Liverpool. So yeah, LinkedIn was the main thing, and then just nurturing those connections was probably the most important.

Looking back, which part of your plan was most challenging to achieve, and what would you do differently next time?

Oh, you know it – it’s the National Teaching Fellowship. I still haven’t done it. I’d arranged with Claire from the medical department, who was going to help me have a look at it, and I’d got everything arranged, but I just didn’t have time. I set myself too many targets in that respect, and something had to give, and it was that.

It’s a huge amount of work, and that, I suppose, is what I would have done differently: I would have blocked maybe a month of time that was purely for that. I think that’s the way to do it – block the time. But something else would have had to give, and it would have been the anti-racism stuff, because that sort of grew, and I knew it was an indulgent passion in a way, but I felt it really needed to be done. And it’s paid off, because that is now going to be something national, so it’s important. We have to do it. It’s organic, as I say – it’s growing a kind of branch for the moment.

What do you see as the long-term impact of your scholarship work—on your own career, your institution, and the wider academic community?

It would be nice to be able to apply for a professorship – I do feel I’ve got more than enough evidence for that now, so I’m hoping that will work. But the wider thing for me is that I’ve already been invited to Dublin as a guest at their celebration conference, and to Norway and Slovenia, and they want me to present my work.

The work I’m doing with my PhD is going to be significant – I know it is, even though it’s only three schools. It’s challenging what’s happening in the world at the moment. Everyone’s saying there’s a problem with reading, everyone’s saying we’ve got so many disadvantaged children, and our government’s answer is more phonics tests. What I’ve got challenges that, so I’m really well poised.

Hopefully now I’ve got enough that I can get small papers out, so I’m going to present in Cyprus hopefully in February, and in Slovenia and Glasgow in July. So I’m getting things out, and I was able to present to the DfE a little while ago, which pushed me ahead of my PhD timeline because I knew I needed to get it out into policy. So yes, that’ll be the plan.


Conversations on teaching for community and belonging: Our blog collection

Dr Emily Danvers

Conversations around teaching rarely happen beyond formal training opportunities that often take place early on in our careers. After this, and especially during term-time, space for talking in our busy working lives is often limited. Incidental corridor chats are seen to generate a collaborative and positive working culture and, for some, were mourned during the pandemic and its aftermath. Indeed, post-COVID (or maybe this was always the case?), we rarely have time to talk to our colleagues about anything at all. When it comes to teaching, who do we approach when things go well, or not so well? Who is talking? Who is listening? And who cares? This was the premise for our project, drawing colleagues together across the then newly formed Faculty of Social Sciences, to talk about how we facilitate conversations about teaching, and our wider working lives, to enhance a sense of community and belonging for staff.

The conversation theme was inspired by Jarvis and Clark (2020), whose work emphasises how the informality of a conversation about teaching flattens power relations and allows people to make meaning together without the intensity of an agenda or outcome. They position this work in contrast to formal teaching observations, with their traces of surveillance, performance and measurement. Too often we rely on these individualised encounters to ‘develop’ teachers, when in fact authentic conversations might more meaningfully transform teaching, where colleagues hear something, share together, and are inspired by each other in the everyday. This reflects Zeldin (1998, p.14), who notes that: ‘when minds meet they don’t just exchange facts; they transform them, draw implications from them, engage in new trains of thought. A conversation doesn’t just reshuffle the cards; it creates new cards.’

With the provocation to encourage transformation through talking together, Emily, Suda and Verona set up an initial launch event for the whole faculty to generate some initial conversations about teaching. The topics that emerged from this focused on the following questions.

  1. Why is community and belonging important for diverse academic flourishing?
  2. How and where is community and belonging created and developed?
  3. How might the labour of community and belonging work become visible, valued and rewarded?

The 12 colleagues who attended were afterwards put in cross-faculty threes and connected by email, with suggestions that they meet again to continue these conversations. As project leads, we were deliberately hands-off at this point, as the purpose of this project is to see if and how these conversations form organically.

A couple of months later, five of us met to blog together for the day about our responses to the questions, along with other themes that came up along the way. What we share in this blog collection is the story of our collaborative conversations.

In Jeanette and Fiona’s blog, they talk about what we can learn from student collaborations, which are often and rightly prioritised in the work of higher education.

In Suda and May’s blog they write about the value of time and space to slow down the academic pace and to generate community.

In Emily’s blog, she talks about the joy and challenges of teaching across different disciplines and how collaborations are structurally challenging.

What we learnt from this project is the ethics and timeliness of the conversation format, as a collegiate response to the complex and evolving challenges facing the sector, our students and ourselves as teachers. We all relished time to talk and think about the uncertain, the tricky, the everyday, the thorny, the unequal, the caring and uncaring practices – all of this important ‘stuff’ that sustains us as teachers but has no space in our working lives. We also did not only talk about teaching but about other collaborations that we value.

Our recommendation is that teaching (and other) collaborations should be exploratory and conversational rather than only a tool for appraisal. What we are seeking is regular, open and meaningful dialogue about teaching and academic working lives that is not ‘done to academics at the behest of institutional leaders’ but consists of conversations ‘with or among colleagues, characterized by mutual respect, reciprocity, and the sharing of values and practices’ (Pleschová et al., 2021, p.201).

References

Jarvis, J., & Clark, K. (2020). Conversations to Change Teaching. (1st ed.) (Critical Practice in Higher Education). Critical Publishing Ltd.

Zeldin, T. (1998). Conversational Leadership. https://conversational-leadership.net/ [Accessed 15.07.2025].

Pleschová, G., Roxå, T., Thomson, K. E., & Felten, P. (2021). Conversations that make meaningful change in teaching, teachers, and academic development. International Journal for Academic Development, 26(3), 201–209.

Getting the ‘social’ into Social Sciences: how can we learn from LPS student initiatives to build cross-faculty relationships?

Jeanette Ashton and Fiona Clements

The broader context

When the new faculty structure at Sussex was first mentioned, discussed further at university-wide forums and School and Department meetings, our reaction was perhaps similar to many others. Whilst ‘indifferent’ might be too strong, our thinking was that this was a decision taken at a university leadership level which probably wouldn’t change much on the ground, aside from potential pooling of resources in a challenging higher education climate. We felt that any changes we needed to make would filter down in time through our Head of Department, but that, in short, it would remain ‘business as usual’ for the Law School. The ‘Conversations on Teaching for Community and Belonging’ initiative by Emily Danvers gave us an opportunity to explore how the new faculty structure might enable us to develop supportive relationships with colleagues outside of our department, what that might look like and how it may help us navigate challenges going forward post the voluntary leavers scheme.

In the last few years of her role as Deputy Director of Student Experience for LPS, Fiona developed a number of initiatives which were well-received by the student body. In this piece, we consider how we might draw from those initiatives to develop a faculty-wide space, but with a staff rather than student focus. That is not to say that building faculty-wide student relationships is not important, but that, as Ní Drisceoil (2025) discusses in her critique of what student ‘belonging’ means and who does that work, staff community and belonging often takes a backseat. We conclude with some thoughts as to how we might move forward.

Community and belonging sessions for students in LPS – what did we do and why did we do it?

For the last couple of years in LPS, we have hosted a weekly breakfast or lunch event for students. We know that many of the students find that the peers they share their first-year accommodation with end up being some of their closest friends, both at university and beyond. Friendships are also forged at departmental level but, in addition, we wanted to give the students an opportunity to come together, informally, and meet students from other departments in the school.

The get-together would happen in the same room, the student common room, at the same time each week. As the Law School runs a two-week timetable, this meant that different Law students would be available to attend, depending on whether it was an even or an odd week. There was a small group of students from each department who would come every week, but we also had new faces at every get-together: students who had heard about the event, or students who just happened to be in the common room when it was happening.

When we asked the students about their motivation for attending, they gave a variety of reasons. It was interesting to note that some of the students came to the event with the intention of seeking advice (perhaps about managing workload, or how to tackle their reading, etc.). The students found that having a casual conversation with a member of faculty whilst sharing some food was a preferable option to pursuing the more formal route of booking an office hour with an academic advisor whom they may know less well.

LPS Staff events: what can we learn?

In thinking about faculty-wide initiatives, it’s important to consider what is already happening in schools and departments and what we might learn from that. In LPS, we have online school forums, which are well-attended by both academic and professional services staff and a useful way to catch up on what’s happening at School level. In terms of more socially oriented events, we have regular ‘Coffee and Cake’ sessions, which are not well attended. Without undertaking a survey, we can’t provide reasons for this, but it may be that this being a Head of School initiative, booked into our calendars, gives the impression that this is a space where we might be able to socialise, but not share concerns or ask ‘silly’ questions. Time is of course also a factor, with events such as these falling down the priority list as we juggle competing responsibilities. An open-plan office space for professional services staff is perhaps more conducive to those conversations than the academic offices, so it could be that a faculty-wide space for academic staff would provide an opportunity to have those conversations, with the benefit of perspectives on what happens elsewhere.

What might be possible in the new faculty? Some ideas:

· Twice monthly scheduled spaces at different times on different days, and starting on the half hour, to maximise faculty availability

· A small cross-faculty team to rotate the ‘host’ role and spread the word in the different departments. As discussed above, a friendly facilitator was pivotal to the success of the LPS student initiatives

· Keep organisation minimal, and be clear that this is not a leadership initiative

· Clear comms on the purpose of the space: to drop in, meet people, share ideas and concerns, ask questions in an informal space without needing to schedule a meeting

· No need for food! No need for themes!

· Don’t be discouraged if no one turns up. These things take time.

References and further reading

Ní Drisceoil, V. (2025). Critiquing commitments to community and belonging in today’s law school: who does the labour? The Law Teacher. DOI: 10.1080/03069400.2025.2492444

For more on community and belonging for students see Moore, I. and Ní Drisceoil, V. ‘Wellbeing and transition to law school: the complexities of confidence, community, and belonging’ in Jones, E.  and Strevens, C. (Eds.) Wellbeing and Transitions in Law: Legal Education and the Legal Profession (Palgrave Macmillan 2023).

Slowing Down the Hamster Wheel: Space to Reflect and Create Communities.

May Nasrawy and Suda Perera

A cartoon hamster runs on a wheel labeled with academic pressures like “Publish,” “Teach,” and “Grants,” surrounded by the word “STRESS!” and thought bubbles promoting reflection and community.

Since we’ve been teaching at Sussex, it’s felt like we’ve been in a state of perma-crisis: strikes, pandemics and financial losses have all contributed to a sense that we are on the brink of imminent disaster and need to react quickly to avert an impending collapse. In this context, a lot of pressure is put on us as individual academics to do more and more with less and less. We need to teach more students and give them extra support even though there are fewer resources. We need to bring in more grants even though funding sources are shrinking, and publish more research in an increasingly narrowing field of “world leading” journals. Failure to do this, we are told, is an existential threat to the University and could result in more job losses, including our own. This sense of existential dread has meant that many of us feel like hamsters in a wheel – desperately scrambling from one task to the next in an attempt to just keep going and hope that eventually things will calm down. But the calm never seems to come, and in this highly individualised and reactionary wheel of toxic productivity, we seem to have lost a sense of community and belonging. In this blog we consider: Where in this endless cycle of work and crises is there space to think and reflect on why we’re doing this, both as individuals and as a community? How can we break the vicious cycle of individualism and reaction and instead foster an environment where there is space to think and reflect in a collective and collaborative way to build the kind of University that we want?

By participating in the Conversations on Teaching for Community and Belonging, we have come to realise that there is a community of like-minded staff members who feel similarly, and that the answer to these questions begins with time and space. Time to step away from the hamster wheel of toxic productivity. Space to reflect on our individual identities and sense of purpose. Space to support and be supported by our colleagues. And from that space to foster a wider sense of community and belonging. This space requires us to have protected and meaningful time to just think and discuss with each other these bigger-picture and wider issues, which are not easily captured in bureaucratic processes. So much of the day-to-day running of the university relies on labours of caring and collegiality, and yet so much of this labour is hidden and not celebrated or even spoken about. We don’t want these spaces to be just one-off lip-service events or individualised awards, but rather collective spaces to talk through issues and share experiences with no expectation of a measurable output. By setting aside time for reflection, we argue, we can move away from this feeling of constant reaction to immediate crises.

In the short time that we’ve had to engage in conversations with one another in this small project, we have been able to learn about what colleagues across faculties are doing in their teaching and research, and also share experiences that point to issues of both concern and hope. We have been able to foster a sense of openness precisely because there is no sense that we are in competition for some sort of reward at the end, or that we have to produce something to demonstrate “value for money”. While we appreciate that much of what we do on the hamster wheel of productivity is part of the job, we argue that it shouldn’t take up all the space, and should not be moving us away from other essential elements of our practice that require us to slow down to reflect, learn, collaborate, feel, care, read, and think.

Cross-faculty teaching: favour culture vs collaboration

Emily Danvers

Across the faculty of social sciences, many of us share research interests, professional expertise and academic knowledge that shape the topics we teach. When we met to collaborate on this project, for example, we found most of us teach about issues related to education, social justice, and globalisation. Yet we rarely teach outside the confines of our disciplines and departments. And where we do, it is through favours and friendships, rather than anything structurally organised. Our compartmentalised teaching arrangements often produce a culture that can work against collaboration.

A couple of years ago, Jeanette and I met through a shared interest in education for those of Gypsy, Roma and Traveller heritages. This was an area I research, and Jeanette had just started a community legal education initiative, Street Law, in partnership with the community organisation Friends, Families and Travellers. She asked me whether I’d talk to her students about teaching. Of course, I’d love to. I liked her. I liked the project. It was my area of expertise. Why not?

I’ve since done this a couple of times and get a huge amount from teaching Law students whom I would normally never get to meet. Thinking about their contexts, disciplines and experiences and translating my pedagogical knowledge to them is also a useful exercise in understanding what and why I prioritise as an educator. But it isn’t in my workload. I don’t have to do it, and I am not directly ‘rewarded’ for it, in the very narrow sense of my own time. Jeanette confesses in our collaboration project that she feels guilty about asking me. But we work in the same university and now in the same faculty. Why shouldn’t we teach across these artificial academic boundaries?

This raises questions about how much of academic life might be sustained by these sorts of favours. It reminds me of the complicated emotions of gift-giving, where a gift arrives with surrounding norms of exchange and appreciation. On the one hand, forging positive relationships and having reciprocal practices of care are important ways to navigate academic work and its pressures (Frossard and Jeursen, 2019). Doing this cross-faculty teaching was joyful and enriching – a ‘gift’ to me as well as Jeanette. Also, an academy where we only did what was in our job description would surely fall apart!

But, on the other hand, these practices lead to under-recognition of labour or overwork. Academia has long been organised into silos – whether departments or modules – producing a sort of bento-box style organisation rather than a rich, interdisciplinary tasty stew. It is only when trying to foster collaboration through teaching across departments that we notice how the structures and cultures produce or preclude the kinds of interdisciplinary work we may find personally enriching.

In reflecting on this experience, what becomes clear is the tension between the joy and enrichment of interdisciplinary collaboration and the structural barriers that make such collaboration exceptional rather than expected. While cross-faculty teaching can feel like a ‘gift’—personally fulfilling and intellectually stimulating—it also reveals the fragility of a system that relies on goodwill rather than institutional support. When collaboration is sustained through favours rather than formal recognition, it risks becoming invisible labour, disproportionately carried by those with the capacity or inclination to give more than is required. If we want to foster truly interdisciplinary, socially engaged teaching that reflects our shared academic interests and values, we need to rethink how work is recognised, rewarded, and organised. Moving beyond the bento-box model of academic life will mean embracing new structures that not only encourage, but also sustain, collaboration across boundaries.

Frossard, C., & Jeursen, T. (2019). Friends and Favours: Friendship as Care at the ‘More-Than-Neoliberal’ University. Etnofoor, 31(1), 113–126. https://www.jstor.org/stable/26727103



Episode 11: Attention, Sleep, and Learning


The Learning Matters Podcast captures insights into, experiences of, and conversations around education at the University of Sussex. The podcast is hosted by Prof Wendy Garnham and Dr Heather Taylor. It is recorded monthly, and each month is centred around a particular theme. The theme of our eleventh episode is attention, sleep and learning, and our guests are Dr Sophie Forster, Reader in Cognitive Neuroscience, and Dr Giulia Poerio, Senior Lecturer in Psychology.

Recording

Listen to the recording of Episode 11 on Spotify.

Transcript

Wendy Garnham: 

Welcome to the Learning Matters podcast from the University of Sussex where we capture insights, experiences, and conversations around education at our institution and beyond. Our theme for this episode is Attention, Sleep, and Learning. And our guests are Dr Sophie Forster, Reader in Cognitive Neuroscience, and Doctor Giulia Poerio, Senior Lecturer in Psychology. Our names are Wendy Garnham and Heather Taylor, and we are your presenters today. Welcome, everyone.

Sophie Forster: 

Hello.

Giulia Poerio: 

Hello.

Wendy Garnham: 

Sophie, could you start by telling us a little about your research and what drew you to this area?

Sophie Forster: 

Yeah. Sure. So I study attention. I tend to be particularly interested in understanding what’s going on in our brains when we do things like get distracted or, like, don’t notice things. What drew me to this area, I suppose, my sister was diagnosed with ADHD as a child, and I kind of saw how that really affected her at school. And, I always struggled a lot with attention as well. And it wasn’t really until I’d already been studying distraction and how impossible it is to pay attention, and, all of these problems, and publishing papers on ADHD that I realized that I also have the extremely heritable condition that my sister is diagnosed with. So, yeah, that was kind of how I got into it.

Wendy Garnham: 

And same question to you, Giulia.

Giulia Poerio: 

So I’ve studied lots of different things including sleep and I’ve done some research on attention or specifically daydreaming, mind wandering, similar to some of the work that Sophie’s done. My interest in sleep has kind of really been because I sleep quite poorly. And I noticed the effects that it has on things like, you know, cognitive function, emotions in particular, and, all of those sorts of things. So I recognize how kind of important it is for people’s well-being and it affects everything. And we spend a really long time doing it. So it’s really, really important. So I’ve recently started working on, a grant that looks at sleep, in depersonalization and derealization disorder, so symptoms of dissociation. So a lot of my work at the moment is focused on that.

Wendy Garnham: 

Not a problem that I’ve experienced yet, not being able to sleep.

Heather Taylor: 

Giulia, how has your research shaped your approach to teaching and supporting students?

Giulia Poerio:

I don’t know whether I’m allowed to say this, but I encourage students to—so I teach a module on sleep and mental health for 2nd year psychology students. And I encourage them to understand what their own individual sleep preferences are, particularly around the timing of sleep. So whether they’re a night owl or a morning person. Most of us tend towards evening type in adolescence. So I say to students, if you are an evening type or an extreme evening type, don’t come to my lectures if they’re early in the morning. Catch up on the recording when you want to. Because students coming in completely sleep deprived for a 2 hour lecture, at a time that is not suitable for their body, I personally think is a waste of their time.

Heather Taylor (03:30): 

I’ve heard about this from students. [Have you heard about this?] Yeah. And I can’t remember who it was, actually. But you know what I mean. If you’re going to be lecturing on this and saying how important it is to recognise what your sleep patterns are and when you’re most awake and alert and so on, then, yeah, they like it. Obviously, they like it. We should look at the attendance rate. I actually reckon your attendance won’t be worse as a result of it. It might even be better.

Giulia Poerio:

I mean, I have a problem with students being forced into going to lectures and the attendance thing anyway because, you know, a lot of students experience sensory problems. I experience a lot of sensory problems myself, and I would never be able to concentrate in a lecture ever. And so I think uni should be about understanding what your learning preferences and styles are and, you know, taking control of that and doing it that way. I mean, that’s probably not very popular with universities but that’s my personal view.

Heather Taylor: 

It’s popular with students. Yeah. 

Wendy Garnham:

As an extreme evening type, I wonder if I could use the same philosophy for meetings at 9 o’clock.

Giulia Poerio: 

Yes. One of my old supervisors had delayed sleep-wake phase disorder, which is extreme evening chronotype. I never saw him before 3 PM. And if I did, he’d done an all-nighter to be up in the morning. But, yeah, I tend not to have morning meetings either.

Heather Taylor:

We were talking about that, but I’m like a super early person. Like, I’m a 4 AM person. 5 AM feels like a lie-in. And you’re right, I wasn’t like that when I was younger, if I had the choice, until I had to be. So there’d be such a mismatch between me and them, because I’d have them all in at 7 AM if I could. But I wouldn’t, obviously, because they’d all hate me and no-one would turn up and no-one would do any work. But, yeah, it’s tricky, do you know what I mean?

Giulia Poerio: 

Well, there is research on this. There’s some research that looked at what the ideal times for university lectures would be, based on undergraduate students’ chronotype. And it suggested that, on average, taking individual differences into account, students should be waking up at 9 or half 9, starting lectures anywhere between 11 and 1, and then going to bed at, like, half past 1 in the morning.

Heather Taylor:

Oh that’s fine. I’m awake in the middle of the day.

Giulia Poerio: 

Yeah. So if we did that, it would prevent students from being chronically sleep deprived.

Heather Taylor: 

So, same question to you then, Sophie. How has your research shaped your approach to teaching and supporting students?

Sophie Forster (06:02): 

So I think, because I study attention, I’m aware of, firstly, the massive impact of attention on learning. There are studies where they followed children from the point they start school to the point they finish school, and attention is one of the main things that predicts how well they’ll do: their attention at age 5 will predict how well they’ll do at age 17. Even when you control for other things like baseline maths and English, it just has such a huge impact, and that’s because it’s really the way that your brain is choosing what information goes in. And I think the second thing is there’s a bit of a misconception about attention, that it’s all down to the person. You know, we talk about things like people having an attention span, as in how long can you pay attention. And that kind of puts too much pressure on the students, that they’re just meant to pay attention to whatever we throw at them. And actually, that’s not how attention works. It would be kind of rubbish if it was. Because imagine, for example, you’re having a picnic in the park, and you’re talking to your friend, and you might feel like your friend is what you should be paying attention to. But at that moment, someone throws a ball and it comes towards your head. If you had this amazing attention span, you’re getting hit in the head with a ball at that moment. Right? Your attention needs to have this reflexive element to react to things. So being distracted isn’t a bad thing; it’s attention working properly. But what this means as teachers is it’s kind of our responsibility to engage the reflexive parts of attention. It’s not just down to them, it’s down to us, so we have to work for it. Does that make sense?

Wendy Garnham: 

That sort of reminds me of active learning. Yeah. So trying to really, like, bring the students into that sort of hands on engagement experiential side of things maybe.

Heather Taylor: 

Yeah. And I think I agree with you. I think it’s a bit transactional. Like, if I know that I’ve got a lecture that, say, I did last year and they didn’t seem that into it—actually, a lot of the time, they’re not into it because I’m not into it. You know, if I’m not enthusiastic, they’re not going to be. So I have to figure out a way that I can get enthusiastic about it. I also have to think, right, that was a bit heavy, that bit, so I’m going to have to chop it up with something else. You know, keep changing what we’re doing. And I’m quite easily bored, but also, if I like something, I find it quite easy to stay with it; if I don’t, I’ll just zone out. So I kind of do it based on whether I’m getting bored, when I’m making it and when I’m writing it and when I’m delivering it. And it tends to work that way. But, yeah, I agree with you, there’s a transaction. And, actually, I think if we just relied on it all being about the students’ ability to pay attention, no-one would really try and do anything good in their teaching. They’d just deliver. You don’t need a teacher then, actually. You could just give them a handout and go, read that, and then you’ve got all the information. You know? So I like the idea that it’s a transaction, that we have to do our part. I mean, I don’t want to do a TikTok and a dance and whatever. I don’t want to go too far.

Sophie Forster:

And it’s not like there’s only one way. I think interactive stuff is a great way of getting people’s attention. Or if you’re really good at telling anecdotes, or if you’re funny, or whatever it is—something that’s a bit easier to attend to, right? And just being aware of those points and peppering them in, and thinking about your lecture through the lens of: if it was a broadcast on TV and one bit of it just cut out, what would be the moments where, if people miss that, they’re not going to understand anything else? And if there are moments like that, really flagging them, even just saying, okay, everyone? And I think the other thing is that not doing that is ableist, because there are huge individual differences in how much people can pay attention. Maybe some faculty are at the end of the spectrum where they find it easy, so they don’t have this awareness, and they sort of think it’s about effort—I’ve heard people say, oh, a serious student won’t need this. It’s not really about effort. It’s just individual differences. So, yeah, I think that’s my hot take: being boring is ableist.

Heather Taylor (10:35): 

Yeah. That’s a yeah. That’s quite a good slogan.

Giulia Poerio: 

Yeah. Can I ask Sophie a question? Do you have thoughts on how to re-engage people’s attention when they’ve veered off? And how much of a responsibility is that for students? One of the things that I’ve noticed, that I can’t believe students do, is try to write down everything I’m saying—they’re paying almost too much attention. And then they miss something, and then they’ve gone off, and then they can’t get back in. Do you have thoughts on how I could improve that in my teaching?

Sophie Forster: 

Well, yeah, it’s difficult, isn’t it? For me, the aim that people will take in 100% is not realistic. So I tend to repeat the stuff that I think is key; I tend to give multiple ways into it. And again, it’s sort of like: what are the bits that are unmissable? What are the bits where, if they don’t get that, they might as well not have been there? And sometimes, if I’m really stuck, I’ll even resort to being like, okay, I’m really sorry, I’ve tried really hard to think of a way to explain this next bit that isn’t boring, and it is just really boring, and you are just going to want to tune out. But let’s all just ramp up: if you’ve got this much attention, use it all now. And I find that helpful.

Giulia Poerio: 

So you’re signposting when to pay attention. Yeah. That’s really helpful, thank you. I’ll take those tips.

Heather Taylor (12:04): 

Yeah. Sometimes you do have to tell them this is going to be dry. And it just is, but you need it.

You know what Eleanor uses? Eleanor is someone else who works here. She uses summary slides throughout her lectures. So if she’s going to talk about three or four different things within a lecture, at the end of each one she’ll summarize what she’s just said. I still haven’t applied that myself, actually, but I think it’s good for when people get lost. They can get really stressed out if they get lost: they’re zoned out, and then they think, oh, why am I here? I need to know this. But they don’t, and they can also watch you back later. Anyway, the summary bits let them go, oh, okay, I missed that bit, or, oh, actually, I did understand the key bits I was meant to understand. So I think that’s another good way to do it. Yeah.

Wendy Garnham:

I think it’s about setting expectations as well. I know in mine I sometimes tend to put too much in, but I tend to put the minimal amount on my slides: the main headings or the main point. And I tell them at the beginning, I’m going to talk to that point. I don’t expect you to get it all down; the lectures are all recorded, so you can always go back and fill in details later. But we’ll be doing interactive things during the lecture that will give you a deeper understanding of what that key point means. If you want the specific details, you’ve always got that recording to go back to, technology permitting. And that seems to work quite well. They’re not taking down lots and lots of text from a slide; they take down a main point, then you do a little activity or give an example or whatever. Because everything’s recorded, they’ve got that backup to go to, and I think that works well. I used to put a lot on the slides, and you just end up with students asking to go back to an earlier slide, and it disrupts the flow.

Heather Taylor: 

They also all seem to have this idea that they need to take notes. They’ll come and see you: I can’t take notes in the lecture and keep up. Don’t, then. No-one said you had to. It’s not necessary for embedding information, in fact.

And you might remember things on a deeper level if you just pay attention at the time. You could scribble some notes down afterwards if you want; some of them might not need to. I appreciate people don’t want to double up on everything, so they don’t want to sit in a two-hour lecture and then do two hours’ worth of note-taking, but they can’t do both at the same time. If they’re telling you they can’t do it, don’t do it then. Why keep trying? [I never did it.] I didn’t. I used to watch the lecture back and write the odd thing down. Also, I tell them they can watch you back at double speed afterwards, because they do it anyway.

Giulia Poerio: 

Oh, yeah. I tell them that. I’m like, slow me down, speed me up. So they can pause me.

Heather Taylor: 

Exactly. So they can just watch it, and the bits they didn’t really get, they can go back to afterwards, you know? So maybe that’s the expectation we need to set: you all think you have to take notes, but no-one said you had to take notes.

Sophie Forster: 

Yeah. I always get students complaining that I don’t have enough text on my slides. But it’s literally impossible to listen to somebody talking and read text that is different words.

Heather Taylor:

That’s not to say they want you to say the same words; that would be the least engaging. They don’t really want that. Yeah.

Wendy Garnham: 

Well, if you did that, they’d then complain that you were just reading the slides.

Sophie Forster (15:35): 

Well, I have that exact problem, which is that I deliberately put loads of text on my slides so that they don’t have to write down everything I’m saying, because all the core information is on the slide, and some of them still write it all down anyway. I’m like, why are you writing down what I’m saying? It’s literally already there. I think you can’t win, because everybody’s so individual in their learning. But what you can try to do is accommodate. So next year, for example, I’m moving all the text from the slides to the notes section.

You know, and that was a suggestion made by one of the students, which I really liked. But I really struggled to know what the optimal thing is. And maybe there is no optimal. 

Giulia Poerio: 

I agree it’s maybe about the students’ expectations, though. Because you wouldn’t watch a film and be writing down: oh, the camera pans to a scene of someone opening a door, and somebody comes through the door. You wouldn’t get much out of that film, would you?

Wendy Garnham: 

That brings us to our next question. And I’m going to ask you, Giulia, for this one first. What is one key insight from your research that you think could improve the student experience in higher education?

Giulia Poerio: 

I guess everybody’s really different. The key challenge for individuals is to learn about themselves, to self-discover, whatever that might be: learn what makes you you, what you like and what you don’t like. And don’t feel you have to do things that society or the university or other people expect from you if that doesn’t fit with who you feel you are.

Sophie Forster: 

I think an insight, maybe from the field of mind wandering, which is also an area I work in, is that there are lots of studies of mind wandering in lectures. And from those studies, it’s fairly established that a lot of your students aren’t listening. If you look at the amount of time people spend mind wandering, especially as time goes on, by the end it can be more than 50% of the time. It’s a lot. We’re not missing a little amount, so you just have to give a lecture in that knowledge: a lot of the signal is getting lost. So, yeah, stuff needs to be repeated, stuff needs to be flagged.

Heather Taylor: 

A common annoyance for me is when someone asks me a question via email that I’ve answered a million times in a million different ways. And then what also annoys me is other students going, why do you keep saying that? But you’re right, we can’t just assume they’ve taken it in. Also, sometimes they’re just really anxious, even coming into a big lecture theatre, or into a seminar room. They’ve got a lot going on. That’s an important thing for me to remember.

I think also, and this is unrelated to what we’re talking about now, but you know when you say you let your students attend if they want to or not? [Which is technically not allowed.] Last term, I got so personally offended by the lack of attendance towards the end of term, and I normally don’t. I think they forget that you’re a person. I’m a person who’s prepared something, standing up in front of a very big room with not many people in it saying it, and it damages my self-esteem. It didn’t used to; it’s just this term, for some reason, it wound me up. And I did keep reminding myself: it’s not about me, it’s about them. The feedback from my module was fantastic, so that cheered me up once I got to that bit, and I love them all again now. But I think it’s really important to remember both of your points: about focusing on it being about them and getting to know them, and about them not necessarily remembering or paying attention to everything we say. I think it’s really important, at least for me, but maybe I’m just overly sensitive, to remember that it’s not about me at all.

Giulia Poerio (19:46): 

Yeah. I would rather students look after their own well-being than come to my lecture, especially when it’s deadline season. Why bother coming to a lecture if you know you’re not going to be able to concentrate, because you’re worrying about something else or your mind is wandering to your other concerns? Just go, I’ll catch up on that lecture later. And now, which I didn’t have when I was at uni, everything’s recorded. So, fantastic: why put yourself through that if you don’t have to?

Heather Taylor: 

Yeah. I suppose there always needs to be a way to differentiate, though, between people who aren’t attending because they’re prioritizing certain needs but are still doing the work, and people who have just dropped off. I know we’ve got students of the first kind, who don’t always attend everything but have always watched it or read it before the next session, and come full of questions. But we get other people, and we might assume they’re doing that, when actually they’ve just disengaged for the whole term. So it’s a really tricky thing to know when you need to intervene with someone. We have attendance policies and so on in place, obviously, but sometimes you’re intervening when there’s no real reason to; they’re managing. In a lot of cases, though, they’re not, and something needs to happen. It’s a tricky thing. I really like it in your module, because it speaks to what you believe in and what you’re lecturing on. I don’t know if it would work across the board. It would make me super popular with students, though.

Sophie Forster: 

But do you not think that university is also about becoming an independent person, and if you decide you don’t want to go to lectures, that’s your choice? It’s not school. We’re not in school anymore.

Heather Taylor: 

No, I do agree with you on that. But I know technically they’re adults at 18, and yet I’m 41 now and much more adult than I was at 18, so there are stages of adulthood. And I do agree with you, but sometimes there are a lot of other things going on. If I, at least in my experience, just let it all slide, I’d be missing lots of people who actually needed support and needed me to reach out and go, why are you never here?

Sophie Forster:

Do you do that? Do you look at people’s attendance rates, see whether they’re engaging on Canvas, and then contact them?

Heather Taylor: 

Yeah. We have a smaller cohort.

Sophie Forster:

So I was going to say: is that something we should be doing as convenors?

Heather Taylor: 

I wouldn’t, with the size of your groups. Oh, go on, Simon.

Simon Overton (22:13): 

I know the answer to this.

Yeah, this is producer Simon. So this comes out of a really unfortunate case, I forget which university it was, Bristol or Manchester, where a student had not been attending and it hadn’t been noted. So as coordinators, during term time, every two weeks we get a little report of people who have been absent, and we send them an email. It’s not a horrible email, but it’s very officious-sounding. So we send that to them, but what I do is leave it a few days and then send them a personal email just saying, how’s your studying going? Are you alright? And I find that gets much better responses. For me, that’s a reason to be concerned about attendance and to make sure people are coming along. And again, because they are such young adults, I think sometimes they do need a bit of extra looking after, because it might be the first time for them.

Sophie Forster: 

I wonder whether attendance versus engagement is a different thing. If they’re engaging online on Canvas, and you can see they’re watching the lecture recordings, then their engagement is high even if their attendance is low.

Giulia Poerio: 

So I have a final-year module that is smaller; it will have 60 students max. For that it’s more feasible, and I guess I do establish more of a rapport with my students. Especially in the early weeks, I will check who hasn’t shown up, and check in on them and email them. Quite often, when I email them, it turns out there is an issue. And sometimes me reaching out is actually what encourages them; it almost helps them a bit, because they’re like, okay, somebody cares and is checking in on me. And I do it in a nice way. I’m not like, where are you? It’s more, I’m just making sure everything’s okay, and do you need any help to come back?

Wendy Garnham: 

That’s one of the underpinnings of the doughnut model we use in Foundation Year: the idea that we have to reach out rather than wait for them to come and tell us. But there’s another issue for me, which is that the ones who attend a lot seem to have the strongest friendship groups. Those who don’t attend tend to drift on the outskirts of the friendship groups, and they’re often the ones who report feeling quite isolated, quite lonely. So sometimes, if you view attendance in a different way, as getting to know people, you can use it as an opportunity to network with people on your module. Obviously there are barriers for some in doing that, but at least it sets the expectation that you’re all in it together, all here to take what you can from a teaching session. That’s one of the other reasons we monitor attendance and do that reach-out: you know, your attendance has dropped quite significantly, and, a bit like you were saying, Simon, is everything okay? Don’t forget there’s loads of support; just let us know. We also use the 1-to-9 system: every third week, students send their academic adviser a number between 1 and 9, and for anything below a 5 the adviser will get back to them and arrange to meet, so we can see what’s happening. The uptake in terms of students responding has been fantastic. They don’t have to explain anything, go into detail, or put anything in an email. They can just send a number, and that can trigger the whole sequence of support.

Heather Taylor (26:10): 

I do think it’s still worth weighing up, though. We say we’re reaching out to them for their well-being, and we want them to attend for their well-being. But research would suggest their well-being is being impacted by us getting them to attend things without sufficient sleep. They’ll probably get more stressed as well if they can’t attend, as in pay attention, because they haven’t had sufficient sleep. It’s really tricky to balance, because we can’t have every student learn between 11 and 1. We struggle to fit them in between 9 and 8.

Giulia Poerio: 

No, starting between 11 and 1. So you start

Heather Taylor:

Oh, starting. But it’s still hard even then. Do you know what I mean?

Giulia Poerio:

 Practically, it’s not feasible.

Heather Taylor: 

Yeah. So we’re reaching out, trying to look after their well-being after things have already happened, whereas this would be trying to stop things from happening, to stop their well-being from deteriorating. So it’s a really tricky balance.

Wendy Garnham:

Yeah. But it probably does. I’m just thinking in terms of when I used to teach in secondary schools. There it’s an extreme: they have to be in school at, like, 20 to 9 in the morning, and the number of sleep-deprived children coming into school is ridiculous. So I think the whole education system needs to

Giulia Poerio: 

And actually society, because most people don’t fit 9 to 5 either. You’re really lucky if you do, because society is biased towards morning, really early morning, types.

Heather Taylor:

I know people think that morning people are really good. 

Giulia Poerio: 

No. But they do think evening people are lazy. People think you’re lazy if you’re an evening person.

Heather Taylor: 

Yeah. But I’ll stop by 4, you know? That would have been a long day, though. I don’t have to go on to 4.

Giulia Poerio: 

But you’re definitely working, just outside the hours that people would expect you to be working.

Heather Taylor: 

Yeah. Well, that’s the thing. If they get an email from me at 6 in the morning, they’re like, wow, Heather’s so on it. But you’re not going to get an email from me at 6 o’clock at night, whereas most of you probably do send emails at 6 o’clock at night.

Simon Overton (27:59):

So in my normal role as coordinator in Global Studies, we’ve been reviewing the courses for the year in Anthropology and in International Development. And in both of those meetings, with mostly different sets of faculty members, it’s come back to the same thing. Obviously we talk about AI, because we always talk about that, but it’s very much gone back to attendance and engagement, and how that drops off. A lot of the lecturers feel that the timing of their lectures is really important, and everybody’s after certain times. A lot of them have tied the time of the lecture to overall attendance. It’s just funny for me how many times it’s come up. One thing that was quite interesting for me was that students who live off campus and are expected to come in, even for 10 o’clock, are paying a higher price for getting on the same bus, which I had not really thought about, and which can be really, really tough for them. The other thing we struggle with a lot, and try to avoid through timetabling, is the split shift: a lecture in the morning, then nothing for 5 hours, and then something else in the evening. We can mitigate that a bit by switching seminar times around. So I suppose my question is: how much do you feel there is a correlation between engagement and the time that sessions take place?

Giulia Poerio: 

Sounds like an empirical question to me. I don’t know, because I teach at the same time every week, so personally I can’t speak to it, other than that I taught 11 till 1, which I think was quite a good slot for attendance. But I would imagine that very early and very late are difficult. I don’t have data on it, so I can’t say.

Heather Taylor: 

You should look at that.

Giulia Poerio: 

It’s an empirical question, isn’t it?

Sophie Forster: 

I’ve had different lecture slots, and it for sure makes a difference. You teach a bit differently depending on the slot. For example, if it’s in the morning, a lot more people are going to be using caffeine, do you know what I mean? And if it’s the end of the day, people are just going to be a bit tired, mostly, or I’m going to be tired too. And if it’s really early, like the 9 o’clock I had, you’ll have people not show up. So yeah, it for sure makes a difference.

Heather Taylor: 

Yeah. Thursday mornings are a nightmare as well, because there’s a big student night on a Wednesday that doesn’t finish till, like, 4 AM. And I’ve had students come to 9 AMs straight from there. Yeah.

Giulia Poerio: 

Well, that’s dedication.

Heather Taylor: 

It’s kind of not nice because of the smell of alcohol at that time in the morning. But, yeah, that is dedication. Yeah.

Sophie Forster: 

That’s definitely what I did when I was a student. So stupid. I don’t know why I went, but I would always show up, and it never occurred to me that I might as well not be there.

I think this is the thing as well: students are so anxious about the attendance policy. If the pin didn’t work, I’ll get a million emails saying, oh, the pin didn’t work. But they’re there, they’re engaging. So

Giulia Poerio: 

You know, something Wendy was saying earlier reminded me of this. I was always really aware of the need to get my students’ attention and to get them to engage, but it’s easier said than done. I tried to have lots of interactive activities, but it took a really long time to get them working; for quite a while they were a bit chaotic, with people roaming around the room aimlessly. A real turning point for me was actually the pandemic in 2020, because we got told last minute that we would have to do it all on Zoom. And I thought, well, if I’m giving a lecture on Zoom, I might as well just record it. So I decided to record all my lectures and then use the whole lecture slot for a fairly free-flow interactive thing, with a bit more room to experiment. And it really helped me establish a rapport with the students; I got to know them a bit more. What I learned that term was that the things that worked the most were exactly as you said: the things that get them to make friends. I had feedback at the end where somebody said, oh, I didn’t expect to make new friends in the final year, but I did; we made a really good new friendship group on this module. And I was really pleased. That’s a reason to attend, right? If it’s fun, if you get to meet people, if you’re given activities that allow that, it’s a way of building a reward into attending, making it rewarding. You know?

Sophie Forster: 

Yeah, but then you’re doing the attendance not for the academic outcome but for the social outcomes, which I think probably go together, though.

They for sure go together. Because reward is a really powerful way of engaging attention. So if you can leverage social rewards to get people to be there, then they’re going to talk about the content. And if they’ve all been mind wandering at slightly different moments, afterwards they’ll be like, oh, what was that bit about? So it’s really helpful to them, I think.

Wendy Garnham: 

I think it also values their individual opinions and perspectives, because we can deliver information to them, but just getting them to think about the theme or topic you’re covering, and to feed back, can be really powerful.

One of the things I tried last term was getting students to do a little activity, I think in the second lecture, where they had to put their ideas and experiences on a little card. We were learning about the self, and I hadn’t really intended it to have the effect it had, but I just said, for now I want you to write a little paragraph which is all about you, and we used that as a starting point for the lecture. But afterwards I thought, actually, I’m just going to go through and email each of the students and respond to what they’ve put. And the feedback I had was just incredible: students saying, I feel heard; I didn’t expect you to email everyone; that was really nice; thank you so much. And it was just, wow. So I’m going to do that now, every time, to start.

I think the little things can be really, really powerful. It doesn’t have to be about creating hugely complicated activities; it can be something really quick. We did that in about 5 minutes. It took me a long time to email them all, but it was worth it, because then I felt I’d got that connection with the students. They knew we were doing things because they mattered, not because I was ticking a box; I really was interested in what they had to say. And I think that’s another way to try and engage their attention: we’re interested in what you’ve got to say, in your perspective and your experience, rather than, I’m just going to tell you what the textbook says, show you my slides, and off you go, see you later. It’s about that connection, social connection.

Sophie Forster (36:18): 

Yeah. And there’s a lot of evidence that just making something self-relevant in some way tags it as attentionally important in a really basic way. It’s not that you decide to pay attention; your brain will just prioritize it. There are really quite contrived experiments where people learn that this circle is you, this triangle is your friend, and this square is a stranger. This is a line of research by somebody I very much admire, and it’s very simple, right? Just by learning those associations, people will attend more to the circle, just because they’ve been told the circle is them. And in terms of the brain regions usually involved when your attention is drawn to, say, brightly coloured things, these stimuli will have the same effect, just from being tagged as self-relevant. So establishing rapport is actually doing something that helps your students’ brains to focus, you know? It’s not just about

Giulia Poerio: 

It’s so hard to think about how to do that in a large-group teaching situation, though. I have tried to do things like that, where I get them to complete measures and then put their scores on Poll Everywhere, and we can talk about the group as a whole. But to do something individual for upwards of 150 students? I would love to be able to do something like that; it’s just not feasible, practically, especially with increasing student numbers.

Wendy Garnham: 

My realization, because I had 155 students, was that I thought it would be very quick. Obviously they weren’t all there for the lectures, so it wasn’t quite that many I had to respond to, but it made me realize what I’d started. And of course, once you’ve started, you feel you’ve got to finish. So it wasn’t a quick process, but I persevered with it, and the reward back was massive, as was the engagement, I suppose because it was right at the beginning. I’m not sure it would have worked as well if I’d done it further down the line, but having that buy-in at the beginning, and showing that we really do care about what they’re bringing to the table, set the baseline in a way. As Heather will attest, I sat in the office typing many an email.

Heather Taylor: 

Yeah. Yeah. I do remember that. Yeah.

Sophie Forster: 

Another way of accessing self-relevance to get attention, though, can be as simple as the examples you use. Obviously interactive stuff is almost the peak, but even if you’re saying something and you give an example, giving one that’s likely to be relevant to the people you’re talking to is probably going to help it be understood. But then it gets tricky, because is it going to be more relevant for some students than others? Are these benefits only going to reach the students in the most common demographic? Am I really capturing everybody? So then you have to give a variety. It’s hard.

Heather Taylor: 

I think, with the relevance thing, they actually quite enjoy it if you make it relevant to you. You were saying about anecdotes: it’s always by accident, but I’ll do an anecdote, and I’d never really thought about it that much, but this year especially I think I went off on one a bit with the anecdotes. And I’ve got lots of really good written student feedback about the anecdotes helping them understand and so on. I think it probably does help them understand, but, again, it’s that rapport-building. The first year I was a lecturer, I felt I had to be like a lecturer, which isn’t me, you know? Try to pronounce my words a bit better, wear a shirt. As it’s gone on, I’ve got less and less interested in what I’m meant to be and have just been more me. And I do feel that every year the rapport I build with the students gets better as a result, and it breaks down the hierarchies and so on. So it doesn’t always have to be relevant to them. They do like a bit of stuff that’s relevant to them; they do like to discuss themselves, as a lot of people do. But if you can connect with them in some way by telling them some silly little story about you, that also is engagement, that is interaction. They’ll laugh, or they’ll go, oh, you, or whatever, and then you’ve had a conversation more than a lecture. And that really works as well. Definitely. Yeah.

Sophie, can you share an example of a change you’ve made in your teaching or practice as a result of your research? What was the impact on your students?

Sophie Forster (41:13):

Any moment where I'm losing my students' attention, I'm not teaching them. That's how I see it, you know? And that's not to say I have to beat myself up, because I'm going to lose their attention repeatedly. I mean, I'm sure my students spend half of my lectures not paying attention to me, right? I'm not saying I'm this amazing attention-getter. So I just try really, really hard. I try to think about how long there is in between things that might be a bit more engaging, you know. And I think the biggest thing was seeing the benefit of interactive stuff that actually allows them to connect with their peers. That was something that I was surprised by; I hadn't quite realized it.

Giulia Poerio:

Probably not related to the sleep research that I've done, but I was initially an emotion researcher. So I think that feeling is more important than thinking in many ways, and influences our thinking. So the things that I try to do are to influence – not manipulate – but try and get emotional reactions from students, so that they don't get bored, I think, is the key thing. So I'll do things like: I will only talk at them for a certain period of time. And then I spend ages finding video clips, YouTube clips, things like that to show them, because that creates an emotive reaction. Like, it might be a sad news story, or it might be something from a court case. Or I'll create materials that are kind of quite fun and illustrate a concept. For example, we looked at the parasomnia defence, which is this idea of: can you kill somebody in your sleep? And they had two case reports, and they had to do things with them. So what I'm always trying to do is make people's emotions a bit more variable throughout the time that I'm with them, to try and keep them going.

Heather Taylor: 

This has been the Learning Matters podcast from the University of Sussex created by Sarah Watson, Wendy Garnham, and Heather Taylor, and produced by Simon Overton. For more episodes as well as articles, blogs, case studies, and infographics, please visit Podcast | Learning Matters


Episode 10: Transformative Learning

The Learning Matters Podcast captures insights into, experiences of, and conversations around education at the University of Sussex. The podcast is hosted by Prof Wendy Garnham and Dr Heather Taylor. It is recorded monthly, and each month is centred around a particular theme. The theme of our tenth episode is ‘transformative learning’ and we hear from Dr Adhip Rawal, Assistant Professor in Psychology in the School of Psychology, and Chris Stocking, Lecturer in English Language in the School of Media, Arts, and Humanities. 

Recording

Listen to the recording of Episode 10 on Spotify.

Transcript

Wendy Garnham:

Welcome to the Learning Matters podcast from the University of Sussex, where we capture insights, experiences, and conversations around education at our institution and beyond. Our theme for this episode is Transformative Learning, and our guests are Dr Adhip Rawal, assistant professor in psychology in the School of Psychology, and Chris Stocking, lecturer in English language in the School of Media, Arts, and Humanities. Our names are Wendy Garnham and Heather Taylor, and we are your presenters today. Welcome, everyone.

All:

Hello.

Heather Taylor:

Can you give us an idea of the kinds of discussions you’ve been having around transformative learning and what that means to you? 

Chris Stocking:

So the discussions that we've been having go back a good few months. Adhip and I sort of became aware of each other's work after we won the Education and Innovation awards for our respective projects. And Adhip's, if I remember, was on stories from the heart, who we are, what we are, and it really resonated with me and some research that I've been doing. So I had a look at Adhip's profile and found a paper that he'd recently written, called Deep Calling, the Will of the Heart. And reading through this, I realized that I had found a very kindred spirit here at the university who was thinking about knowledge and understanding and learning through very different perspectives, using different concepts. So I reached out to him, and we met, and we began discussions. And I think at that stage, we hadn't called it transformative learning at all. This is a name that we've kind of given it at this stage, just as something that we can start to build around.

Adhip Rawal:

And I think there was something, so we didn’t use those words, transformative learning, and we haven’t used those words yet. I think the words that we were using were sort of being and belonging because they spoke to the respective strands of the work we’re engaged in. And I kind of felt when Chris messaged me and I found out about his work that these were sort of two strands that ought to find each other. You know, just like people have to kind of find each other to do something creative, it also seems to me that there are certain themes that have to find each other. So this connection between the themes of being and belonging seemed very important to me. And I think one aspect of that is, and that relates, I think, to what transformative learning means to me, is that we kind of make an invitation to each other to be closer than people have ever perhaps been in academia. So we make this invitation to each other, “let us be closer to each other”. And in this atmosphere of closeness, something will be generated that could potentially be transformative.

Heather Taylor (03:12):

By closeness, do you mean sort of in my mind, I’m thinking of sort of closeness in terms of sharing, maybe without barriers or without sort of a little bit of personal pretence, you know, which there can definitely be in academia. I think students as well can be concerned sometimes about oversharing, you know. So by closeness, do you mean this sort of sharing of experiences or sharing of thoughts? 

Adhip Rawal:

I think what I'm speaking about is sharing at levels of being that don't necessarily come into contact with each other anymore. So Chris spoke about this paper that I wrote, Deep Calling, the Will of the Heart, and I think what we're speaking to really is that we now need to make an invitation to reinstate a very old channel of human communication, which is a heartfelt connection. And that heartfelt connection, that heartfelt way of speaking to one another and orienting to the world, can somehow be regenerative. So I'm saying that a person can begin to recover certain things that they ought to know about themselves and that they ought to be doing in the world.

Heather Taylor:

Like a sort of… a reawakening from having these connections? 

Adhip Rawal:

So that’s exactly it. So I think that’s the beauty of the connection because we have stopped including the heart, I think, in how we work and how we speak and what we do in academia, and now it’s lying asleep. It needs some form of impulse to start awakening again, speaking again.

Chris Stocking:

Yeah. So I mean, it's interesting, you know, when I'm speaking with Adhip, the language that he uses, this idea of being, because a lot of the work that I do is looking at becoming. And so the way that I frame and understand these discussions and this proximity, this closeness that we're engaged in, is through this kind of collaborative becoming. It's leaning into these unexpected avenues of thinking, of discussing, being vulnerable enough to respond, to stumble in discussion, and to find the edges of one another and to start to, say, dream beyond those. So there are a lot of different epistemic framings that we're engaged in: this idea of the heart, of dreaming, of dancing. I think when we start to use different terminology, and this is what we're engaged in, this is what we're practicing, this is what we're allowing ourselves to investigate and stumble and feel our way into... when we start using these different concepts or terms to talk about learning and this experience of listening and speaking and thinking and feeling from the heart, the soul, the spirit, it allows us to start thinking about learning in different terms, different ways. And for me, there's something very generative and creative and earnest in this. It doesn't need to fit back into a certain research paradigm or a certain disciplinary framing. It's indisciplinary in that sense, so we're drawing on a lot of different concepts. We're investigating teaching in very different ways: Adhip through his work within psychology, and my own with interculturality and these intercultural spaces, which are very dynamic, rich, and tender spaces. There's a lot of contestation and negotiation that is required. There's a lot of care that is required. And one thing that I notice a lot when I speak with students that I teach is that they continuously say they felt it was a space in which they were cared for. And that, I think, is very important.
So in this extension, this invitation that Adhip has offered by thinking about the heart, there is within that an invitation for thinking about how we care within the university and ways that we frame discourses around students who are people with sometimes very complex lives and how we acknowledge that and talk differently about the people who are here at the university.

Wendy Garnham (07:48):

It sort of raises two things for me. One is that sense of personalization, which seems to be much more about the individual and directing the learning to that individual. But also, I wondered whether you'd thought about how this relates to the whole idea of risk-taking and learning, because it feels a bit more like that caring, supportive context within which you can take risks, and you can stumble, and you can make errors, but that is how learning is most effective, I guess, in some ways. So I don't know if that is a true reflection of what you were thinking with that.

Adhip Rawal:

Yeah. I think it's a very, very nice reflection, actually, a very nice question. Because what you're speaking to there is something very, very important: that there's a kind of recognition, or at least a possibility, that what the individual has to say is important and could be of profound significance. Maybe they have something to say that has never been said in the world and could never be said without this person speaking it. So that openness to that possibility is very, very important: to be open to who this person in front of me is, and to pay attention to the more subtle realities of this person. It has a huge possible reward. The risk that I take could come with the reward that I begin to speak myself into the world.

Chris Stocking:

There aren't neat ways that we can evaluate or judge this. And I think, you know, similar to this discussion that's taking place right now, there is this sense that we are co-creating meaning together, that we're sensing one another, and that there is this risk of saying things that perhaps don't transmit. It's a discussion that Adhip and I actually had before coming to take part in this podcast: how can we ensure that what we're saying is transmissible? Because we've had these discussions; there's an implicit understanding that we have, and an acceptance towards one another, to speak in these ways. And so how do we invite within our learning spaces these kinds of discussions and this kind of speaking to emerge, and for the people in these spaces to find that sense of caring towards another person, so that they can take these risks, that they can be vulnerable, that they can open up, and that they can share? And then how does this link back into disciplinary frameworks, institutional frameworks, the learning that takes place in the university and the constraints that come with that? Because, you know, what we're talking about here, I don't think it's something that lends itself necessarily to the kind of evaluated knowledge production that takes place at a university, where we're looking at starting a programme, a module, being guided through that, taking on this knowledge and then showing that we understand this knowledge and that we can be creative and critical with it. But if we haven't had a space to authentically start speaking from our own sense of being, and that becoming that happens when you do start speaking with somebody else, it's very difficult to start. Where do you start with them? These are the kinds of discussions that we've been having.
It's one of the reasons why this idea of transformative learning, I suppose, the name that we've given it at this stage, is something we still don't really have a neat definition for, because we're still talking through all of the other ways that learning could be taking place in university, all the other ways that we could be showing care towards the people who are here, and the learning and what it means to them.

It reminds me a little bit of a journal that I know Sarah (Watson) has been involved in, POESIS, which is about the creative act of knowledge. And she's been writing about how, when we're in a lecture, when we're in a seminar, from our perspective as educators, we're using words in a way that we feel is transparent. We feel that the words are stable, and that when we use them, they're transmitted across, and that whoever's listening receives the words and understands them in the same way that we're presenting them. But there are disconnections all through there. There are disconnections that come from learning the words, from performing the act of being a student, of being in a lecture, performing the act of listening to take notes. These are all very different ways of listening and being and becoming with somebody else. And that is what has been taking place with the conversations that Adhip and I have been having, which, I think I'd use this term, is this kind of collaborative becoming: a call is put out, it's received and responded to. And so we're creating knowledge in this way that takes us into strange territories. So I think it's trying to understand all of these processes, what it is that we're talking about, and how that can then extend into the university to challenge it, to give different spaces of learning and being and becoming.

Wendy Garnham (13:45):

So that really gives us an idea of at least some of the challenges. Are there any other challenges in terms of actually getting transformative learning happening?

Adhip Rawal:

So, I mean, if I speak to that from experience, then I think one of the challenges involved is that what you're really trying to do there is present the possibility that what we do in a classroom needs to be fundamentally different to what we do in daily life. Because you may have this kind of assumption that what we draw on from daily life is not enough for us to transform ourselves. So we have to do something different. Something different has to operate in the classroom to allow that to happen. And of course that requires you to be fundamentally free to do those things. There's an inherent obligation, I think, to suspend what might be expected of you and to do things differently, if you really believe in something like this POESIS you were speaking to, which is really based on the idea that there's something absent that my speaking can make present. Now that, I think, is fundamentally what we're talking about. Right? That we need to be really aware of the relational responsibility that we have towards each other. That my speaking can be an act of your becoming. And so the point I want to make is: sometimes, you know, colleagues say to me, or students say to me, "Adhip, why are we doing this again? This is going nowhere." And you have to, I think, have the courage to say, "I understand that, but let's go there again. Let's go again to nowhere." Because I think that's fundamentally what is involved. Right? All the other places we've been to already.

Chris Stocking (15:47)

Yeah. And I think there are other challenges. Adhip was talking there about what we're doing in daily life, the impositions that we're facing through daily life in our modern world. And, without sort of getting too much into politics, there are framings of higher education that exist today that exert an enormous pressure over students: the way that they internalize this sense of going into debt to get an education, with the reward there linking them to a job, something in the future. And how all of this is couched in neoliberal ideology and terminology, and the pressures, and that kind of flattening out of identity into something which is quite one-dimensional in that sense. So when we engage in the kinds of discussions Adhip and I have been having, we recognize the value of these discussions as opening up space for a different kind of being together and collaborative becoming and construction of knowledge and thinking, the way that we experience the world, the relational encounter with one another, and how that spills out into the way that we encounter other people, other members of the university, the students who are here at the university, the family. There is an extension there which is challenging these implications of living in the modern world.

But one of the issues that comes from that is thinking about the value that students perceive from having these kinds of discussions, engaging with this kind of thinking and being together, and whether they can see value in that, knowing that there is debt being incurred to get the education, and that the education is a pathway to get a job to meet the material needs of the future. And I think that pathway has a lot of implications for how we experience learning, how we experience teaching, and how the learners in particular understand what's happening at the university. And so a question that also comes to my mind is: where do these kinds of spaces that we've been discussing exist today, in higher education and education in general? Because I certainly can't remember any point in my education having these kinds of discussions that we're having today. And for me, they're such an important part of the life that I have here at the university. To have found somebody like Adhip and to have these conversations allows me to reengage with the university, to reevaluate what it is that I'm doing, to perceive the people here at the university differently and otherwise, and so to create different relations with them. And so it comes back to that question again: how can we frame this kind of learning in a way that is valuable to students, when there are other constraints and pressures on them to go through university, and the learning, and what learning means?

Wendy Garnham (19:26):

It sounds as though relationships are really at the heart of it, sort of relating to each other. So I think that’s one of the key takeaways, I think. 

Heather Taylor:

Yeah. I think as well, you know, you sort of implied this at different points, but it is a bit of a tricky situation where you're talking about something quite deep and maybe, like, an organic kind of thing. And when you look at, like, marking criteria and learning outcomes, it's all very intellectualized, and it's kind of hard, I guess... It's almost paradoxical to intellectualize what you're talking about. Does that make sense?

Yeah. And I guess, I suppose... I know you've done some work, you know, with the life stories. And in that respect, when you're doing that, you're not calling it transformative learning, right? So how are you approaching it so that you are getting this sort of essence and, I guess, making these people comfortable to share their stories in this way?

Adhip Rawal (20:11):

There are a couple of things involved in that. One is that I need to be able, as an educator, to show the student that what they have to say also matters to me. And if you say something that comes out of your heart, or wherever, I'm prepared for that to change me as well. That's the assurance that I give you, you know. So it's that recognition of what the student has to say. So it's not necessarily, you know, "we're going to explore the meaning of life"; it's "we're going to explore the meaning of your life". And that change, I think, is very, very important. When the student hears the change in that question, they want to speak. And then it's up to you, you know, to stay with them for however long they want to speak. Right? So you make this sort of invitation, and I think it's quite remarkable what students, when they feel safe, begin to say and what they begin to communicate. You know? And it's up to you to allow that to happen. Right? You take an interest in them to the degree that they feel, okay, what I have to say, I'm going to share with you.

Heather Taylor:

I love that adding of "your", because I can see how it brings about safety: essentially, there can't be a wrong answer. Right? Unless you are conflicted and not being open. And even if you're conflicted and not being open, and so give a technically incorrect answer, nobody else knows. Only you know. And actually, I think that's learning something, isn't it? You know, sometimes someone might tell you something about yourself and you go, "that's not what I'm like", and then you have a think about it later and you go, "oh", you know? And it's that level of self-reflection in answering these questions about the meaning of your life. And I do think it's really important as well that you're not just asking them questions because you have to, you know, because I know you do this in your final-year module as well. You're not asking them questions because you have to, or because you have to fill time. You're asking them questions because you are genuinely interested in the answer. And the more willing they are to explore how they feel, rather than just what they think about things, the more interested you become. You know? So, yes, it's lovely. I think a lot of the time it is missing in education. But again, like I said, it's not that easy: they're very at odds with each other, the whole system and this thing, which is very fluid, you know.

Adhip Rawal:

But I think there's something... I mean, of course, there are all sorts of pressures around this. Right? But I think if you observe a student doing this, then you can't help but prioritize it. You were asking about the personal element of it: when you see somebody speak to you in that way, you can't help but say, "I'm going to pay attention to you". I don't need 20 students. I don't need 40 students. All of that will be relativized, because you have one person there in front of you. And that's enough for you, if you begin to see that.

Wendy Garnham (23:27):

It sort of reminds me of something one student said to me recently, which was that they felt heard. And that seems to be at the heart of what you're saying: students have that desire to feel heard, rather than being one in a large group where they just become quite anonymous. So I think the importance of that personalised approach is really coming to the fore. I hope it's coming to the fore.

Chris Stocking:

Yeah. It's something I think Adhip and I also share in common: this respect for reflective practices, for sense-making through one's own life and the experiences that they've had. And it brings to mind some of the work that I've done in the intercultural spaces, where the assignments are critical reflections, where the students are going back through and making sense of themselves in different ways. And there have been some very powerful pieces of work submitted by students who come from backgrounds like that of one student whose parents were from one country; she grew up in another country, and then she was studying in the UK. And so she had this persistent question throughout her life as to, you know, which culture she belonged to. We come back to that sense of belonging. And through the module, and deconstructing this idea of how we understand culture and how we understand identity, she came to see, and she was able to claim ownership of, existing in all of these spaces and none of them, as something that was novel, something that was new, a different sense to speak from and to speak into.

And I think this was very, very powerful: to see so many students taking these modules, starting to speak of their experiences, being given space to talk about their experience, about their life, and how they've understood it, and the struggles that they've had. That sense of being marginalized, that sense of being othered, of trying to fit into certain ascribed criteria that are given by others, and to claim a sense of ownership over who they are, and to feel proud of who they are and of how they understand the world, and to understand that they matter. And this is something I was having a discussion about this morning: this sense of belonging from an institutional, top-down approach to creating conditions of belonging can sometimes be alienating. I found myself alienated by having people tell me how to belong and the conditions for belonging in a certain space. Whereas when we flip this and start talking about mattering, and how a student matters, then they start to come together. They start to collaborate and become together and share with one another, and to create the conditions for what it means to belong in that class themselves, rather than it being an imposition, or imposed upon them.

Wendy Garnham (26:55):

And I suppose that’s the same for staff as well, isn’t it? Staff in an institution could do the same thing. That sort of coming together, that sort of sharing about our experiences and so on. So, yeah, I think it’s quite a powerful concept.

Adhip Rawal:

Yeah. Can I add one more thing about that? Because this thing that we're speaking to, the students' experience and our experience, and why that is valuable... I think one of the reasons why it's valuable is because it's not only characterized by difficulty, it's also characterized by inspiration. And I think the youth are very sensitive to inspiration. And how do you navigate the road to the future? You need the inspiration of the youth. So I think we need that. You can't do it without it. They have much richer lives than I have, you know? And my job is to listen to what they have to say.

Heather Taylor:

Oh, I agree with you. I mean, I've never thought of it in these terms before, but, you know, the better connection you have with students (I think "rapport" sounds superficial, but I do mean rapport), the easier you find it to get them, and they get you, and it sort of develops generally over the term. I find that actually, you know, they do sort of teach you things about yourself, but not necessarily explicitly. Sometimes they just pose questions that challenge you, and it makes you really question things. Like, I know in one lesson, someone asked me a question, saying, "well, isn't this thing normal? Don't you do this?" And I went, "oh, don't ask me as an example of whether people think this thing is normal or not", because it was to do with health. And, actually, after that, I really thought about it, and I was like, I don't care about my health, right? I really recognized that, physically, you know. And it bothered me for ages. Not the student; the student, I appreciated. It was a throwaway question to them, a phrasing; they didn't mean anything by it. But it really made me think about it. And I've actually been training for a run since then, haven't I? So it is really cool. And that's transformative. Mind you, I had crisps for lunch, you know? So I'm still eating crisps at lunch. But that did change me. Not just that, it fit in with a lot of other things at the same time, but it made me think, and it made me reflect. And the other thing is, when you're talking about health, especially with younger people, most of them are much younger than me, they've got health on their side most of the time. Do you know what I mean? More so than I have. And when they bring up these sorts of things, or when they bring up stuff to do with, you know, the environment or the economy or whatever, it's much more important, I guess, to them, because it's going to affect them for longer, and it's affecting them more than it will me. And it does just make you stop and go, oh, what am I doing? You know?

Chris Stocking:

Yeah. I like what you're speaking to there. It speaks to me of this slippage of identity positions. So we go into a classroom and we think about this teacher-student relationship. And I think what you're talking about there is this kind of slippage where suddenly you're a person. And you're speaking with people, and that allows for this transformation that perhaps wouldn't happen if you were to just hold yourself in those teacher and student identities, which are often not embodied in the same way. And then when you suddenly slip, something happens, and it breaks the carapace of those carefully cultivated identities, and suddenly there's that exposure. And you're there, and you're with people. And there's that possibility for learning and speaking and doing otherwise that affords very interesting transformations, the kind of learning you would never have thought might take place in that space. And suddenly, just through a conversation, just through a question, you're changing things about the way that you live, your lifestyle. And I think it's very, very important that we start to recognize these moments in a classroom, and start to give voice to them and shape them, because these are the stories that, at least for me, really interest me, and that remind me that here in the university we are people, working with the youth who, as Adhip said, are the ones that are going to be shaping the future. You know? So how do we support them, and how do we allow them to come back to themselves as people in these learning spaces?

And I think the only way we can do that, and Adhip made this lovely distinction earlier, is not learning about life, but learning about your life. That suddenly makes it very personable. And it becomes more complex when we start thinking about global universities and all of the different experiences and framings and positionalities that come into these learning spaces, which have to be recognized and spoken to. And the only way we do that in a meaningful sense, I believe, is through hearing one another, through listening to one another's stories, through understanding one another. And that requires that we allow these personal narratives and histories to be present in the classroom. Because when you start to speak of them, and you start to speak to them... well, there are three psychologists here: in these kinds of speaking therapies, there's something very transformative about saying, and hearing yourself say, words that perhaps you wouldn't say otherwise. And to hear that repeated back in other ways, to see where it touches and connects with the other person that you're talking to, to allow for that collaborative becoming as research, as a methodology of research: sharing stories in the space and seeing what comes out of it, and reflecting on that, and learning the tools to speak and listen and be heard, and to allow yourself the luxury of sounding like an idiot.

Wendy Garnham (33:23):

I think it is that opening up. Yeah. Isn’t it? Just opening yourself up. And the more you do that, the more the students seem keen to do it.

Heather Taylor:

Being willing to get things wrong and say I don’t know – I’m always saying I don’t know because I don’t know.

Wendy Garnham:

When technology doesn’t work.

Heather Taylor:

Yeah. When technology doesn’t work. I think also, you know, even sharing little things about yourself. I’m always sharing things about myself with my students. It’s not even deliberate. I don’t mean, like, my PIN or whatever – I mean little anecdotes that pop into my head. Because for me, it’s made me able to understand something, and you just have a go and see if it helps them. And I do find actually that, you know, I’m not doing it with the intention of getting them to share. I honestly don’t mean to do it; it just happens. But it does help them share. And I think it also helps them recognize they’re allowed to give an answer or some input that isn’t academic. You’re allowed to give a response that you didn’t get out of a book.

Chris Stocking:

It’s another way of making sense of the learning experience, but through ways of speaking that aren’t themselves inherently academic. And I think that opens up the possibility of learning, and of learning about disciplinary content as well: by speaking about it, by hearing some terminology – not necessarily fully grasping it, but allowing it to trigger moments in your life and discussing those, bringing yourself into the learning environment as a way of navigating and migrating back towards a more full, embodied understanding of the concepts, terms, theories, and research that you’re reading about, thinking about, challenging. What’s very important for me is that embodied sense, and the only way it really becomes embodied is if we’re fully present to it. There are also some of those discussions that we’ve had. I remember the last time we spoke, we were talking about even the constraints of a classroom. I was talking to a friend last night about this. He’s got a younger daughter who’s 10, and they were discussing her favourite subject at school. She said it was physical education, because she can learn while she’s moving. You know? And this idea – Wendy, I think you’ve been involved in outdoor learning, learning while walking, the way it makes you think differently, have different conversations. So opening up all these possibilities for learning otherwise, learning in different ways that aren’t necessarily just sitting down, making notes, listening, and then talking in an abstract and somewhat removed sense.

Heather Taylor:

What advice would you give to anyone wanting to make learning transformative?

Adhip Rawal (36:27):

So I think we’ve touched on something just now which I think is important: the recognition that I don’t know, and the recognition that I also don’t know who I am, really. Because I think there’s something liberating in that. If a young person hears an educator saying “I also don’t know who I am”, that recognition, I think, allows for this deeper connection to become present. Because essentially what we’re saying is that the transformation doesn’t require the input of money. It doesn’t necessarily require the input of knowledge as such. Maybe certain knowledge can help you keep transformation in the vicinity, but essentially it’s about relational support, and also that recognition that we don’t know what the transformation will be. I don’t know. What would you say, Chris, about that?

Chris Stocking:

Yeah. I mean, it’s a very difficult question, because the first thing I noted when we were thinking about this podcast was that we’re not really sure at this stage what transformative learning really is. And I think this discussion has revealed that we’re still trying to understand what it is that we’re talking about, how we make sense of it, and how we bring it into a more formal academic setting. But relationality, I think, is definitely important. And so is the peeling away of layers to understand, and to remind ourselves as educators, that these are people before us. They aren’t just students. They are people who have incredibly rich, vibrant stories and experiences – inspiring, difficult – and all of that is always present in any shared space. We should never make assumptions about how somebody is responding in a classroom, and we should afford the opportunity for different kinds of discussions and conversations to emerge. And if they open up organically and naturally, we should work with that as learning substance and material to feed and fold ourselves into, to support the learning and to support that sense of being present with one another.

Adhip Rawal:

Yeah. So maybe one thing we’re saying is that any kind of transformation requires communication. Right? And it requires us to be present to the communication, to the full spectrum of communication. So, essentially, that’s a function of attention. You know? The depth of my attention might be very important in facilitating this process coming about.

Chris Stocking:

I think Wendy had mentioned some feedback that you’d heard from students, and that idea of being heard, I think, is also very important. Because when you afford that space for somebody to speak, you don’t sit waiting to navigate the conversation back to where you want it to go; you respond to whatever is being said in a meaningful way. I think it’s very important that people feel heard, that they feel visible, that they have a voice that matters, and that it matters to you as an educator, as Adhip said earlier. And that being present, that capacity to listen tenderly, is actively involved in caring for the cultivation of another human being.

Heather Taylor:

I would like to thank our guests, Adhip Rawal and Chris Stocking.

Adhip Rawal:

Thank you.

Chris Stocking:

Thank you.

Heather Taylor:

And thank you for listening. Goodbye. 

This has been the Learning Matters podcast from the University of Sussex, created by Sarah Watson, Wendy Garnham, and Heather Taylor, and produced by Simon Overton. For more episodes, as well as articles, blogs, case studies, and infographics, please visit blogs.sussex.ac.uk/learning-matters.
