Episode 13: Generative AI and Higher Education

Our guests: Dr Paul Robert Gilbert and Dr Niel De Beaudrap, with producer Simon Overton and hosts Dr Heather Taylor and Prof Wendy Garnham.

The Learning Matters Podcast captures insights into, experiences of, and conversations around education at the University of Sussex. The podcast is hosted by Prof Wendy Garnham and Dr Heather Taylor. It is recorded monthly, and each month is centred around a particular theme. The theme of our thirteenth episode is ‘generative AI and higher education’ and we hear from Dr Paul Robert Gilbert (Reader in Development, Justice and Inequality) and Dr Niel De Beaudrap (Assistant Professor in Theoretical Computer Science). 

Recording

Listen to episode 13 on Spotify

Transcript

Wendy Garnham: Welcome to the Learning Matters podcast from the University of Sussex, where we capture insights, experiences, and conversations around education at our institution and beyond. Our theme for this episode is Generative AI and Higher Education. Our guests are Dr Paul Gilbert, Reader in Development, Justice, and Inequality in Anthropology and Dr Niel De Beaudrap, Assistant Professor in Theoretical Computer Science in Informatics. Our names are Wendy Garnham and Heather Taylor and we are your presenters today. Welcome everyone. 

All:  Hello. 

Heather Taylor: Paul, could you start by telling us a little about what you teach and how you see generative AI changing the way students approach your teaching? 

Paul Gilbert: So, I’m based in Anthropology, but I actually primarily teach in International Development. And my teaching is divided into three areas, which sort of reflects my research interests. I teach a second-year module on Critical Approaches to Development Economics. I teach a third-year module called Education, Justice, and Liberation, and I’m the co-convener with Will Locke of the new BA Climate Justice, Sustainability and Development. I also do some teaching from first year on Climate Justice. So, honestly, in terms of how students are approaching the teaching, apart from a couple of cases where I suspect there may have been AI involvement in some assignments, I haven’t noticed a huge difference. 

What is new, and maybe why I haven’t noticed that difference, is that, like a lot of my colleagues in International Development, we started engaging head-on with AI and integrating it into what we teach about and how we talk to our students. For example, my second-year development economics course is deliberately structured around global South perspectives on development economics and thinking that isn’t dominant. Economics is odd among the Social Sciences. There’s a lot of research on this. Sociologists like Marion Fourcade have looked at the relative closure of economics, the hierarchy of citations, its isolation from other disciplines, and the concentration of Euro-American scholarship. Professor Andy McKay in the Business School has also done work on the underrepresentation of African scholars in studies of African economic development, and on the tendency for scholars from the global South to be rejected disproportionately from global North journals. And so the reason that matters for thinking about teaching and AI is because AI, so large language models. Right? ChatGPT and Claude and everything, they’re basically fancy predictive text. Right? Emily Bender calls them synthetic text extruders. Right? You put some goop in and it sprays something out that sometimes looks like sensible language. 

But it does that based on its training corpus, and its training corpus is scraped from somewhere on the Internet. And it also has various likelihood functions within the large language model that make sure the most probable next word in the sentence comes, right, so that it seems sensible. And what that does is reproduce the most probable answer to an economic question, which is the most dominant one, which happens to be not the only one, but one from a very, very narrow school of thought that has come to dominate economics and popular economics and so on. And so all the kind of minoritised perspectives, the ones that don’t make it into these, like, extremely hierarchically structured top-tier journals, the ones that aren’t produced by Euro-American scholars, you’re not going to get AI answering questions about them. And if you use it to do literature searches, it’s not going to tell you about them. 
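
A minimal sketch of the "most probable next word" behaviour Paul describes here, with an invented probability table standing in for a real model (the context, candidate words, and numbers are all made up for illustration):

```python
# Toy illustration of greedy next-word prediction.
# The "model" is just a table of continuation probabilities; the context,
# candidate words, and numbers are invented for this sketch.

next_word_probs = {
    ("development", "economics"): {
        "neoclassical": 0.62,  # dominant school: most frequent in the imagined corpus
        "growth": 0.27,
        "heterodox": 0.07,     # minoritised perspectives appear far less often
        "dependency": 0.04,
    },
}

def most_likely_next(context, table):
    """Return the single most probable continuation for a given context."""
    candidates = table[context]
    return max(candidates, key=candidates.get)

print(most_likely_next(("development", "economics"), next_word_probs))
# Prints 'neoclassical': picking the likeliest continuation every time means
# the rarer perspectives in the training data are never surfaced.
```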

So, that module is built around those perspectives. And so we kind of integrate that into the teaching as a way to highlight to students that large language models function in a way that effectively perpetuates epistemicide. Right? By making sure that it reproduces the most likely set of ideas, and the most likely set is always the dominant set, right, from whoever is most networked and most online, perspectives that have already been disproportionately not listened to get less and less visible. So structuring the course around precisely those not-so-visible but really important and really consequential approaches to economic development forces students not only not to use it to do their literature searches, but to think about the politics of AI knowledge production. 

Heather Taylor (04:57): That’s fantastic. I mean, we have, so we both teach in psychology, and we’ve got a problem in psychology where it’s all very Western-dominated. You know, we use the DSM-5 largely, which is American, as like the manual for diagnosing, you know, psychiatric disorders and so on. I teach clinical mainly, but, you know, it’s generally, there’s a Western dominance. And that would actually be a really interesting way to get them to think about where are these other perspectives and how, like you’re saying, there’s sort of a moral question of using ChatGPT because it’s regurgitating stuff that’s overshadowing, you know, other important voices that aren’t being heard. 

Paul Gilbert: Yeah. And it’s also an opportunity to get students to think more about data infrastructures. Right? Because whether you use ChatGPT or Claude or Llama or whatever or not, I think we all could do with understanding a bit better how search works, how data infrastructures are put together, where things are findable or not. Right? 

So centring that means there’s a chance to reflect on that. And it’s not, you know, I’m not a big fan of AI for various reasons. We might go into it later. But rather than just standing up and going, this is awful, you can say, look, here are all these perspectives that are really important, we’re going to get to grips with them in this course, and this is not what you will get if you ask ChatGPT for an answer. And it’s also part of a way of reminding students that they are smarter than ChatGPT, right? And I think that’s really important because there is some research, I think from some people at Monash University in Australia and elsewhere, looking at the kind of demoralising effects on people, who think, what’s the point of even trying? Right? 

I can do this and it can spew out something as good as what I can do. And it can’t. Right? And it can’t if you structure your questions right and if you get your students to think about its limitations.

I don’t want to talk too much, but just one thing that’s worth saying, and Simon knows about this as well because he did some videos for us, is that one of the ways I try and get this across to the students is that we have this big special collection in the library, the British Library for Development Studies legacy collection, and it’s full of printed material produced by scholars from Africa, Asia, Latin America in the sixties, seventies, eighties. A lot of it doesn’t exist in digital form. Right? It hasn’t been published as an eBook. You might find a scan somewhere. Right? Some of that knowledge doesn’t even turn up in the Sussex Library Search. Right? But it’s there in the stacks or in the folders and, you know, there’s a huge amount of insight and history captured in there that the Internet doesn’t know about. Right? Which means ChatGPT will never know about it. And, you know, that reminds us that there are limitations even to Google searches and Google Scholar and everything, but, you know, the problems of LLMs are that on steroids. Right? So I use that as a chance to show people why offline materials and stuff that isn’t widely disseminated or openly available online aren’t, you know, to be discounted, and there’s a lot of valuable knowledge in there. 

Heather Taylor (08:07): So same question to you then, Niel. Could you start by telling us a bit about what you teach and how, sorry, how you see generative AI changing the way students approach your teaching? 

Niel De Beaudrap: Alright. Well, I teach a couple of modules in Informatics. And informatics, for those of you who don’t know this term, it’s another term for computer science. I end up teaching the introductory mathematics module for computer science, where we try to introduce to them all the basic mathematical concepts (that’s the name of the module, Mathematical Concepts) that they might need throughout their career. And we’re not going to cover absolutely everything when we do that, but we try to cover a lot of ground with a lot of different ideas and how they connect to one another. And I also teach a master’s module in my own research specialty, which is quantum computation. And while these two things might seem a little bit different from one another, quantum computation, setting aside sort of any excitement that might come with that, is something which you can only really come to grips with if you have a handle on a large collection of mathematical concepts. So, similarly to Paul, I don’t really know precisely how it would affect how they engage with either of these modules. I see some signs, possibly, from my modules, from the assessments: maybe, you know, the strange inconsistency sometimes with which the students not only answer questions, but whether they are able to answer a question well or not, even if they’re very similar questions, sometimes has me scratching my head, but I don’t try to infer anything about that. 

But it’s sometimes signs about, you know, basically, when I wonder whether or not they have used a large language model in order to generate answers, it’s a question of basically how they are trying to engage with the subject in general. And, you know, if they’re relying on other tools to try to, basically, learn about the subject matter. It’s very good if students try to use other materials, other sources from which to learn about a subject. But there’s the question of, you know, how well they can sort of judge the materials that they choose to learn from. 

I try to curate the approach in which, as, you know, all teachers do, all lecturers do, I try to curate a particular perspective, a particular approach from which they can learn about a subject. And if they use alternative sources, this is good. You know, a very good student can do that, or somebody who might happen to find my explanation a bit puzzling. It’s much better that they look for other solutions, other approaches to learn than just sort of struggle along with my strange way of looking at things. But, ultimately, it does require that they be able to sort of critically evaluate what it is that they’ve been shown. You know, if they do use AI, if they do use a large language model and its explanations, first of all, they’re going to come across basically the popular presentation, which not only might be biased, but for a complicated subject might simply be wrong. It might lean very heavily on very, very reductive, very simplified presentations that you might see, for instance, in a popular science magazine, which is the bane of quite a few of my colleagues and me in particular. The fact that these things get collapsed, the fact that they get just flattened down into something where you have a plausible-sounding string of words, a plausible thing that somebody can say but that actually has no explanatory power. Not only is it not the full answer, it’s that you can’t use it in order to understand the subject. And whether or not the students can recognize something like that is something that I worry about.

For Mathematical Concepts, the more elementary module that I teach, it’s possible that they might use a large language model to try to generate examples of something or another, but there’s the question of how reliably the large language model is generating things. Like, I actually haven’t kept track of just how good language models are at doing arithmetic, but if they can’t reliably sort of do any mathematics, then how are they going to learn anything from the large language model, let alone anything which has got more complicated structure, from which, you know, they’re trying to learn this slightly nuanced thing, even if it’s at the most elementary levels. The only way that you really get to grips with these techniques is by encountering them yourself. And, you know, so a large language model maybe is going to be a little bit more successful at generating elementary examples of things, because there are certainly mathematics textbooks aplenty, many examples from which you can draw some sort of corpus of how you explain a basic concept, but if they’re being sort of chopped and changed, how do you know that you’re going to get some sort of a consistent answer? This is the thing that I’m concerned with. I don’t know precisely how often it comes up for my students, but I do hear them just sort of casually mentioning that they will look up something by asking ChatGPT. It makes me wonder what the quality of the information they’re getting out of it is. 

Wendy Garnham (13:11): I suppose that’s similar to the coding in R that we see in psychology students, so they will often resort to using ChatGPT, and quite often it gets it wrong. 

Heather Taylor: And then they get flagged. They’re very good at flagging it in the research.  The research methods team are very good at identifying when AI did it, basically. But, yeah, I’m assuming because it just does things a weird way. You know? Yeah. 

Paul Gilbert: LLMs are astonishingly bad at arithmetic. Like, almost amusingly bad. I think they can kind of cope up to, like, two or three figures, and then it starts to fall apart, because, again, it’s predictive text. It can’t do maths. And I think one of the really important things about bringing this into the classroom is, you know, part of what we mentioned earlier about, you know, students might lose confidence or lose motivation because they think, oh, ChatGPT can do it, I’ll just look it up on there. But there’s a lot of things that it’s super dumb at. Right? And I don’t think we should shy away from saying it’s really dumb at a lot of things, and we shouldn’t rely on it, and we should think about why that is. And it’s because something that is good at predicting the likely next word based on the specific set of words that it was trained on can’t do a novel mathematical problem, and there’s no reason why you would think it should. And there’s so much hype that equates large language models with human cognition that people seem willing to accept it can do a whole bunch of things that it can’t. Right? Even down to really trivial sort of slips, like assuming it’s a search engine. Right? But it’s stuck in time. It can only answer things in relation to its training data. It’s not something that can actually search current affairs. Right? And it is kind of surprising how some students and some colleagues aren’t fully aware of that, which I think needs to be, like, the super basic minimal starting point: people need to understand the data infrastructures that they’re messing with and what they can actually do and not do. 
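
A hedged illustration of the arithmetic point: the toy "predictor" below only memorises sums it has seen as strings of text (the training pairs are made up), so it answers seen prompts and fails on anything new, while actually parsing and computing the sum generalises. Real LLMs are far more capable than a lookup table, but the contrast between pattern completion and calculation is the one being described:

```python
# Contrast between memorised text patterns and actual arithmetic.
# A lookup table stands in for a text predictor "trained" on example sums;
# it has no concept of numbers, only strings it has seen before.

training_sums = [(2, 3), (10, 7), (41, 5)]           # invented training examples
pattern_memory = {f"{a}+{b}=": str(a + b) for a, b in training_sums}

def predict(prompt):
    """Return the memorised continuation, or '???' for an unseen prompt."""
    return pattern_memory.get(prompt, "???")

def calculate(prompt):
    """Actually parse the prompt and compute the sum."""
    a, b = prompt.rstrip("=").split("+")
    return str(int(a) + int(b))

for prompt in ["2+3=", "387+256="]:
    print(prompt, "pattern:", predict(prompt), "| arithmetic:", calculate(prompt))
# The pattern matcher only "knows" sums it has seen; the calculator
# generalises because it implements the rule rather than completing text.
```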

Heather Taylor (15:18): You know, it’s really agreeable as well. So it depends on how students phrase questions, or if they phrase it almost as, is this right? Because, I asked it, and this shows my boredom, but I asked it to tell me what the date was. Let’s say, for example, it was the 19th of June, and it told me, and I said, no, it’s not, it’s the 20th of June. And it said, oh, I’m so sorry. You’re right, it’s the 20th of June. I said, no. It’s not. It’s the 19th of June. Why are you lying to me? And I asked it, why is it so agreeable? And it was like, this is not my intention. You know? And it’s like, it’s really, this is a thing. You can tell it, you can feed it nonsense, and then it will go, yes, that nonsense is perfect. Thank you. You know? 

Paul Gilbert: Yeah. But this, sorry, like, to go back to Emily Bender, who I mentioned earlier, the linguist who writes a lot about large language models. Something that she says I think is really important is: they don’t create meaning. If you impute meaning from what they create, that’s on you. Right? But these are synthetic text extruders that produce statistically likely strings of words. 

Heather Taylor: So if I’m saying something, you go, no, that’s probable then.  

Niel De Beaudrap: It doesn’t sound obviously wrong.  

Paul Gilbert: But if you take that meaning as meaning that it can think, that’s on you. Right? It’s just a thing that responds to prompts and spews out words and it looks like it can think but it can’t. 

Niel De Beaudrap: That’s been going on for ages as well, from the earliest chatbots, like back in the 1960s, Eliza, where a lot of people were convinced right from the very beginning that they were talking to somebody who was real, or to something that was actually thinking. This might have been partly because of the mystique of computers in general. You know, the mystique of computers changes, but people still remain impressed by computers in a way which is slightly unfortunate, I think. Like, they’re very useful devices. Speaking as a computer scientist, I mean, I enjoy thinking about what a computer can help me to do, but it’s good to have an idea of what is realistic to expect from them. And people have often sort of been looking for it to be something with which they can talk, and even something that was just responding sort of to a formula, something that in the 1980s everybody could have on their personal computer, this is not something which is very deep. 
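
For a sense of how little machinery that formula needs, here is a loose sketch of an ELIZA-like responder (a few invented rules, not Weizenbaum's original script): pattern matching and templates, with no model of the world behind them.

```python
import re

# A few ELIZA-style rules: match a pattern, echo part of it back in a template.
# These rules are invented for illustration, not Weizenbaum's original script.
rules = [
    (re.compile(r"\bi need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bi am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.*)", re.IGNORECASE), "Is that the real reason?"),
]

def respond(utterance):
    """Return the first matching template, or a stock prompt if nothing matches."""
    for pattern, template in rules:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."

print(respond("I am worried about my exams"))
# Prints "How long have you been worried about my exams?"
# No memory, no understanding, not even pronoun swapping: just string rules
# that nevertheless read, to some people, like a conversation partner.
```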

People are looking constantly for something to be relating back to it as a person. So you always have that sort of risk. But, yeah, if it’s not actually accessing any information, if it’s not using the information in any particular way that could actually produce novelty, and for which you can have confidence that that novelty might mean something because it’s in some sort of correspondence with a model informed by the actual world around it, then you always have that risk of people imputing upon it a depth, a meaning, and a usefulness that is far beyond what it actually has. 

Wendy Garnham (18:10): I think we’ve touched on this a little bit, but Paul, do you think generative AI is affecting how students value subject expertise? And if so, in what way and what impact does it have? 

Paul Gilbert: It’s a really good question and it’s quite a hard one to answer. I think, you know, you can imagine a risk where people think, oh, what’s the point in having a specialist because I can just ask – it can tell me everything. Right? But we know it can’t, and I think a lot of students are pretty switched on to that. And, again, I think this is why it’s important to embed some of that, like, critical data literacy and critical AI literacy into the classroom. Just to pick up on something Niel said about whether or not these large language models can produce novelty, produce new knowledge that is meaningful, I think it’s also worth thinking a bit more deeply about what we mean by subject expertise, which isn’t just having access to loads of references and being able to regurgitate stuff. Right? Leaving aside the fact that large language models often get references wrong and make them up, right? Let’s just pretend they can do that. 

That’s still not what subject expertise is. Right? And a lot of it is about developing certain styles of thinking, certain critical capacities, abilities to see connections, right, in the social sciences. In one way or another, a lot of disciplines talk about the sociological imagination or the ethnographic imagination or the geographical imagination. And it’s about a certain way of thinking and making connections. And having a kind of imaginative capacity that comes along with subject expertise, I think, is really important.

There’s a bunch of work that’s been done by some people in Brisbane, in Australia, where they have queried a whole bunch of different chatbots and different generations of the same chatbot and so on, to ask about ecological restoration and climate change. And aside from the stuff that by now, I think, hopefully, most of us know about these large language models, that, like, 80% of the results it returns are written by Americans and white Europeans, that it ignores results from a lot of countries in the global south that do have a lot of work on ecological restoration, all these kinds of things which we call biases, but which I see as structural inequities in the way these models are trained, I think one of the most interesting things they found, and again, it makes sense based on what we know about these models, is that they never tell you about anything radical. Because again, it’s a backwards-looking probabilistic thing. Right? What’s the best way to deal with this problem? Okay. Well, it’s going to give you something from its training corpus, which is probably based on the policies that have been done in the past. Except, you know, we are facing a world of runaway climate change, and the things that have been done in the past are not the things we need to keep doing. Right? And it just would not answer about things like degrowth or agroforestry or anything that, you know, I wouldn’t even think of agroforestry as particularly radical, but, you know, it’s not mainstream. Right? It’s not been done a lot before. And so they just don’t want to talk about it. Right?

And having the capacity to look at a problem, know something about what happened in the past, think about what the world needs, and be creative, be innovative, and have an imagination, that is something that a lot of our students at Sussex really have, and that absolutely cannot be replaced by a large language model. Right? And I think, as well as emphasising that ourselves, we need to encourage them to become aware of that capacity in themselves. Right? That, you know, you might feel overwhelmed or demotivated, or you, like, you want to ask ChatGPT or Claude or whoever, but, like, both what we are trying to get across as subject expertise and what we want them to leave with massively exceeds anything that this kind of very poor approximation of intelligence can offer. 

Wendy Garnham: Yeah. It sounds as though there’s a big role to play for like active learning, innovation, creativity in terms of how we’re assessing students and how we’re getting them to engage with this subject material I guess. So that’s music to my ears. 

Heather Taylor (22:22): Also, in the same respect as that, we’re not meant to be information machines, you know? And I think if a student came to uni hoping to meet a bunch of information machines, well, they’d be wrong. Hopefully not disappointed, because it’s better than that. But, you know, I think also that, you know, teachers have the ability to say when they don’t know something and to present questions that can help them and the students try and start to figure out an answer to something. And, you know, I really love it actually when I’ll make a point or an argument in a workshop. 

I had this last year with one of my foundation students, and I was very pleased with the argument I put forward. I was showing them how you make a novel but evidence-based argument, you know, so where you take pieces of information, evidence from all over the place, to come up with a new conclusion. And I was very pleased with this. Anyway, this student of mine, she was brilliant. She rebutted my argument, and it was so much better than mine. Right? It really was, and it was great. And as a teacher, that’s what you want to see happen. And I think with things like ChatGPT and any of these AI things, they’re not going to do that. They’re not going to encourage that, and they’re not going to know how. You know? They’re not there for asking questions. They ask you questions to clarify what you’re asking them to do, but that’s it. And I think, yeah, I completely agree with you. Students can get so much more out of their education, you know, by recognising that they’re so much more than information holders. You know? 

Wendy Garnham: So same question to you, Niel. Do you think generative AI is affecting how students value subject expertise? And if so, in what way and what impact does it have? 

Niel De Beaudrap (24:15): I think it does affect how students value subject expertise. This is something that I see when assessing final year projects, or even just seeing what sort of project students propose for their final year project. In computer science, as you can imagine, a lot of students are proposing things that involve creating some sort of model, not a large language model, but, you know, something that’s going to use AI to solve this and that, where they seem to have a slightly unrealistic expectation of how far they’d be able to get using their own AI model, and in particular where they’re contrasting this to the effort that’s required in order to solve things with subject expertise. They seem to think that this is something which is easily going to at least be comparable to, if not match or surpass, the things that can come from people who’ve spent a lot of time thinking about a particular situation, a particular subject matter. They think that a machine, just by crunching through numbers quickly enough, is going to be able to surpass that. And, you know, they of course learn otherwise to a greater or lesser extent, basically to the extent that they notice that they have not actually met their objectives, achieved what they hoped to achieve. It’s more the fact that they have that aspiration in the first place. Now, I mean, part of it, of course, again, in computer science, there’s going to be a degree of neophilia. They’re enthusiastic about computers, and why not. They’re enthusiastic about new things that are coming about computers, and why not. It’ll be just a matter of the learning process itself that maybe some of these things aren’t quite all that they’re hyped up to be. But that’s sort of where they’re starting from, this idea that sheer technology can somehow surpass careful consideration. I find that a little bit worrying. 

Heather Taylor: Do you think there are benefits of generative AI for student learning? If so, how can universities help students use these tools in ways that are ethical and supportive of their development as learners?

Niel De Beaudrap: I’m not actually an education expert, so I can’t really say whether it’s likely that there are ways that generative AI can help, you know, with student learning. I have seen examples where a native Chinese speaker was trying to use this to translate my notes into Chinese. So, okay, there might be some application along those lines, just trying to find ways of translating natural language; whether, you know, a generative AI such as we’ve been thinking about them now can do that well, I’m not in a position to say. Conceivably, as I sort of thought about before, maybe they can be used to come up with examples of toy problems or simple examples, I guess you could say supplementing the course materials, in order to try to come to grips with some particular subject.

 
That’s something that I can imagine one might try to use generative AI for, where it’s possible that things won’t go very badly wrong. Apart from that, I guess I’m sort of viewing things through the framing of the subjects that I teach and my own particular interests, which basically involve the interconnectedness of a lot of technical ideas, and ones that I find fascinating, so I’m going to be extra biased about that sort of thing. About, you know, learning about various ways that you can understand and measure things and structure them in order to get an idea of a bigger whole of how you can solve problems. And you can’t solve problems without having the tools at hand to solve the problems. Okay, some people might say, maybe an AI can be such a tool, but before you can rely on a tool, you have to know how to use it well. You have to know how it’s working. You have to know what the tool is good at. So even if you want to say, shouldn’t this just be one tool among others, I don’t see a lot of evidence, first of all, that it is a reliable tool. The thing that you absolutely have to have in a tool is that you can rely on it to do the job that you would like it to do. Otherwise, I mean, you can use a piece of rope as a cane, but it’s not going to be very helpful to you. Maybe you just need to add enough starch, or maybe you should look for something other than a piece of rope.

And this just builds upon itself. The way that you build expertise, the way that you can become particularly good at something, is by spending a lot of time thinking about the connections between things, by looking, asking yourself questions about, you know, what does this have to do with that? You know, is that thing that people often say really true? Only by sort of really engaging yourself in something like this can you really make progress and really become particularly good at something. And by sort of devolving a certain amount of that process... okay, again, there’s a reason why I’m in computer science. There are some things, like summing large columns of numbers. Is it possible that we have lost something by not asking people to systematically sum large columns of numbers? 

Have we lost something important about the human experience, or maybe just the management experience, by devolving that to computers? Well, there will be some tasks, maybe, where it is useful to have a computer, not speaking of LLMs but computers in general, having them solve problems rather than having us spend every waking moment doing lots of sums or doing something more or less tedious. There will be some point at which the trade off is no longer worth paying. And I believe that trade off, you know, happens well below the point where you are trying to really come to grips with a subject and with learning. So the only thing that one really wants to have a lot of for learning is a lot of different ways of trying to see something, a lot of different examples, a lot of ways of trying to approach a difficult topic. And beyond that, cooperating with others, that’s a different form of engagement. It’s a way of swapping information with somebody, ideally people who are also similarly engaged with the subject. It doesn’t help, of course, for you to ask somebody who is the best in your class and then just take their answer. That’s the same sort of problem that one has with LLMs: relying on something else without engaging with the subject yourself. The most important thing is to sort of try to draw the line at a point where the students are consistently engaging with the learning themselves, with the difficult subjects. And if the resources that they usually have to hand aren’t quite enough, well, that’s not necessarily something that you solve with LLMs. You can solve that problem by providing more resources generally, and that’s a larger structural problem in society, I think, but that’s not quite what LLMs are about. 

Wendy Garnham (31:07): It sort of sounds a little bit like we’re saying that the purpose of education is changing so that it’s more about encouraging or supporting students to be creative problem solvers or innovative problem solvers. Would you say that is where we’re heading? 

Niel De Beaudrap: I don’t know if I can say where we are actually heading. I think, obviously, it’s good to be a good creative problem solver. And there has always been, I guess, the question of whether or not we originally spent more time doing things by rote. You know, you solve an integral and you don’t ask why you’re solving the integral. We’ve had tools to help people with things like, you know, complex calculations, little, slightly annoying, picky, fiddly details, for a long time. So for example, a very, very miniature example of the same thing that we have with LLMs in mathematics is the graphing calculator, where you had something that was a miniature computer. It wouldn’t break the bank, although it’s not as though everybody could afford it. But you could punch into it, ask it to solve a derivative, to solve an integral, to plot a graph. All sorts of things that once upon a time were solely the domain of people, like before the 1950s, and, you know, particularly how to sketch a graph was something that was actually taught. Here are the skills, here are the ways that you can, not precisely draw what the graph is like, but have a good idea what it’s like; it would give you a good qualitative understanding. And even now, even though I do not draw very many graphs myself, the fact that I was taught that means that I have certain intuitions about functions that maybe somebody who hasn’t been taught how to do that wouldn’t have. Now, does that mean that I think that, you know, basically everybody should always be doing everything all the time with pencil and paper? No. There will be some sort of trade offs. 
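
A small sketch of the kind of routine work the graphing-calculator example is about, using the sympy library (the function below is an arbitrary choice): the tool differentiates, integrates, and tabulates points, while the qualitative reading of the shape is still left to the person.

```python
# What a graphing calculator automates, done with the sympy library:
# symbolic differentiation and integration, plus a few sampled points.
import sympy as sp

x = sp.symbols("x")
f = x**3 - 3 * x                     # an arbitrary example function

derivative = sp.diff(f, x)           # 3*x**2 - 3
antiderivative = sp.integrate(f, x)  # x**4/4 - 3*x**2/2
critical_points = sp.solve(derivative, x)  # [-1, 1]

print("f'(x) =", derivative)
print("integral of f dx =", antiderivative)
print("critical points:", critical_points)

# Sampling a few values by hand is the part that builds the intuition
# described above: the up-down-up shape of a cubic with two turning points.
for xv in [-2, -1, 0, 1, 2]:
    print(f"f({xv}) = {f.subs(x, xv)}")
```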

It’s a question, you know, with the creative problem solving. It’s good to have people spend more of their time and energy trying to solve problems creatively, to think about things, engage with them, rather than being constantly bogged down, you know, caught up in doing things by rote. Now, as far as the creative part goes, well, you know, if you put the students in a situation where they are tempted to put the creative part into basically the hands, the digital hands, the metaphorical hands, of a text generator, then it’s not going to teach them how to engage with their subject in a way that they could deal with it creatively. It’s like most things: you have to know what the rules of the game are before you can interpret them well, or before you can know where you can break them. True creation is coming up with something which hasn’t been seen before, where you realize actually we can do things differently, and, of course, in mathematics and computer science, you’re concerned also with whether it’s technically correct. 

The rules that we have in place aren’t in place because they are the only correct way to do things. It is because they are ways of doing things that we are confident will work out. That doesn’t mean that is the only way that things can be done. But you can only see these things if you know how the system was set up in the first place. Only if you know how to work with the existing tools that we have, through mastery of them, can you figure out how you can do things differently in a way that will work well. And if you always basically devolve all the hard stuff, the so-called hard stuff, to a computer, that’s, of course, not going to tell you anything new, and certainly not if it accidentally spits out something at random that is sort of a novel random combination of symbols; it won’t be able to tell you why it should work in any way that you can be confident of. You need to have the engagement with the subject itself in order to even recognize anything that could be an accidental novelty. 

Heather Taylor (34:59): Same question to you then, Paul. Do you think there are benefits of generative AI for student learning? If so, how can universities help students use these tools in ways that are ethical and supportive of their development as learners? 

Paul Gilbert: So my instinctive answer is, honestly, no. I don’t think there are benefits. Right? Maybe there are, but I haven’t been shown them yet. Right? 

And I think the reason I want to say that is because there is so much hype and so much of a framing of inevitability about these discussions that frequently people are saying, well, if we haven’t figured out how they’re going to improve student learning, don’t worry, it’ll come. Right? And we’ll know. If you’re going to change pedagogy and change the curriculum and change how we teach, show us first how it’s better. Right? I think that’s a reasonable way round to do that. One tech journalist, I think it’s Edward Ongweso, or possibly another journalist, talks about the way people tend to treat LLMs as stillborn gods. In other words, these are like perfect, all-powerful things that just aren’t quite there yet, right? So if you just hang on, they’ll improve our learning. Right? And we saw this in the discussions we’ve had at the university: the Russell Group principles on AI make a whole series of claims about how AI can improve student learning without a single example or footnote or reference. Right? And yet we can find pedagogical research showing that it can actually undermine confidence, undermine motivation, all these kinds of things. Right? So, until people can show me that it’s good, my instinct is to say no. 

And, you know, to dig into that a little bit deeper, I know there are claims made about how it can be important for assistive technologies. And it might be that that’s true in certain cases. But there’s equally evidence that it can be disabling. And there are some colleagues in Global Studies, Lara Coleman and Stephanie Orman, working on this. And part of this is also about what it means to learn. 

So something I sometimes go through with my students, when I’m trying to explain what we want from them and why giving example after example after example after example won’t get you a first, is the Bloom’s Taxonomy triangle. I don’t know if you’ve come across that in teacher training and stuff, where it starts at the bottom, and the bottom layer is, like, recall. You show, you evidence this learning by just reproducing something you’ve memorised, and it goes on to understanding. You show, you know, what it means, how it works, and then you kind of apply your knowledge to new situations, draw connections between it, evaluate it, create something new. Right? And so this is a useful tool to show, like, if you keep just building loads of the bottom layer with more examples, you’re not going to get higher than a 2:2. Right? 

Heather Taylor: That’s very good to know, by the way.  

Paul Gilbert: And equally, if you go straight to, like, I’m going to create something new at the top, but you haven’t built a foundation of knowledge, then you’ve got a rubbish pyramid that will fall over. 

Heather Taylor: What’s that called again? 

Paul Gilbert: Bloom’s Taxonomy. It’s a now quite old, like, constructivist pedagogy example. 

Heather Taylor: It’s very useful, though. 

Wendy Garnham: Put it into ChatGPT. 

Heather Taylor: Yeah. I like it –  ‘You’re just at the bottom, mate’. 

Paul Gilbert (38:16): But it’s also kind of useful to show that just learning things and regurgitating them is not what we’re looking for. That’s not the kind of higher processes of learning. Right? And I think I’ve been a bit disappointed to see people embrace the idea that LLMs can be really useful in creating study guides or creating summaries of articles and so on. Because what you’re doing there is outsourcing the process of creating the understanding, creating the knowledge object that moves you up the pyramid. And if, instead of understanding a text, you ask ChatGPT to summarise it for you, you’ve essentially just stuck yourself in the remember-and-regurgitate bit lower down, because you’ve made it shorter, but you’ve not done the work to generate the understanding and the applications and make those connections with other readings you’ve had, the kind of things Niel was talking about. And so I’m really cautious of a lot of the hype and the inevitability framing around the idea that these are going to be great for assistive technologies, that they’re going to improve people’s learning and stuff. If that’s the case, evidence it before spending loads of money and reshaping higher education around it.

And I think, you know, something else I wanted to add on to that is, you know, you were also talking a bit about how there are potentially maybe cases where LLMs could be useful. Right? You could create little problems, or, you know, okay, sure. And there was an article recently in the Teaching and Learning Anthropology journal that sort of made this case. And superficially, when you start reading it, you think, okay, they’re talking about something similar to what we’re talking about here. Right? They’re saying, we want to reject this model of education that’s just regurgitating facts. Right? I think we’re all on that page. And ChatGPT is an opportunity to get students to interrogate ideas, to ask what’s wrong with things, to engage in dialogue. And they even use the language of the Brazilian educator Paulo Freire to criticise the banking model, right, where you just pour ideas into the passive student’s head and they vomit it out. Right? None of us want to be doing that. But what kind of got me a bit frustrated with that paper is the other half of what Paulo Freire was talking about. Banking education is bad; what you want is critical pedagogy, where people come to an understanding of their place in the world and the structures that create oppression and unfreedom, so they can use that knowledge to make a better world, to achieve liberation. Right? And it genuinely boggles my mind that people will invoke Freire in a discussion of ChatGPT and not mention the political economy of AI. 

Right? It’s deeply frustrating that there’s this sort of sense that, oh, well, it’s incidental to this whole discussion about education that we’re talking about some of the greatest concentration of wealth and power in history. Right? That is also premised on an absolutely insane expansion of energy and water usage. And it’s hardwired into the model. Because of this understanding that these models are, you know, stillborn gods and they’re going to be perfect when we get them right, that’s the justification for Zuckerberg, Sam Altman, all of them. Their model is all about scale, more and more, bigger and bigger, more data centres, more NVIDIA processing units. And that means more energy usage, and it means more water to cool those. 

And so I think last year there was a study that showed worldwide AI data centre usage emitted the same amount of carbon as Brazil, right, which is a big agro-industry emitter. There have also been studies from UC Riverside suggesting that in two years’ time, worldwide data centre freshwater usage will be about half of UK water usage. Right? And that’s concentrated in certain areas, typically in low-income areas. And all of those studies were done before, over the last few weeks, Zuckerberg announced a $23 billion investment to expand data centres and OpenAI set out plans to spend $500 billion building data centres, mostly running off fossil fuels. Right? Mostly located in low-income, often water-stressed environments. Right? So if that is the pathway to finally getting it right so that this model works, right, and, oh, yeah, we’ll iron out those hallucinations and it won’t give you fake references anymore, genuinely, what is wrong with you that you think that is a good pathway to an educational future? Right? Like, I don’t understand it. And Sussex has these sort of ambitions to be one of the world’s most sustainable universities and everything, and you can’t bracket that and pretend it’s not applicable to engagement with technologies that have this political economy, that have this political ecology. 

And that political economy follows exactly from the claims about what they can do. Right? All of the claims about their magical power are based on scale. Right? These things are powerful because we’ve trained them on more data than you can possibly imagine. And we have more data centres powering the models that are responding to more queries than ever, you know, you couldn’t even imagine the scale of it. Right? Great. That pathway to refining these models is essentially ecocidal. Right? This is before you even get into the labour stuff and the fact that, you know, I really dislike the language of the Cloud, because it implies a virtualism. Oh, The Cloud. Right? The Cloud is made of copper and plastic and lithium and silicon and stuff that is ripped out of the Earth. 

Niel De Beaudrap (44:02): So it invites you to think about it in an extremely vague way. 

Paul Gilbert: Yeah. Right. Whereas, actually, what it is is data centres, largely in low-income, water-stressed communities. Right? Last week there was an FT leader about Zuckerberg seeking $23 billion from private equity to expand his data centres. And there was a story on the tech journalism website 404 Media on the same day about a community in Louisiana, one of the poorest towns in the state, that is going to have their utility bills go through the roof because a bunch of data centres are being built by OpenAI, which require the construction of new gas plants, and all those costs have to be paid for by someone, and it’s probably not going to be OpenAI. Right? Yeah. That is the sort of backstage on which these magical LLMs are unfolding. Right? And I think if you’re willing to have a discussion about the educational benefit of these tools without situating them in that political economy, that political ecology, you know, certainly as someone who works in a kind of development studies department, that’s like a massive, you know, intellectual and moral failing. 

Heather Taylor: So, essentially, I mean, like you were saying, it’s all hypotheticals about whether eventually they can get AI to be magic. And, even before you go thinking about the environmental implications, there are lots of implications if you could make AI that perfect: what’s going to happen to people’s jobs, and, you know, all that side of things as well. But, essentially, and this is a question, even if AI were to be magic eventually, yeah, and do everything that they want it to do, or that we theoretically want it to do, and I have no idea what I want it to do, the cost would be the world burning. 

Paul Gilbert: Yeah, some people don’t want to have serious discussions about that. That’s fine. But then just, you know, you’re not a serious person, I guess. 

Heather Taylor: I didn’t... I knew that you had environmental... honestly, this shows my ignorance, really. But I knew that there were environmental consequences to AI. I did not know it was this deep, or where the consequences were being worst felt, which is horrible. 

Paul Gilbert: Yeah. And, you know, it’s utterly bizarre that a lot of data centres are being built in some of the most arid parts of the US, which are already water stressed. So aside from the massive ecological consequences of drawing down further freshwater, diverting it, I don’t know. What has happened historically in the US when you’ve had water diversion for industry, especially agro-industry, is massive fire risk. Right? We’ve all seen what happens to California in the summer now. And now loads of data centres are being built across the arid parts of California, and they will not work. They will catch fire and shut down, and the models will stop working if they’re not cooled with vast quantities of fresh water. Right? So the guys building them have every intention of cooling them down with vast quantities of fresh water. 

You don’t spend $500 billion on a data centre package if you are comfortable with it melting straight away. Right? So there is a genuinely huge ecological threat, and the livelihood threat associated with that, which almost always lands on the most marginalised communities, because, you know, when people dump massively polluting and water-stress-creating industries, they don’t usually do it in the most affluent neighbourhoods, right, because those people are well organised and well networked and everything. And, yeah, this is a serious part of it. And I think that with the rush to scale, everyone just seems to have accepted that the only AI future is the one that we’re allowing Zuckerberg and Altman and people to lay out for us, which is three or four giant companies competing to buy up all of the world’s NVIDIA chips and create more and more data centres. Right? Maybe there are things that AI can do differently that don’t require this more and more, bigger and bigger scale operation. But if that’s the path we’re going down, right... 

Heather Taylor: And there’s already an energy use problem, though, isn’t there? If there’s already an energy use problem, you know, then we’re kind of using energy for something we don’t need, because we didn’t have it a little while ago. So it’s not something that we need. You know? Yeah. So even before we think about making it bigger, the fact that it’s even in existence now is quite a concern. 

Paul Gilbert: And this is also why I find this sort of inevitability frame that people use to talk about this so troubling. Right? Whenever someone uses the framing of inevitability to talk about a new technology, they’ve got an interest in it. Right? Because it hurries things up, and it presents people who have questions as, you know, just getting in the way, because this is going to happen, right? So get on board. And this is explicitly what we’re hearing from our local MP, our science and technology secretary. 

Heather Taylor: That’s what I thought, and I’ve not got any stakes in it.  

Paul Gilbert: Things are made to be inevitable because powerful actors tell you they’re inevitable. There’s nothing inherently inevitable about this.  Most people don’t even know what it can do when it works, and yet we’re accepting it’s inevitable. Right? I think that’s 

Wendy Garnham: Is there another side to the inevitability, though, which is that it’ll eventually fold in on itself because eventually you’ll be feeding the machines with information that it itself has generated, I mean, is that a possibility? 

Paul Gilbert: Possibly. I mean, the Internet is already so full of slop. Right? Just AI-generated garbage. And there are examples of that. I think, I can’t remember who it was, but earlier in the discussion someone was talking about translation. Right? And some of the folks at the Distributed AI Research Institute, Timnit Gebru and colleagues, looked at these claims that Meta and others have made about massive natural language processing models that could translate 200 languages. Right? 

And they looked at a subset of African languages, which the research team spoke, and they found that it was really bad. Right? But it was also bad because it had been trained on websites that were translated by Google Translate. Right? And so when you gave it vernacular, or vernacular Twi, it was just absolute rubbish that came out. Right? So that’s already happening. And then you think, well, what was that? Like, what is it for? It’s kind of like what you were talking about, neophilia. Right? People want new things. People want to move fast and break things, but why? What benefit is this going to bring us, and is it worth it? 

Niel De Beaudrap (50:40): Yeah. Yeah. It’s the slogan of a particular company to move fast and break things, and they had reasons for wanting to do that. It’s because it made them money. 

Paul Gilbert: Yeah. Yeah. And speaking about those companies, you know, you’re saying this is a recent thing. A few years ago, the Silicon Valley giants were presenting themselves as green. Right? Go back a couple of decades: Google’s slogan was Don’t Be Evil, which I think just, you know, became funny after a while. But, like, you know, a few years ago, Microsoft was promising they weren’t only going to be, like, carbon neutral. They were going to become carbon negative. Right? Really leaning into renewable energy, carbon capture and storage, which is a whole other story about how it may not actually work. But, you know, there was this sort of green vibe they were going for. After the boom in LLMs from the ChatGPT launch, they’ve all just chucked their carbon-neutral policies, right, straight out the window, and it’s back to coal, back to gas, because we need those data centres. Right? Is this a good time to be doing that? When this is of unclear utility to us, and potentially poses a threat to jobs, and can’t do half of the things... 

Heather Taylor: By the way, it’s boiling in here. 

Paul Gilbert: Yeah. Yeah. So, you know, can we help students use these tools in ways that are ethical? Well, we’ve got to ask, are these good for our students? Right?  Can we actually evidence that, not assume it because someone has told us it’s inevitable? And once we’ve figured that out, is it worth it? Right? There’s a whole bunch of things we can all do that make our lives easier. Are they all worth it? 

Wendy Garnham (52:15): So that brings us to our last question. So, Niel, I’m going to direct this to you first. Obviously, educators will have varying views on AI in higher education. But for now, it is something that we all must contend with. So with that in mind, what advice would you give to colleagues in terms of AI in higher education?

Niel De Beaudrap: Apart from sort of acknowledging that it’s there, and maybe sort of addressing students, and encouraging colleagues to address the students, to sort of talk about AI and the things that it purports to offer and how it might fall short, the main thing that I would do is basically to not use it yourself, to not feed into it. Like, the more that we try to use these tools in order to cut corners and things like this in our own work, not only is it providing a bad model for the students. Like, I think that students can tell when you put some effort into anything, from designing a script for your lectures to your slide deck or a diagram or something like this. If they see it modelled for them that it is quite normal to just ask a computer to spit something out according to some spec, they’re just going to think of it as a thing that one should do. It’s a thing that I systematically don’t do. I’ve never used any of these tools, because I’m not interested in feeding the machine that I think is going to undermine the things that I care about. I would encourage colleagues to ask themselves if they actually want to be using a machine that’s going to have this sort of effect. 

Wendy Garnham: Right. Same question to you, Paul. What advice would you give to colleagues in terms of AI in higher education?

Paul Gilbert: I think my answer is quite similar to Niel’s. I fully agree. You know, don’t use it and then expect your students not to. Yeah. Come on. I have used it twice, to show my second years how terrible the answer to one of the assignments would be if they asked ChatGPT, and why they are better than it. Right? And I think what we can do, as well as kind of getting our students to think about the data infrastructures behind this, its limitations, what it can’t tell them, how not smart it is, how destructive it can be, we can also just, I think, put more work into highlighting for our students how much better they are than these models. Right? There’s all this discussion of, like, oh, you know, we’re going to replace, like, lawyers and radiologists and stuff. And, you know, I’m not sure how true that all is. You know, if you replace all the junior lawyers so no-one has to read through court documents, who in five years is going to be a senior lawyer? Right? You gotta have the foundations. Right? 

So, again, there’s a lot of inevitability framing and hype, which I think we need to cut through. But, also, like, we have a lot to offer in higher education to our students, and they have a lot to offer us, that cannot be replicated, reproduced, or displaced by ChatGPT, and we should find space in the classroom to emphasise that. Right? Even if that is explicitly saying, look, like, this is garbage. It’s powerful and it’s big and it’s fast and it looks shiny, but you guys are smarter than it. Right? And I’m not just saying that. Genuinely, like, my students produce better stuff than AI could, and I think that’s true for a lot of us, right? And we need to give them that, like, trust, and meet them on that terrain rather than assuming they’re all, you know, just itching to fake their essays. Yeah. And then, you know, some people will, and that’s always been the case, and, you know, it will continue ever thus, right? There’s always been plagiarism and personation. We can’t stop it, right? But we can highlight the things that AI can never do and encourage our students to value that in themselves. 

Niel De Beaudrap: There’s something that I could add, by the way. So there’s something that I’ve been thinking about, and it feels a bit weird to ‘yes and’ what Paul was saying about the ecological and the sociological impact, which, as far as I’m concerned, should be the conversation stopper in terms of the ethical use of AI, but it’s about the notion that these are stillborn gods and that we can just work to improve them. Well, I mean, this is a part of the technophilia, sort of the technophilic impulse in the computer industry generally. We can sort of look at the other things that happened at scale. For example, Moore’s Law, where computing power became larger and larger, computers became faster and faster. This didn’t always make our software better. In fact, it made it worse in a lot of respects, because people stopped valuing writing code well. So even as things scale up, this isn’t going to be a guarantee of an improvement in quality. In fact, if the past is any indication, things will get worse. So even after burning the world, we may not have anything particularly nice to show for it. 

Wendy Garnham: Simon, as a learning technologist, do you want to add anything before we close our podcast on the role of AI in higher education? 

Simon Overton (57:34): I think the only thing that I would say, that I feel is perhaps a little bit hopeful, and it’s not just limited to education, is that I believe that the use of AI and the proliferation of slop (great band name, by the way) is going to lead us to value things that are real a lot more. When I was at university, I really loved the essay The Work of Art in the Age of Mechanical Reproduction, I think it’s by Walter Benjamin, which was about how people were worried that if we could produce posters of, you know, the Van Gogh Sunflowers, we wouldn’t value the original anymore. But that’s not what happened. It actually became more and more valuable. So I think and I hope and I believe that it’s all quite new and quite scary for us now, but I think that it will encourage us to value the things that Paul said just now, that we can have and that can come out of the time and the relationships that we establish in higher education. And I think that ultimately that’s probably a good thing, even though it looks kind of scary. 

Heather Taylor: I would like to thank our guests, Niel and Paul. 

Niel De Beaudrap: Thank you. 

Paul Gilbert: Thank you. 

Heather Taylor: And thank you for listening. Goodbye. This has been the Learning Matters podcast from the University of Sussex created by Sarah Watson, Wendy Garnham, and Heather Taylor, and produced by Simon Overton. For more episodes, as well as articles, blogs, case studies, and infographics, please visit blogs.sussex.ac.uk/learning-matters

