{"id":1616,"date":"2026-01-30T12:11:59","date_gmt":"2026-01-30T12:11:59","guid":{"rendered":"https:\/\/blogs.sussex.ac.uk\/learning-matters\/?p=1616"},"modified":"2026-03-20T15:04:26","modified_gmt":"2026-03-20T15:04:26","slug":"episode-13-generative-ai-and-higher-education","status":"publish","type":"post","link":"https:\/\/blogs.sussex.ac.uk\/learning-matters\/2026\/01\/30\/episode-13-generative-ai-and-higher-education\/","title":{"rendered":"Episode 13: Generative AI and Higher Education"},"content":{"rendered":"\n<figure class=\"wp-block-image size-large is-style-rounded\"><img loading=\"lazy\" width=\"1024\" height=\"769\" src=\"https:\/\/blogs.sussex.ac.uk\/learning-matters\/files\/2026\/01\/PXL_20250630_140603217.RAW-01.COVER_-1024x769.jpg\" alt=\"\" class=\"wp-image-1617\" srcset=\"https:\/\/blogs.sussex.ac.uk\/learning-matters\/files\/2026\/01\/PXL_20250630_140603217.RAW-01.COVER_-1024x769.jpg 1024w, https:\/\/blogs.sussex.ac.uk\/learning-matters\/files\/2026\/01\/PXL_20250630_140603217.RAW-01.COVER_-300x225.jpg 300w, https:\/\/blogs.sussex.ac.uk\/learning-matters\/files\/2026\/01\/PXL_20250630_140603217.RAW-01.COVER_-768x577.jpg 768w, https:\/\/blogs.sussex.ac.uk\/learning-matters\/files\/2026\/01\/PXL_20250630_140603217.RAW-01.COVER_-1536x1153.jpg 1536w, https:\/\/blogs.sussex.ac.uk\/learning-matters\/files\/2026\/01\/PXL_20250630_140603217.RAW-01.COVER_-2048x1538.jpg 2048w, https:\/\/blogs.sussex.ac.uk\/learning-matters\/files\/2026\/01\/PXL_20250630_140603217.RAW-01.COVER_-100x75.jpg 100w, https:\/\/blogs.sussex.ac.uk\/learning-matters\/files\/2026\/01\/PXL_20250630_140603217.RAW-01.COVER_-150x113.jpg 150w, https:\/\/blogs.sussex.ac.uk\/learning-matters\/files\/2026\/01\/PXL_20250630_140603217.RAW-01.COVER_-200x150.jpg 200w, https:\/\/blogs.sussex.ac.uk\/learning-matters\/files\/2026\/01\/PXL_20250630_140603217.RAW-01.COVER_-450x338.jpg 450w, 
https:\/\/blogs.sussex.ac.uk\/learning-matters\/files\/2026\/01\/PXL_20250630_140603217.RAW-01.COVER_-600x451.jpg 600w, https:\/\/blogs.sussex.ac.uk\/learning-matters\/files\/2026\/01\/PXL_20250630_140603217.RAW-01.COVER_-900x676.jpg 900w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><figcaption>Our guests: Dr Paul Robert Gilbert and Dr Jacqueline De Beaudrap, with producer Simon Overton and hosts Dr Heather Taylor and Prof Wendy Garnham.<\/figcaption><\/figure>\n\n\n\n<p>The Learning Matters Podcast captures insights into, experiences of, and conversations around education at the University of Sussex. The podcast is hosted by&nbsp;<a href=\"https:\/\/profiles.sussex.ac.uk\/p10660-wendy-garnham\">Prof Wendy Garnham<\/a>&nbsp;and&nbsp;<a href=\"https:\/\/profiles.sussex.ac.uk\/p289961-heather-taylor\">Dr Heather Taylor<\/a>. It is recorded monthly, and each month is centred around a particular theme. The theme of our thirteenth episode is \u2018generative AI and higher education\u2019 and we hear from&nbsp;Dr <a href=\"https:\/\/profiles.sussex.ac.uk\/p275733-paul-gilbert\" data-type=\"URL\" data-id=\"https:\/\/profiles.sussex.ac.uk\/p275733-paul-gilbert\">Paul Robert Gilbert<\/a> (Reader in Development, Justice and Inequality) and Dr <a href=\"https:\/\/profiles.sussex.ac.uk\/p519543-jacqueline-de-beaudrap\" data-type=\"URL\" data-id=\"https:\/\/profiles.sussex.ac.uk\/p519543-jacqueline-de-beaudrap\">Jacqueline De Beaudrap <\/a>(Assistant Professor in Theoretical Computer Science).&nbsp;<\/p>\n\n\n\n<h2>Recording<\/h2>\n\n\n\n<p><a href=\"https:\/\/open.spotify.com\/episode\/2MYyhksZEP7moNICIRoCBb?si=BMLU-mVNR26jidWTKThGQQ\" data-type=\"URL\" data-id=\"https:\/\/open.spotify.com\/episode\/2MYyhksZEP7moNICIRoCBb?si=BMLU-mVNR26jidWTKThGQQ\">Listen to episode 13 on Spotify<\/a><\/p>\n\n\n\n<h2>Transcript<\/h2>\n\n\n\n<p><strong>Wendy Garnham:<\/strong>\u00a0Welcome to the Learning Matters podcast from the University of Sussex, where we capture insights, 
experiences, and conversations around education at our institution and beyond. Our theme for this episode is\u00a0Generative AI and\u00a0Higher\u00a0Education. Our guests\u00a0are\u00a0Dr Paul Gilbert, Reader in Development, Justice, and Inequality in Anthropology and\u00a0Dr Jacqueline De\u00a0Beaudrap, Assistant Professor in\u00a0Theoretical Computer Science\u00a0in Informatics.\u00a0Our\u00a0names\u00a0are\u00a0Wendy Garnham\u00a0and Heather\u00a0Taylor\u00a0and we are your presenters today. Welcome everyone.\u00a0<\/p>\n\n\n\n<p><strong>All:<\/strong>&nbsp;&nbsp;Hello.&nbsp;<\/p>\n\n\n\n<p><strong>Heather Taylor:<\/strong>&nbsp;Paul, could you start by telling us a little about what you teach and how you see generative AI changing the way students approach your teaching?&nbsp;<\/p>\n\n\n\n<p><strong>Paul&nbsp;Gilbert:<\/strong> So,&nbsp;I&#8217;m&nbsp;based in&nbsp;Anthropology, but I&nbsp;actually primarily&nbsp;teach in&nbsp;International&nbsp;Development. And my teaching is divided into three areas, which&nbsp;sort of reflects&nbsp;my research interests. I teach a&nbsp;second-year&nbsp;module on&nbsp;Critical&nbsp;Approaches to&nbsp;Development&nbsp;Economics. I teach a&nbsp;third-year&nbsp;module called&nbsp;Education,&nbsp;Justice, and&nbsp;Liberation, and&nbsp;I&#8217;m&nbsp;the&nbsp;co-convener&nbsp;with Will Locke of the new BA Climate Justice, Sustainability and Development. 
I also do some teaching from first year on&nbsp;Climate&nbsp;Justice.&nbsp;So, honestly, in terms of how students are approaching the teaching, apart from a couple of cases where I suspect there may have been AI involvement in some assignments, I&nbsp;haven&#8217;t&nbsp;noticed&nbsp;a huge difference.&nbsp;<\/p>\n\n\n\n<p>What is new, and maybe why I&nbsp;haven&#8217;t&nbsp;noticed that difference, is that, like a lot of my colleagues in&nbsp;International&nbsp;Development, we started engaging&nbsp;head-on&nbsp;with AI and integrating it into what we teach about and how we talk to our students. For example, my&nbsp;second-year&nbsp;development economics course is deliberately structured around global South perspectives on development economics and thinking that&nbsp;isn&#8217;t&nbsp;dominant.&nbsp;Economics is odd among the&nbsp;Social&nbsp;Sciences.&nbsp;There&#8217;s&nbsp;a lot of research on this. Sociologists like Marion Fourcade have looked at the relative closure of economics, the hierarchy of citations, its isolation from other disciplines, and the concentration of&nbsp;Euro-American&nbsp;scholarship. Professor Andy McKay in the Business School has also done work on the underrepresentation of African scholars in studies of African economic development, and on the tendency for scholars from the global South to be rejected disproportionately from global North journals.&nbsp;&nbsp;And&nbsp;so&nbsp;the reason that matters for thinking about teaching and AI is because AI,&nbsp;so large language models. Right? ChatGPT and Claude and everything,&nbsp;they&#8217;re&nbsp;basically fancy&nbsp;predictive text. Right? Emily Bender calls them synthetic text extruders. Right? You put some goop&nbsp;in&nbsp;and it sprays something out that sometimes looks like sensible language.&nbsp;<\/p>\n\n\n\n<p>But it does that based on its training corpus, and its training corpus is scraped from somewhere on the Internet. 
And it also has various likelihood functions within the large language model that make sure the most probable next word in the sentence comes, right, so that it seems sensible. And what that does is reproduce the most probable answer to an economic question, which is the most dominant&nbsp;one, which happens to be not the only&nbsp;one, but&nbsp;one&nbsp;from a very, very narrow school of thought that has come to dominate economics and popular economics and so on. And&nbsp;so&nbsp;all the kind of, minoritised perspectives, the ones that&nbsp;don&#8217;t&nbsp;make it into these, like, extremely hierarchically structured top-tier journals, the ones that&nbsp;aren&#8217;t&nbsp;produced by Euro-American scholars,&nbsp;you&#8217;re&nbsp;not going to get AI answering&nbsp;questions about them. And if you use it to do literature searches,&nbsp;it&#8217;s&nbsp;not going to tell you about them.&nbsp;<\/p>\n\n\n\n<p>So, that module is built&nbsp;around those perspectives. And&nbsp;so&nbsp;we&nbsp;kind of integrate&nbsp;that into the teaching&nbsp;as a way to&nbsp;highlight to students that large language models function in a way that effectively perpetuates&nbsp;epistemicide. Right? By making sure that it reproduces the&nbsp;most likely&nbsp;set&nbsp;of ideas and the&nbsp;most likely set&nbsp;is always the dominant set, right, from whoever is most networked and most online, then perspectives that have already been disproportionately not listened to get&nbsp;less and less&nbsp;visible. 
So structuring the course around precisely those not-so-visible but&nbsp;really important&nbsp;and&nbsp;really consequential&nbsp;approaches to economic development forces students not only to not use it to do their literature searches, but to think about the politics of AI knowledge production.&nbsp;<\/p>\n\n\n\n<p><strong>Heather Taylor (04:57):<\/strong>&nbsp;That&#8217;s fantastic.&nbsp;I mean, we have,&nbsp;so we both teach in psychology, and&nbsp;we&#8217;ve&nbsp;got a problem in psychology where&nbsp;it&#8217;s&nbsp;all very Western dominated. You know, we use the DSM-5 largely, which&nbsp;is American, as like the manual for diagnosing, you know, psychiatric disorders and so on. I teach&nbsp;clinical mainly, but, you know,&nbsp;it&#8217;s&nbsp;generally,&nbsp;there&#8217;s&nbsp;a Western dominance.&nbsp;&nbsp;And&nbsp;that would actually be a really interesting way to get them to think about where these other perspectives are and how, like you&#8217;re saying, there&#8217;s sort of a moral question of using ChatGPT because it&#8217;s regurgitating stuff that&#8217;s overshadowing, you know, other important voices that aren&#8217;t being heard.&nbsp;<\/p>\n\n\n\n<p><strong>Paul&nbsp;Gilbert:<\/strong>&nbsp;Yeah. And&nbsp;it&#8217;s&nbsp;also an opportunity to get students to think more about data infrastructures. Right? Because whether you use ChatGPT or Claude or Llama or whatever or not, I think we all could do with understanding a bit better how search works, how data infrastructures are put together, where things are findable or not. Right?&nbsp;<\/p>\n\n\n\n<p>So&nbsp;centring&nbsp;that means&nbsp;there&#8217;s&nbsp;a chance to reflect on that. And&nbsp;it&#8217;s&nbsp;not, you know,&nbsp;I&#8217;m&nbsp;not&nbsp;a big fan&nbsp;of AI for&nbsp;various reasons. We might go into it later. 
But rather than just standing up and going, this is awful, you can say, look, here are all these perspectives that are&nbsp;really important.&nbsp;We&#8217;re&nbsp;going to get to grips with them in this course, this is not what you will get if you ask ChatGPT for an answer. And&nbsp;it&#8217;s&nbsp;also part of a way of reminding students that they are smarter than ChatGPT, right? And I think&nbsp;that&#8217;s&nbsp;really important&nbsp;because there is some research, I think some people at Monash University in Australia and elsewhere looking at the kind of demoralising effects on people, who think,&nbsp;what&#8217;s&nbsp;the point of even trying? Right?&nbsp;<\/p>\n\n\n\n<p>I can do this and it can spew out something as good as what I can do. And it&nbsp;can&#8217;t. Right? And it&nbsp;can&#8217;t&nbsp;if you structure your questions right and if you get your students to think about its limitations. I don&#8217;t want to talk too much, but, just one thing that&#8217;s worth saying, and, Simon knows about this as well because he did some videos for us, but&nbsp;one&nbsp;of the ways that I try and get this across to the students is that we have this big, special collection in the library, the British Library for Development Studies legacy collection, and it&#8217;s full of printed material produced by scholars from Africa, Asia, Latin America in the sixties, seventies, eighties. A lot of it&nbsp;doesn&#8217;t&nbsp;exist in digital form. Right? It&nbsp;hasn&#8217;t&nbsp;been published as an eBook. You might find a scan somewhere.&nbsp;&nbsp;Right? Some of that knowledge&nbsp;doesn&#8217;t&nbsp;even turn up in the Sussex Library Search. Right? But&nbsp;it&#8217;s&nbsp;there in the stacks or in the folders and, you know,&nbsp;there&#8217;s&nbsp;a huge amount of insight and history captured in there that the Internet&nbsp;doesn&#8217;t&nbsp;know about. Right?&nbsp;Which means ChatGPT will never know about. 
And, you know, yet that reminds us that there are limitations even to Google searches and Google Scholar and everything, but, you know, the problems of LLMs are that on steroids. Right?&nbsp;So&nbsp;using that as a chance to show people why offline materials and stuff that&nbsp;isn&#8217;t&nbsp;widely&nbsp;disseminated&nbsp;or available openly online&nbsp;isn&#8217;t, you know, to be discounted, and&nbsp;there&#8217;s&nbsp;a lot of valuable knowledge in there.&nbsp;<\/p>\n\n\n\n<p><strong>Heather\u00a0Taylor\u00a0(08:07):<\/strong>\u00a0So same question to you then,<strong>\u00a0<\/strong>Jacqueline. Could you start by telling us a bit about what you teach and how, sorry, how you see generative AI changing the way students approach your teaching?\u00a0 <\/p>\n\n\n\n<p><strong><strong>Jacqueline<\/strong>\u00a0De\u00a0Beaudrap:<\/strong>\u00a0Alright. Well, I teach a couple of modules in informatics. And informatics, for those of you who\u00a0don&#8217;t\u00a0know this term,\u00a0it&#8217;s\u00a0another term for computer science.\u00a0But,\u00a0I end up teaching the introductory mathematics module for computer science where we try to introduce to them all the basic mathematical concepts (that&#8217;s\u00a0the name of the module,\u00a0Mathematical\u00a0Concepts) that they might need throughout their career.\u00a0\u00a0And\u00a0we&#8217;re\u00a0not going to cover absolutely everything when we do that, but we try to cover a lot of ground with a lot of different ideas and how they connect to one another. And I also teach a master&#8217;s module in my own research specialty, which is quantum computation. And while these two things might seem a little bit different from one another, the fact is that quantum computation, setting aside sort of any excitement that might come with that, is something which you can only really come to grips with if you have a handle on a large collection of mathematical concepts. 
So, similarly to Paul, I\u00a0don&#8217;t\u00a0really know precisely how it would affect how they\u00a0engage with\u00a0either of these\u00a0modules.\u00a0I see some signs, possibly, in the assessments for my modules: the strange inconsistency with which the students answer questions, whether they are able to answer a question well or not even when they&#8217;re very similar questions, sometimes has me scratching my head, but I don&#8217;t try to infer anything from that.\u00a0<\/p>\n\n\n\n<p>But&nbsp;it&#8217;s&nbsp;sometimes signs about, you know, basically, when&nbsp;I wonder&nbsp;whether or not&nbsp;they have used a large language model&nbsp;in order to&nbsp;generate answers,&nbsp;it&#8217;s&nbsp;a question of&nbsp;basically how&nbsp;they are trying to engage with the subject in general. And, you know, if&nbsp;they&#8217;re&nbsp;relying on other tools to try to,&nbsp;basically learn&nbsp;about the subject matter.&nbsp;It&#8217;s&nbsp;very good&nbsp;if students try to use other materials, other sources from which to learn about a subject. But&nbsp;there&#8217;s&nbsp;the question of, you know, how&nbsp;well they can&nbsp;sort of judge&nbsp;the materials that they choose to learn from.&nbsp;<\/p>\n\n\n\n<p>I try to curate the approach in which, as I, you know, all teachers do, all lecturers do, try to curate a particular perspective, a particular approach from which they can learn about a subject. And if they use alternative sources, this is good. 
You know,&nbsp;a very good&nbsp;student can do that or somebody who might happen to find my explanation a bit puzzling.&nbsp;It&#8217;s&nbsp;much better that they look for other solutions, other approaches to learn than just sort of struggle along with my strange way of looking at things.&nbsp;But,&nbsp;ultimately, it does require that they be able to&nbsp;sort of critically&nbsp;evaluate what it is that&nbsp;they&#8217;ve&nbsp;been shown.&nbsp;You know, if they do use AI, if they do use a large language model and its explanations,&nbsp;first of all,&nbsp;they&#8217;re&nbsp;going to come across&nbsp;basically the&nbsp;popular presentation, which not only might be biased, but for a complicated subject might simply be wrong. It might lean very heavily on very, very reductive, very simplified presentations that you might see, for instance, in a popular science magazine, which is the bane of quite a few of my colleagues and&nbsp;me in particular. The fact that these things get collapsed, the fact that they get just flattened down into something where you have something that sounds like a plausible string of words, a plausible thing that somebody can say but&nbsp;actually has&nbsp;no explanatory power. 
Not only is it sort of not the full answer, it&#8217;s that you can&#8217;t use it in order to understand it, but the students, whether or not they can recognize something like that, is something that I worry about, for mathematical concepts of the more elementary stuff, the more elementary module that I teach.&nbsp;So&nbsp;it&#8217;s&nbsp;possible that they might use a large language model to try to generate examples of something or another, but&nbsp;there&#8217;s&nbsp;the question of how reliably the large language model is generating things.&nbsp;Like, I actually haven&#8217;t kept track of just how good language models are at doing arithmetic, but if they can&#8217;t reliably sort of do any mathematics, then how are they going to learn anything from the large language model, let alone anything which has got more complicated structure, from which, you know, they&#8217;re trying to learn this slightly nuanced thing, even if it&#8217;s at the most elementary levels. The only way that you really get to grips with these techniques is by encountering&nbsp;them&nbsp;yourself, and, you know, so a large language model maybe, is going to be a little bit more successful at generating elementary examples of things because there are certainly mathematics textbooks aplenty, many examples from which you can draw some sort of corpus of how you explain a basic concept, but if they&#8217;re being sort of chopped and changed, how do you know that you&#8217;re going to get some sort of a consistent answer? This is the thing that&nbsp;I&#8217;m&nbsp;concerned with. I&nbsp;don&#8217;t&nbsp;know precisely how often it comes up for my students, but I do hear them just&nbsp;sort of casually&nbsp;mentioning that they will look up something by asking ChatGPT. 
It makes me wonder what the quality of information that&nbsp;they&#8217;re&nbsp;getting out of it is.&nbsp;<\/p>\n\n\n\n<p><strong>Wendy&nbsp;Garnham&nbsp;(13:11):<\/strong>&nbsp;I suppose&nbsp;that&#8217;s&nbsp;similar to&nbsp;the&nbsp;Coding&nbsp;in R&nbsp;that we&nbsp;see in psychology students, so they will often resort to using ChatGPT, and quite often it gets it wrong.&nbsp;&nbsp;&nbsp;<\/p>\n\n\n\n<p><strong>Heather Taylor:<\/strong>&nbsp;And&nbsp;then they get flagged.&nbsp;They&#8217;re&nbsp;very good&nbsp;at flagging it in the research.&nbsp;&nbsp;The research methods team are&nbsp;very good&nbsp;at&nbsp;identifying&nbsp;when AI did&nbsp;it, basically. But,&nbsp;yeah,&nbsp;I&#8217;m&nbsp;assuming because it just does things a weird way. You know?&nbsp;Yeah.&nbsp;<\/p>\n\n\n\n<p><strong>Paul&nbsp;Gilbert:<\/strong>&nbsp;LLMs are astonishingly bad at arithmetic. Like,&nbsp;almost amusingly&nbsp;bad. I think they can kind of cope up to, like, two, three figures and then they start to fail,&nbsp;because&nbsp;again,&nbsp;it&#8217;s&nbsp;predictive text. It&nbsp;can&#8217;t&nbsp;do maths. And I think one of the, like, really important things about bringing this into the classroom is, you know, part of what we mentioned earlier about, you know, students might lose confidence or lose motivation because they think, oh,&nbsp;ChatGPT&nbsp;can do it,&nbsp;I&#8217;ll just look it up on there. But&nbsp;there&#8217;s&nbsp;a lot of things that&nbsp;it&#8217;s&nbsp;super dumb at. Right? And I&nbsp;don&#8217;t&nbsp;think we should shy away from saying&nbsp;it&#8217;s&nbsp;really&nbsp;dumb&nbsp;at a lot of things, and we&nbsp;shouldn&#8217;t&nbsp;rely on it, and think about why that is. 
And&nbsp;it&#8217;s&nbsp;because something that is good at predicting the&nbsp;likely next&nbsp;word based on the specific set of words that it was trained on&nbsp;can&#8217;t&nbsp;do a novel mathematical problem, and&nbsp;there&#8217;s&nbsp;no&nbsp;reason why&nbsp;you would think it should.&nbsp;And&nbsp;there&#8217;s&nbsp;so much hype that equates large language models with human cognition that people seem willing to accept it can do a whole bunch of things that it&nbsp;can&#8217;t. Right? Even down to&nbsp;really trivial, sort of slips, like assuming&nbsp;it&#8217;s&nbsp;a search engine. Right?&nbsp;But&nbsp;it&#8217;s&nbsp;stuck in time. It can only answer things in relation to its training data.&nbsp;It&#8217;s&nbsp;not something that can&nbsp;actually search&nbsp;current affairs. Right? And it is kind&nbsp;of&nbsp;surprising how some students and some colleagues aren&#8217;t fully aware of that. I think, like, the super basic minimal starting point is that people need to understand the data infrastructures that they&#8217;re messing with and what they can actually do and not do.&nbsp;<\/p>\n\n\n\n<p><strong>Heather&nbsp;Taylor&nbsp;(15:18):<\/strong>&nbsp;You know,&nbsp;it&#8217;s&nbsp;really agreeable&nbsp;as well.&nbsp;So&nbsp;it depends on how students phrase questions or if they phrase&nbsp;it almost as, is this right? Or, you know, because, and this shows my boredom, I asked it to tell me what the date was.&nbsp;Let&#8217;s&nbsp;say, for example, it was June 19, and it told me, and I said, no,&nbsp;it&#8217;s&nbsp;not,&nbsp;it&#8217;s&nbsp;June 20. And it said, oh,&nbsp;I&#8217;m&nbsp;so sorry.&nbsp;You&#8217;re&nbsp;right,&nbsp;it&#8217;s&nbsp;June 20. I said, no.&nbsp;It&#8217;s&nbsp;not.&nbsp;It&#8217;s&nbsp;June 19. Why are you lying to me?&nbsp;&nbsp;And I asked it, why is it so agreeable? And it was&nbsp;like,&nbsp;this is not my intention. 
You&nbsp;know? And&nbsp;it&#8217;s&nbsp;like,&nbsp;it&#8217;s&nbsp;really,&nbsp;this is a thing. You can tell it.&nbsp;You can feed it nonsense, and then it will go, yes, that nonsense is perfect. Thank you. You know?&nbsp;<\/p>\n\n\n\n<p><strong>Paul&nbsp;Gilbert:<\/strong>&nbsp;Yeah. But, sorry, like, to go back to Emily Bender, whom I mentioned earlier, the linguist who writes a lot about large language models. And something that she says I think is&nbsp;really important&nbsp;is they&nbsp;don&#8217;t&nbsp;create meaning. If you impute meaning from what they create,&nbsp;that&#8217;s&nbsp;on&nbsp;you. Right? But these are synthetic text extruders that produce statistically likely strings of words.&nbsp;<\/p>\n\n\n\n<p><strong>Heather Taylor:<\/strong>&nbsp;So if&nbsp;I&#8217;m&nbsp;saying something, you go, no,&nbsp;that&#8217;s&nbsp;probable then.&nbsp;&nbsp;<\/p>\n\n\n\n<p><strong><strong>Jacqueline<\/strong> De\u00a0Beaudrap:<\/strong>\u00a0It\u00a0doesn&#8217;t\u00a0sound obviously wrong.\u00a0\u00a0<\/p>\n\n\n\n<p><strong>Paul&nbsp;Gilbert:<\/strong>&nbsp;But if you take that meaning as meaning that it can think,&nbsp;that&#8217;s&nbsp;on you. Right?&nbsp;It&#8217;s&nbsp;just a thing that responds to prompts and spews out words and it looks like it can think but it&nbsp;can&#8217;t.&nbsp;<\/p>\n\n\n\n<p><strong><strong>Jacqueline<\/strong> De\u00a0Beaudrap:<\/strong>\u00a0That&#8217;s been going on for ages as well, from the earliest chatbots, like Eliza back in the 1960s, where a lot of people were convinced right from the very beginning that they were talking to somebody who was real or to something that was actually thinking.\u00a0This might have been partly because of the mystique of computers in general. You know, the mystique of computers changes, but people\u00a0still remain\u00a0impressed by computers in a way which is slightly unfortunate, I think. Like,\u00a0they&#8217;re\u00a0very useful\u00a0devices. 
Speaking as a computer scientist, I mean, I enjoy thinking about what a computer can help me to do, but\u00a0it&#8217;s\u00a0good to have an idea of what is realistic to expect from them. And people have often sort of been looking for it to be something with which they can talk, and even something that was just responding according to a sort of formula, something that in the 1980s everybody could have on their personal computer, is not something which is very deep.\u00a0<\/p>\n\n\n\n<p>People are constantly looking for something to relate to as a person. So you always have that sort of risk, but yeah, if it&#8217;s not actually accessing any information, if it&#8217;s not using the information in any particular way that could actually produce novelty, and for which you can have confidence that that novelty might mean something because it&#8217;s in some sort of correspondence with a model informed by the actual world around it, you always have that risk of people imputing upon it a depth, a meaning, and a usefulness that is far beyond what it&nbsp;actually has.&nbsp;<\/p>\n\n\n\n<p><strong>Wendy Garnham&nbsp;(18:10):&nbsp;<\/strong>I think&nbsp;we&#8217;ve&nbsp;touched on this a little bit,&nbsp;but Paul, do you think generative AI is affecting how students value subject&nbsp;expertise? And if so, in what way and what impact does it have?&nbsp;<\/p>\n\n\n\n<p><strong>Paul\u00a0Gilbert:<\/strong>\u00a0It&#8217;s a\u00a0really good\u00a0question and\u00a0it&#8217;s\u00a0quite a hard one to answer. I think, you know, you can imagine a risk where people think, oh,\u00a0what&#8217;s\u00a0the point in having a specialist because I can just ask\u00a0&#8211; it can tell me everything. Right? 
But we know it\u00a0can&#8217;t, and\u00a0I think a lot of students\u00a0are pretty switched on to that.\u00a0And, again,\u00a0I think this\u00a0is why\u00a0it&#8217;s\u00a0important to embed some of that, like, critical data literacy and critical AI literacy into the classroom. Just to pick up on something\u00a0Jacqueline\u00a0said about whether or not these large language models can produce novelty, produce new knowledge that is meaningful, I think it&#8217;s also worth thinking a bit more deeply about what we mean by\u00a0subject expertise, which isn&#8217;t just having access to loads of references and being able to regurgitate stuff. Right? Leaving aside the fact that large language models often get references wrong and make them up, right?\u00a0Let&#8217;s\u00a0just pretend they can do that.\u00a0<\/p>\n\n\n\n<p>That&#8217;s still not what subject&nbsp;expertise&nbsp;is. Right? And a lot of it is about developing certain styles of thinking, certain critical capacities, abilities to see connections, right, in the social sciences. In one way or another, a lot of disciplines talk about the sociological imagination or the ethnographic imagination or the geographical imagination. And&nbsp;it&#8217;s&nbsp;about a certain way of thinking and making connections.&nbsp;And having a kind of imaginative capacity that comes along with subject&nbsp;expertise, I think, is&nbsp;really important.&nbsp;There&#8217;s&nbsp;a bunch of work&nbsp;that&#8217;s&nbsp;been done by some people in Brisbane, in Australia, where they have queried a whole bunch of different chatbots and different generations of the same chatbot and so on, to ask about ecological restoration and climate change. 
And aside from the stuff that by now, I think, hopefully, most of us know about these large language models, that, like, 80% of the results it returns are written by Americans and white Europeans, that it ignores results from a lot of countries in the global South that do have a lot of work on ecological restoration, all these kinds of things, which we call biases but I see as sort of structural inequities in the way these models are trained. I think one of the most interesting things they found, and again, it makes sense based on what we know about these models, is that they never tell you about anything radical. Because again,&nbsp;it&#8217;s&nbsp;a backwards-looking probabilistic thing. Right?&nbsp;What&#8217;s&nbsp;the best way to deal with this problem? Okay. Well,&nbsp;it&#8217;s&nbsp;going to give you something from its training corpus, which is&nbsp;probably based&nbsp;on the policies that have been done in the past. Except, you know, we are facing a world of runaway climate change and things that have been done in the past are not the things we need to keep doing. Right? And it just would not answer about things like degrowth or agroforestry or anything that, you know, I&nbsp;wouldn&#8217;t&nbsp;even think of agroforestry as particularly radical, but, you know,&nbsp;it&#8217;s&nbsp;not mainstream.&nbsp;Right?&nbsp;It&#8217;s&nbsp;not been done a lot before. And&nbsp;so&nbsp;they just&nbsp;don&#8217;t&nbsp;want to talk about it. Right? And having the capacity to look at a problem, know something about what happened in the past, think about what the world needs, and be creative, be innovative, and have an imagination is a capacity that a lot of our students at Sussex really have, and that absolutely cannot be replaced by a large language model. Right?&nbsp;And I think, as well as emphasizing that ourselves, we need to encourage them to become aware of that capacity in themselves. Right? 
That, you know, you might feel overwhelmed or demotivated or, like, you want to ask ChatGPT or Claude or whoever, but, like, both what we are trying to get across as subject expertise and what we want them to leave with massively exceeds anything that this kind of very poor approximation of intelligence can offer.&nbsp;<\/p>\n\n\n\n<p><strong>Wendy&nbsp;Garnham:<\/strong>&nbsp;Yeah. It sounds as though&nbsp;there&#8217;s&nbsp;a big role&nbsp;to play for like active learning, innovation, creativity in terms of how&nbsp;we&#8217;re&nbsp;assessing students and how&nbsp;we&#8217;re&nbsp;getting them to engage with this subject&nbsp;material, I guess. So that&#8217;s music to my ears.&nbsp;<\/p>\n\n\n\n<p><strong>Heather Taylor&nbsp;(22:22)<\/strong>: Also&nbsp;in&nbsp;the same respect as that,&nbsp;we&#8217;re&nbsp;not meant to be information machines, you know? And I think if a student came to&nbsp;uni&nbsp;hoping to meet a bunch of information machines, well,&nbsp;they&#8217;d&nbsp;be wrong. Hopefully, not disappointed because&nbsp;it&#8217;s&nbsp;better than that. But, you know, I&nbsp;think also&nbsp;that, you know, teachers&nbsp;have the ability to&nbsp;say when they&nbsp;don&#8217;t&nbsp;know something and to present questions that can help them and the students try and start to figure out an&nbsp;answer to something. And, you know, I really love it&nbsp;actually when&nbsp;I&#8217;ll&nbsp;make a point or an argument in a workshop.&nbsp;<\/p>\n\n\n\n<p>I had this last year with one of my, well, last time, one of my foundation students, and I was&nbsp;very pleased&nbsp;with the argument I put forward. I was showing them about how you make a novel but&nbsp;evidence-based&nbsp;argument, you know, so where you take pieces of information, evidence from all over the place to&nbsp;come up with&nbsp;a new conclusion. And I was&nbsp;very pleased&nbsp;with this. Anyway, the student of mine, she was brilliant. 
She rebutted my argument, and it was so much better than mine.&nbsp;Right? It really was, and it was great. And that&#8217;s sort of, as a teacher, what you want to see happen. And I think with things like ChatGPT and any of these AI things,&nbsp;they&#8217;re&nbsp;not going to do that.&nbsp;They&#8217;re&nbsp;not going to encourage that, and&nbsp;they&#8217;re&nbsp;not going to know how.&nbsp;You know?&nbsp;They&#8217;re&nbsp;not there for asking questions. They ask&nbsp;you&nbsp;questions to clarify what&nbsp;you&#8217;re&nbsp;asking them to do, but&nbsp;that&#8217;s&nbsp;it. And I think,&nbsp;yeah, I completely agree with you.&nbsp;Students can get so much more out of their education, you know, by recognising that&nbsp;they&#8217;re&nbsp;so much more than information holders.&nbsp;&nbsp;You know?&nbsp;<\/p>\n\n\n\n<p><strong>Wendy Garnham:<\/strong>\u00a0So same question to you, Jacqueline. Do you think generative AI is affecting how students value subject\u00a0expertise? And if so, in what way and what impact does it have?\u00a0<\/p>\n\n\n\n<p><strong><strong>Jacqueline<\/strong> De\u00a0Beaudrap\u00a0(24:15):<\/strong>\u00a0I think it does affect how students value subject\u00a0expertise. 
This is something that I see when assessing final year\u00a0projects or even just seeing what sort of project students propose for their final year project, where in computer science, as you can imagine, a lot of students are proposing things that involve creating some sort of model, not a large language model, but, you know, something that&#8217;s going to use AI to solve this and that, where they seem to have a slightly unrealistic expectation of how far they&#8217;d be able to get using their own AI model, and in particular, where they&#8217;re contrasting this to the effort that&#8217;s required in order to solve things with subject expertise, where they seem to think that this is something which is easily going to at least be comparable to, if not match or surpass the things that can come from people who&#8217;ve spent a lot of time thinking about a particular situation, a particular subject matter. They think that a machine\u00a0that&#8217;s\u00a0just by crunching through numbers quickly enough is going to be able to surpass that. And, you know, they, of course, learn otherwise to a greater or lesser extent,\u00a0basically to\u00a0a greater or lesser extent that they notice that they have not actually met their\u00a0objectives, that what they hoped to be able to achieve.\u00a0The fact\u00a0&#8211;\u00a0it&#8217;s\u00a0more the fact that they have that aspiration in the first place. 
Now I mean part of it of course again in computer science,\u00a0there&#8217;s\u00a0going to be a degree of\u00a0neophilia.\u00a0They&#8217;re\u00a0enthusiastic about computers and why not.\u00a0They&#8217;re\u00a0enthusiastic about new things that are coming about computers and why not.\u00a0It&#8217;ll\u00a0be just a matter of the learning process itself that\u00a0maybe some\u00a0of these things\u00a0aren&#8217;t\u00a0quite all that\u00a0they&#8217;re\u00a0hyped\u00a0up to be.\u00a0But that&#8217;s\u00a0sort of where\u00a0they&#8217;re\u00a0starting up from, this idea that just sheer technology can somehow surpass careful consideration. I find that a little bit worrying.\u00a0<\/p>\n\n\n\n<p><strong>Heather&nbsp;Taylor:<\/strong>&nbsp;<br>Do you think there are benefits of generative AI for student learning? If so, how can universities help students use these tools in ways that are ethical and supportive of their development as learners?&nbsp;<\/p>\n\n\n\n<p><strong><strong>Jacqueline<\/strong> De\u00a0Beaudrap:<\/strong>\u00a0<br>I&#8217;m not actually an education expert, so I\u00a0can&#8217;t\u00a0really say whether\u00a0it&#8217;s\u00a0likely that there are\u00a0ways that generative AI can help,\u00a0come up with\u00a0ways, you know, to help student learning. I have seen examples where, a native Chinese speaker was trying to use this to translate my notes into Chinese. So, okay, there might be some application along those lines, just trying to find ways of, translating natural language whether we know, you know, a generative AI such as we&#8217;ve been thinking about them now to do that well, I&#8217;m not in a position to say.\u00a0Conceivably, as\u00a0I\u00a0sort of thought\u00a0about before,\u00a0maybe they\u00a0can be used to\u00a0come up with, examples of toy problems\u00a0or,\u00a0simple examples. 
I guess you could say supplementing the course materials\u00a0in order to\u00a0try to come to grips with some\u00a0particular subject.\u00a0<\/p>\n\n\n\n<p>&nbsp;<br>That&#8217;s something where I can imagine&nbsp;one&nbsp;might try to use generative AI and where&nbsp;it&#8217;s&nbsp;possible that things&nbsp;won&#8217;t&nbsp;go very&nbsp;badly wrong. Apart from that, I guess I&#8217;m sort of viewing things through the framing of the subjects that I teach and my own particular interests, which basically involve the interconnectedness of a lot of technical ideas and ones that I find fascinating, so I&#8217;m going to be extra biased about that sort of thing. About, you know, learning about&nbsp;various ways&nbsp;that you can understand and measure things and structure them&nbsp;in order to&nbsp;get&nbsp;an&nbsp;idea of a bigger whole of how you can solve problems. And you&nbsp;can&#8217;t&nbsp;solve problems without being able to have the tools at hand to solve the problems. Okay.&nbsp;Some people might say, okay,&nbsp;maybe an&nbsp;AI can be such a tool, but before you can rely on a tool, you&nbsp;have to&nbsp;know how to use it well. You&nbsp;have to&nbsp;know how&nbsp;it&#8217;s&nbsp;working. You&nbsp;have to&nbsp;know what the tool is good at. So even if you want to say,&nbsp;shouldn&#8217;t&nbsp;this just be&nbsp;one&nbsp;tool among others, I&nbsp;don&#8217;t&nbsp;see a lot of evidence,&nbsp;first of all, that it is a reliable tool. The thing that you absolutely&nbsp;have to have of a tool is that you can rely on it to do the job that you would like it to do.&nbsp;Otherwise, I mean, you can use a piece of rope as a cane, but&nbsp;it&#8217;s&nbsp;not going to be&nbsp;very helpful&nbsp;to you.&nbsp;Maybe you&nbsp;just need to add enough starch or&nbsp;maybe you&nbsp;should look for something other than a piece of rope.&nbsp;And this just builds upon itself.
The way that you build expertise, the way that you can become particularly good at something is by spending a lot of time thinking about the connections between things, by looking, asking yourself questions about, you know, what does this have to do with that? You know, is that thing that people often say&nbsp;really true?&nbsp;Only by&nbsp;sort of really&nbsp;engaging yourself in something like this can you really make progress on that and really become particularly good at something. And by&nbsp;sort of devolving&nbsp;a certain amount of that process okay. Again,&nbsp;there&#8217;s&nbsp;a reason why&nbsp;I&#8217;m&nbsp;in computer science. There are some things, like summing large columns of numbers. Is it possible that we have lost something by not asking people to systematically sum large columns of numbers?&nbsp;<\/p>\n\n\n\n<p>Have we lost something important about the human experience or&nbsp;maybe just&nbsp;the management experience by devolving that to computers? Well, there will be some tasks, maybe, where&nbsp;it is useful to have a computer, not speaking of LLMs, but computers in general, having them solve problems rather than having us spending every waking moment doing lots of sums or doing something&nbsp;more or less tedious. There will be some point at which the&nbsp;trade off&nbsp;is no longer worth paying. And I believe that trade off, you know, happens well below the point where you are trying to really come to grips with a subject and with learning.&nbsp;So&nbsp;the only thing that&nbsp;one&nbsp;really wants to have a lot of for learning is a lot of&nbsp;different ways&nbsp;of trying to see something, a lot of different examples, a lot of ways of trying to approach a difficult topic. And beyond that, cooperating with others,&nbsp;that&#8217;s&nbsp;a different form of engagement.&nbsp;It&#8217;s&nbsp;a way of swapping information with somebody, ideally, people who are also similarly engaged with the subject. 
It&nbsp;doesn&#8217;t&nbsp;help, of course, for you to ask somebody who is the best in your class and then just take their answer.&nbsp;That&#8217;s&nbsp;the same sort of problem that&nbsp;one&nbsp;has with LLMs, is relying on something else without engaging on the subject yourself. The most important&nbsp;thing is to&nbsp;sort of try&nbsp;to draw the line at a point where the students are consistently engaging with the learning themselves with the difficult subjects.&nbsp;And if the things that we, if the resources that they usually have to hand&nbsp;aren&#8217;t&nbsp;quite enough, well&nbsp;that&#8217;s&nbsp;not necessarily something that you solve with LLMs. You can solve that problem by providing more&nbsp;resources generally, and&nbsp;that&#8217;s&nbsp;a larger structural problem in society, I think, that&nbsp;maybe can&nbsp;be drawn on, but that&#8217;s&nbsp;not quite about&nbsp;what LLMs are about.&nbsp;<\/p>\n\n\n\n<p><strong>Wendy&nbsp;Garnham&nbsp;(31:07):<\/strong>&nbsp;It&nbsp;sort of sounds&nbsp;a little bit like&nbsp;we&#8217;re&nbsp;saying that the purpose of education is changing so that&nbsp;it&#8217;s&nbsp;more about encouraging or supporting students to be creative problem solvers or innovative problem solvers. Would you say that is where&nbsp;we&#8217;re&nbsp;heading?&nbsp;<\/p>\n\n\n\n<p><strong><strong>Jacqueline<\/strong> De\u00a0Beaudrap:<\/strong>\u00a0I\u00a0don&#8217;t\u00a0know if I can say where we are\u00a0actually heading. I think, obviously,\u00a0it&#8217;s\u00a0good to be a good creative problem solver. And there\u00a0has always been,\u00a0I guess\u00a0there&#8217;s\u00a0the question of\u00a0whether or not\u00a0originally\u00a0we spent more time doing things by rote. 
You know, you solve an integral and you\u00a0don&#8217;t\u00a0ask why\u00a0you&#8217;re\u00a0solving the integral.\u00a0We&#8217;ve\u00a0had tools to help people with things like, you know, complex calculations &#8211; the, you know, slightly annoying, picky, fiddly details &#8211; for a long time.\u00a0So, for example, a very, very miniature example of the same thing that we have with LLMs in mathematics is the graphing calculator, where you had something that was a miniature computer. It\u00a0wouldn&#8217;t break\u00a0the bank, although\u00a0it&#8217;s\u00a0not as though everybody could afford it. But you could punch into it, ask it to solve a derivative, to solve an integral, to plot a graph. All sorts of things that, once upon a time, were solely the domain of people &#8211; like before the 1950s &#8211; and, you know, particularly how to sketch a graph\u00a0was something that was\u00a0actually taught. Here are the skills, here are the ways that you\u00a0can, not\u00a0precisely draw what the graph is like, but have\u00a0a good idea\u00a0of what\u00a0it&#8217;s\u00a0like; it would give you a good qualitative understanding. And even now, even though I do not draw very many graphs myself, the fact that I was taught that means that I have certain intuitions about functions that maybe somebody who\u00a0hasn&#8217;t\u00a0been taught how to do that\u00a0wouldn&#8217;t\u00a0have. Now does that mean that I think that, you know,\u00a0basically everybody\u00a0should always be doing everything all the time with pencil and paper? No. There will be some sort of\u00a0trade offs.\u00a0<\/p>\n\n\n\n<p>It&#8217;s a question, you know, with the creative problem solving.&nbsp;It&#8217;s&nbsp;good to have people spend more of their time and energy trying to solve problems creatively, to think about things, engage with them, rather than being constantly boiled down, you know, caught up in doing things by rote.
Now as far as the creative part goes, well you know, if you put the students in a situation where they are tempted to put the creative part into basically the hands &#8211; the digital hands, the metaphorical hands &#8211; of a text generator, then it&#8217;s not going to teach them how to engage with their subject in a way that they could deal with it creatively.&nbsp;It&#8217;s&nbsp;like most things: you&nbsp;have to&nbsp;know what the rules of the game are before you can interpret them well or before you can know where you can break them. True creation is&nbsp;coming up with&nbsp;something which&nbsp;hasn&#8217;t&nbsp;been seen before, where you realize that&nbsp;actually&nbsp;we can do things differently &#8211; and, of course, in mathematics and computer science,&nbsp;you&#8217;re&nbsp;concerned also with whether&nbsp;it&#8217;s&nbsp;technically correct.&nbsp;<\/p>\n\n\n\n<p>The rules that we have in place&nbsp;aren&#8217;t&nbsp;in place because it is the only correct way to do things. It is because it is a way that we can do things that we are confident will work out. That&nbsp;doesn&#8217;t&nbsp;mean that is the only way that things can be done. But you can only see these things if you know how the system was set up in the first place. Only if you know how to work with the existing tools that we have, through the mastery of them, can you figure out how you can do things differently in a way that will work well.&nbsp;And if you always basically devolve all the hard stuff, the so called hard stuff, to a computer, that&#8217;s, of course, not going to tell you anything new. Certainly not if it accidentally spits out something at random that is sort of a novel random combination of symbols; it won&#8217;t be able to tell you why it should work in any way that you can be confident of.
You need to be able to have the engagement with the subject itself&nbsp;in order to&nbsp;even recognize anything that could be an accidental novelty.&nbsp;<\/p>\n\n\n\n<p><strong>Heather&nbsp;Taylor&nbsp;(34:59):<\/strong>&nbsp;Same question&nbsp;to you then, Paul. Do you think there are benefits of generative AI for student learning? If so, how can universities help students use these tools in ways that are ethical and supportive of their development as learners?&nbsp;<\/p>\n\n\n\n<p><strong>Paul&nbsp;Gilbert:<\/strong>&nbsp;So my&nbsp;instinctive answer is, honestly, no. I&nbsp;don&#8217;t&nbsp;think there are benefits. Right?&nbsp;Maybe there&nbsp;are, but I&nbsp;haven&#8217;t&nbsp;been shown them yet. Right?&nbsp;<\/p>\n\n\n\n<p>And I think the reason I want to say that is because there is so much hype and so much of a framing of inevitability about these discussions that frequently people are saying, well, if we haven&#8217;t figured out how they&#8217;re going to improve student learning, don&#8217;t worry, it&#8217;ll come. Right? And&nbsp;we&#8217;ll&nbsp;know. If&nbsp;you&#8217;re&nbsp;going to change pedagogy and change the curriculum and change how we teach, show us first how&nbsp;it&#8217;s&nbsp;better. Right?&nbsp;&nbsp;I think&nbsp;that&#8217;s&nbsp;a reasonable way round to do that. A tech journalist &#8211; I think&nbsp;it\u2019s&nbsp;Edward Ongweso, or&nbsp;possibly another&nbsp;journalist &#8211; talks about the way people tend to treat LLMs as stillborn gods. In other words, these are like perfect&nbsp;all powerful&nbsp;things that just&nbsp;aren&#8217;t&nbsp;quite there&nbsp;yet,&nbsp;right?&nbsp;So&nbsp;if you just hang on,&nbsp;they&#8217;ll&nbsp;improve our learning.
Right?&nbsp;And we saw this in the discussions&nbsp;we&#8217;ve&nbsp;had at the university: the Russell Group principles on AI make a whole series of claims about how AI can improve student learning without a single example or footnote or reference. Right? And yet we can find pedagogical research showing that it can&nbsp;actually undermine&nbsp;confidence, undermine motivation, all these kinds of things. Right? So, until people can show me that&nbsp;it&#8217;s&nbsp;good, my instinct is to say no.&nbsp;<\/p>\n\n\n\n<p>And I know, you know, to dig into that a little bit deeper, I know there are claims made about how it can be important for assistive technologies. And it might be that&nbsp;that&#8217;s&nbsp;true in certain cases. But there&#8217;s equally evidence that it can be disabling.&nbsp;And there are some colleagues in Global, Lara Coleman and Stephanie Orman, working on this. And part of this is also about&nbsp;what&nbsp;it means to learn.&nbsp;&nbsp;<\/p>\n\n\n\n<p>So&nbsp;something I sometimes go through with my students when&nbsp;I&#8217;m&nbsp;trying to explain what we want from them, why giving example after example after example after example&nbsp;won&#8217;t&nbsp;get you a first.&nbsp;&nbsp;And I show them that Bloom&#8217;s&nbsp;Taxonomy&nbsp;Triangle. I&nbsp;don&#8217;t&nbsp;know if you&#8217;ve come across that in teacher&nbsp;training and stuff,&nbsp;where it starts at the bottom: the bottom layer is like recall. You show &#8211; you&nbsp;evidence &#8211; this learning by just reproducing something&nbsp;you&#8217;ve&nbsp;memorised, and it goes on to understanding.&nbsp;&nbsp;You show, you know, what it means, how it works, and then you&nbsp;kind of apply&nbsp;your knowledge to new situations, draw connections between it, evaluate it, create something new. Right? And&nbsp;so&nbsp;this is a useful tool to show, like, if you keep just building loads of the bottom layer with more examples,&nbsp;you&#8217;re&nbsp;not going to get higher than a&nbsp;2:2.
Right?&nbsp;&nbsp;<\/p>\n\n\n\n<p><strong>Heather Taylor:<\/strong>&nbsp;That\u2019s&nbsp;very good&nbsp;to&nbsp;know, by&nbsp;the way.&nbsp;&nbsp;<\/p>\n\n\n\n<p><strong>Paul&nbsp;Gilbert:<\/strong>&nbsp;And equally, if you go straight to, like,&nbsp;I&#8217;m&nbsp;going to create something new at the top, but you&nbsp;haven&#8217;t&nbsp;built a foundation of knowledge, then&nbsp;you&#8217;ve&nbsp;got a&nbsp;rubbish&nbsp;pyramid that will fall over.&nbsp;<\/p>\n\n\n\n<p><strong>Heather Taylor:<\/strong>&nbsp;What&#8217;s that called again?&nbsp;<\/p>\n\n\n\n<p><strong>Paul&nbsp;Gilbert:<\/strong>&nbsp;Bloom&#8217;s&nbsp;Taxonomy.&nbsp;It&#8217;s&nbsp;quite an old, like, constructivist pedagogy example now.&nbsp;<\/p>\n\n\n\n<p><strong>Heather Taylor:<\/strong>&nbsp;It&#8217;s&nbsp;very useful, though.&nbsp;<\/p>\n\n\n\n<p><strong>Wendy Garnham:<\/strong>&nbsp;Put&nbsp;it into ChatGPT.&nbsp;<\/p>\n\n\n\n<p><strong>Heather&nbsp;Taylor:<\/strong>&nbsp;Yeah. I&nbsp;like it&nbsp;&#8211;&nbsp;&nbsp;\u2018You&#8217;re just at the bottom, mate\u2019.&nbsp;<\/p>\n\n\n\n<p><strong>Paul\u00a0Gilbert\u00a0(38:16):<\/strong>\u00a0But\u00a0it&#8217;s\u00a0also\u00a0kind of useful to show that just learning things and regurgitating them is not what\u00a0we&#8217;re\u00a0looking for.\u00a0That&#8217;s\u00a0not the kind of higher processes of learning. Right? And\u00a0I&#8217;ve\u00a0been a bit disappointed to see people embrace the idea that LLMs can be\u00a0really useful\u00a0in creating study guides or creating summaries of articles and so on.
Well, because what\u00a0you&#8217;re\u00a0doing there is\u00a0you&#8217;re\u00a0outsourcing the process of creating the understanding, creating the knowledge object that moves you up the pyramid.\u00a0And if, instead of understanding a text, you ask ChatGPT to summarise it for you, you&#8217;ve essentially just stuck yourself in the remember-and-regurgitate bit lower down, because you&#8217;ve made it shorter, but you&#8217;ve not done that work to generate the understanding and the applications and make those connections with other readings you&#8217;ve had and the kind of things\u00a0Jacqueline\u00a0was talking about. And\u00a0so\u00a0I&#8217;m\u00a0really cautious\u00a0of a lot of the hype and inevitability framing around the idea that these are going to be great for assistive technologies,\u00a0they&#8217;re\u00a0going to improve people&#8217;s\u00a0learning and stuff.\u00a0If\u00a0that&#8217;s\u00a0the case, evidence it before spending\u00a0loads of money and reshaping higher education around it. And I think, you know, something else I wanted to add on to that is, you know, you were also talking a bit about how there are potentially maybe cases where LLMs could be useful.\u00a0Right? You could create little\u00a0problems\u00a0or, you know, okay. Sure. And there was an article recently in the Teaching and Learning Anthropology journal that sort of made this case. And superficially, when you start reading it, you think, okay,\u00a0they&#8217;re\u00a0talking about something\u00a0similar to\u00a0what\u00a0we&#8217;re\u00a0talking about here. Right?\u00a0They&#8217;re\u00a0saying, we want to reject this model of education\u00a0that&#8217;s\u00a0just regurgitating facts. Right? I think\u00a0we&#8217;re\u00a0all on that page.\u00a0And ChatGPT is an opportunity to get students to interrogate ideas, to ask\u00a0what&#8217;s\u00a0wrong with things, to engage in dialogue.
And they even use the language of the Brazilian educator Paulo Freire to criticise the banking model, right, where you just pour ideas into the passive student&#8217;s head and they vomit it out. Right? None of us want to be doing that. But what kind of got me a bit frustrated with that paper is the other half of what Paulo Freire was talking about\u00a0&#8211; banking education is bad. What you want is critical pedagogy, where people come to an understanding of their place in the world and the structures that create oppression and unfreedom, so they can use that knowledge to make a better world, to achieve liberation. Right? And it genuinely boggles my mind that people will invoke Freire in a discussion of ChatGPT and not mention the political economy of AI.\u00a0<\/p>\n\n\n\n<p>Right?&nbsp;It&#8217;s&nbsp;deeply frustrating that&nbsp;there&#8217;s&nbsp;this sort of sense that, oh, well,&nbsp;it&#8217;s&nbsp;incidental to this&nbsp;whole discussion about education that&nbsp;we&#8217;re&nbsp;talking about some of the greatest concentration of wealth and power in history. Right? That is also premised on an&nbsp;absolutely insane&nbsp;expansion of energy and water usage. And&nbsp;it&#8217;s&nbsp;hardwired into the model. Because of this understanding that these models are, you know, stillborn gods and&nbsp;they&#8217;re&nbsp;going to be perfect when we get them right,&nbsp;that&#8217;s&nbsp;the justification for Zuckerberg, Sam Altman, all of them. Their model is all about scale,&nbsp;more and more,&nbsp;bigger&nbsp;and bigger, more data centres, more NVIDIA processing units.
And that means more energy usage, and it means more water to cool those.&nbsp;&nbsp;<\/p>\n\n\n\n<p>And&nbsp;so&nbsp;I think last year, there was a study that showed worldwide AI data centre usage emitted the same amount of carbon as Brazil, right, which is a big&nbsp;agro&nbsp;industry emitter.&nbsp;There have also been studies from UC Riverside suggesting that by 2 years&#8217; time, the worldwide data centre freshwater usage will be about half of UK water usage. Right? And&nbsp;that&#8217;s&nbsp;concentrated in certain areas, typically in&nbsp;low income&nbsp;areas. And&nbsp;all of&nbsp;those studies were done before the last few weeks. Zuckerberg has announced a 23,000,000,000 investment to expand data centres. OpenAI is trying to spend 500,000,000,000 building data centres, mostly running off fossil fuels. Right? Mostly&nbsp;located&nbsp;in low income, often water stressed environments. Right? So if that is the pathway to finally getting it right so that this model works, right, and, oh, yeah, we&#8217;ll iron out those hallucinations and it won&#8217;t give you fake references anymore, genuinely, what is wrong with you that you think that is a good pathway to an educational future? Right? Like, I&nbsp;don&#8217;t&nbsp;understand it. And Sussex has&nbsp;these sorts of ambitions&nbsp;to be&nbsp;one&nbsp;of the world&#8217;s most sustainable universities and everything, and you&nbsp;can&#8217;t&nbsp;bracket that and pretend like&nbsp;it&#8217;s&nbsp;not applicable to engagement with technologies that have this political economy, that have this political ecology.&nbsp;<\/p>\n\n\n\n<p>And that political economy follows exactly from the claims about what they can do. Right?&nbsp;All of&nbsp;the claims about their magical power&nbsp;are&nbsp;based on scale. Right?
These things are powerful because&nbsp;we&#8217;ve&nbsp;trained them on more data than you can&nbsp;possibly imagine.&nbsp;And we have more data centres, powering the models that are responding to more queries than ever &#8211; you know, you&nbsp;couldn&#8217;t&nbsp;even imagine the scale of it. Right. Great. That pathway to refining these models is essentially&nbsp;ecocidal.&nbsp;Right? This is before you even get into the labour stuff and the fact that, you know, I really dislike the language of the&nbsp;Cloud because it implies a virtualism. Oh, The&nbsp;Cloud. Right? The&nbsp;Cloud is made of&nbsp;copper and plastic and lithium and silicon&nbsp;and stuff that is ripped out of the Earth.&nbsp;&nbsp;<\/p>\n\n\n\n<p><strong><strong>Jacqueline<\/strong> De\u00a0Beaudrap\u00a0(44:02):<\/strong>\u00a0So\u00a0it\u00a0invites you to think about it in an extremely vague way.\u00a0<\/p>\n\n\n\n<p><strong>Paul&nbsp;Gilbert:<\/strong>&nbsp;Yeah. Right. Whereas, actually, what&nbsp;it is is data centres,&nbsp;largely in&nbsp;low income,&nbsp;water stressed communities. Right? Last week, there was an FT&nbsp;leader: Zuckerberg seeking 23,000,000,000 from private equity to expand his data centres.&nbsp;And&nbsp;there&#8217;s a story on the tech journalism website 404 Media on the same day about a community in Louisiana, one of the poorest towns in the state, that is going to have their utility bills go through the roof because a bunch of data centres are being built by OpenAI, which require the construction of new gas plants, and all those costs have to be paid for by&nbsp;someone, and it&#8217;s probably not going to be OpenAI. Right?&nbsp;Yeah. That is the sort of backstage on which these magical LLMs are unfolding.
Right?&nbsp;And I think if you&#8217;re willing to have a discussion about the educational benefit of these tools without situating them in that political economy, that political ecology, you know, certainly, as someone who works in a kind of development studies department, that&#8217;s like a massive, you know, intellectual and moral failing.&nbsp;&nbsp;<\/p>\n\n\n\n<p><strong>Heather&nbsp;Taylor:<\/strong>&nbsp;So, essentially, even&nbsp;if &#8211; so, I mean, like you were saying,&nbsp;it&#8217;s&nbsp;all hypotheticals about whether eventually they can get AI to be magic. And there&#8217;s also obviously lots of, even before you go thinking about the environmental implications, there&#8217;s lots of implications of, if you could make AI that perfect, what&#8217;s going to happen to people&#8217;s jobs, and, you know, there&#8217;s all that side of things as well. But so, essentially, even if &#8211; this is a question &#8211; even if AI were to be magic eventually, yeah, and do everything that they want it to do, or that we theoretically want it to do &#8211; I have no idea what I want it to do &#8211; the cost would be the world burning.&nbsp;<\/p>\n\n\n\n<p><strong>Paul&nbsp;Gilbert:<\/strong>&nbsp;Yeah,&nbsp;some people&nbsp;don&#8217;t&nbsp;want to have serious discussions about that.&nbsp;That&#8217;s&nbsp;fine. But then just, you know,&nbsp;you&#8217;re&nbsp;not a serious person, I guess.&nbsp;<\/p>\n\n\n\n<p><strong>Heather Taylor:<\/strong>&nbsp;I&nbsp;didn&#8217;t&nbsp;&#8211;&nbsp;I knew that you had environmental&nbsp;&#8211;&nbsp;honestly, this shows my ignorance, really. But I knew that there&nbsp;were&nbsp;environmental consequences to AI. I did not know it was this deep or where the consequences were being worst felt, which is horrible.&nbsp;<\/p>\n\n\n\n<p><strong>Paul&nbsp;Gilbert:<\/strong>&nbsp;Yeah.
And, you know,&nbsp;it&#8217;s&nbsp;utterly bizarre that a lot of data centres are being built in some of the most arid parts of&nbsp;The&nbsp;US, which are already water stressed. So aside from the massive ecological consequences of drawing down further freshwater, diverting it, I&nbsp;don&#8217;t&nbsp;know. What has happened historically in The US when&nbsp;you&#8217;ve&nbsp;had water diversion for industry, especially&nbsp;agro&nbsp;industry, is massive fire risk. Right?&nbsp;&nbsp;We&#8217;ve&nbsp;all seen what happens to California in the summer now. And now loads of data centres are being built across the arid parts of California, and they will not work. They will catch fire and shut down, and the models will stop working if&nbsp;they&#8217;re&nbsp;not cooled with vast quantities of fresh water. Right?&nbsp;So&nbsp;the guys building them have every intention of cooling them down with vast quantities of fresh water.&nbsp;<\/p>\n\n\n\n<p>You&nbsp;don&#8217;t&nbsp;spend $500,000,000,000 on a data centre package if you are comfortable with it melting straight away. Right? So there is a genuinely huge ecological threat and the livelihood threat associated with that, which almost always lands on the most marginalised communities because, you know, when people dump massively polluting and water stress creating&nbsp;industries, they don&#8217;t usually do it in the most affluent neighbourhoods, right, because those people are well organised and well networked and everything. And,&nbsp;yeah, this is a serious part of it. 
And I think, with the rush to scale,&nbsp;everyone just seems to have accepted that the only AI future is the&nbsp;one&nbsp;that we&#8217;re allowing Zuckerberg and Altman and people to lay out for us, in which 3 or 4 giant companies compete to buy up all of the world&#8217;s NVIDIA chips and create more and more data centres.&nbsp;Right?&nbsp;Maybe there&nbsp;are things that AI can do differently that&nbsp;don&#8217;t&nbsp;require this&nbsp;more and more, bigger&nbsp;and&nbsp;bigger scale of operation. But if&nbsp;that&#8217;s&nbsp;the path&nbsp;we&#8217;re&nbsp;going down, right,&nbsp;<\/p>\n\n\n\n<p><strong>Heather&nbsp;Taylor:<\/strong>&nbsp;And&nbsp;there&#8217;s&nbsp;already an energy use problem, though, isn&#8217;t there? If&nbsp;there&#8217;s&nbsp;already an energy use problem, you know, then&nbsp;we&#8217;re&nbsp;kind of using&nbsp;energy for something we&nbsp;don&#8217;t&nbsp;need, because we&nbsp;didn&#8217;t&nbsp;have it a little while ago.&nbsp;So&nbsp;it&#8217;s&nbsp;not something that we need.&nbsp;You know?&nbsp;Yeah. So even before we think about making it bigger, the fact that&nbsp;it&#8217;s&nbsp;even in existence now is quite a concern.&nbsp;<\/p>\n\n\n\n<p><strong>Paul&nbsp;Gilbert:<\/strong>&nbsp;And this is also why I find this sort of inevitability frame that people use to talk about this so troubling. Right? Whenever someone uses the framing of inevitability to talk about&nbsp;a new technology,&nbsp;they&#8217;ve&nbsp;got an interest in it. Right? Because it hurries things up and it presents people who have questions as, you know, just getting in the way, because this is going to happen, right?&nbsp;&nbsp;So&nbsp;get on board.
And this is explicitly what&nbsp;we&#8217;re&nbsp;hearing from our local MP, our science and technology minister.&nbsp;<\/p>\n\n\n\n<p><strong>Heather&nbsp;Taylor:<\/strong>&nbsp;That&#8217;s what I thought, and&nbsp;I&#8217;ve&nbsp;not got any stakes in it.&nbsp;&nbsp;<\/p>\n\n\n\n<p><strong>Paul&nbsp;Gilbert:<\/strong>&nbsp;Things are made to be inevitable because powerful actors tell you&nbsp;they&#8217;re&nbsp;inevitable.&nbsp;There&#8217;s&nbsp;nothing inherently inevitable about this.&nbsp;&nbsp;Most people&nbsp;don&#8217;t&nbsp;even know what it can do when it works, and yet&nbsp;we&#8217;re&nbsp;accepting&nbsp;it&#8217;s&nbsp;inevitable. Right? I think&nbsp;that&#8217;s&nbsp;<\/p>\n\n\n\n<p><strong>Wendy&nbsp;Garnham:<\/strong>&nbsp;Is there another side to the inevitability, though, which is that&nbsp;it&#8217;ll&nbsp;eventually fold in on itself, because eventually&nbsp;you&#8217;ll&nbsp;be feeding the machines with information that it itself has generated? I mean, is that a possibility?&nbsp;<\/p>\n\n\n\n<p><strong>Paul&nbsp;Gilbert:<\/strong>&nbsp;Possibly. I mean, the Internet is already so full of slop. Right? Just AI generated garbage. And there are examples of that. I&nbsp;can&#8217;t&nbsp;remember who it was earlier in the discussion who was talking about translation. Right? And some of the folks at the Distributed AI Research Institute, Timnit Gebru and colleagues, looked at these claims that Meta and others have made about massive natural language processing models that could translate 200 languages. Right?&nbsp;<\/p>\n\n\n\n<p>And they looked at a subset of African languages, which the research team spoke, and they found that it was&nbsp;really bad. Right? But it was also bad because it had been trained on websites that were translated by Google Translate. Right? And so when you gave it vernacular Twi, it was just absolute rubbish that came out. Right?
So that&#8217;s already happening. And then you think: well, what was that? Like, what is it for? It&#8217;s just the kind of thing you were talking about: neophilia. Right? People want new things. People want to move fast and break things &#8211; but why? What benefit is this going to bring us, and is it worth it?<\/p>\n\n\n\n<p><strong>Jacqueline De Beaudrap (50:40):<\/strong> Yeah. Yeah. It&#8217;s the slogan of a particular company to move fast and break things, and they had reasons for wanting to do that. It&#8217;s because it made them money.<\/p>\n\n\n\n<p><strong>Paul Gilbert:<\/strong> Yeah. Yeah. And speaking about those companies, you&#8217;re saying this is a recent thing. A few years ago, the Silicon Valley giants were presenting themselves as green. Right? Go back a couple of decades: Google&#8217;s slogan was &#8216;Don&#8217;t Be Evil&#8217;, which I think just became funny after a while. But a few years ago, Microsoft was promising they weren&#8217;t only going to be carbon neutral; they were going to become carbon negative. Right? Really leaning into renewable energy and carbon capture and storage, which is a whole other story about how it may not actually work. But, you know, there was this sort of green vibe they were going for. After the boom in LLMs from the ChatGPT launch, they&#8217;ve all just chucked their carbon neutral policies straight out the window, and it&#8217;s back to coal, back to gas, because we need those data centres. Right?
Is this a good time to be doing that? When this is of unclear utility to us, potentially poses a threat to jobs, and can&#8217;t do half of the things&#8230;<\/p>\n\n\n\n<p><strong>Heather Taylor:<\/strong> By the way, it&#8217;s boiling in here.<\/p>\n\n\n\n<p><strong>Paul Gilbert:<\/strong> Yeah. Yeah. So, you know, can we help students use these tools in ways that are ethical? Well, we&#8217;ve got to ask: are these good for our students? Right? Can we actually evidence that, not assume it because someone has told us it&#8217;s inevitable? And once we&#8217;ve figured that out, is it worth it? Right? There&#8217;s a whole bunch of things we can all do that make our lives easier. Are they all worth it?<\/p>\n\n\n\n<p><strong>Wendy Garnham (52:15):<\/strong> <br>So that brings us to our last question. Jacqueline, I&#8217;m going to direct this to you first. Obviously, educators will have varying views on AI in higher education, but for now it is something that we all must contend with. So, with that in mind, what advice would you give to colleagues in terms of AI in higher education?<\/p>\n\n\n\n<p><strong>Jacqueline De Beaudrap:<\/strong> Apart from acknowledging that it&#8217;s there, and maybe encouraging colleagues to talk with students about AI, the things that it purports to offer, and how it might fall short, the main thing that I would do is basically not use it yourself; not feed into it.
Like, the more that we try to use these tools to cut corners in our own work, the more we provide a bad model for the students. I think students can tell when you put some effort into anything, from designing a script for your lectures to a slide deck or a diagram. If they see it modelled for them that it is quite normal to just ask a computer to spit something out according to some spec, they&#8217;re just going to think of it as a thing that one should do. It&#8217;s a thing that I systematically don&#8217;t do. I&#8217;ve never used any of these tools, because I&#8217;m not interested in feeding a machine that I think is going to undermine the things that I care about. I would encourage colleagues to ask themselves if they actually want to be using a machine that&#8217;s going to have this sort of effect.<\/p>\n\n\n\n<p><strong>Wendy Garnham:<\/strong> <br>Right. Same question to you, Paul. What advice would you give to colleagues in terms of AI in higher education?<\/p>\n\n\n\n<p><strong>Paul Gilbert:<\/strong> <br>I think my answer is quite similar to Jacqueline&#8217;s. I fully agree. You know, don&#8217;t use it and then expect your students not to. Yeah. Come on. I have used it twice, to show my second years how terrible the answer to one of the assignments would be if they asked ChatGPT, and why they are better than it. Right? And I think what we can do, as well as getting our students to think about the data infrastructures behind this, its limitations, what it can&#8217;t tell them, how not smart it is, and how destructive it can be, is put more work into highlighting for our students how much better they are than these models.
Right? There&#8217;s all this discussion of, like, oh, we&#8217;re going to replace lawyers and radiologists and stuff. And I&#8217;m not sure how true that all is. If you replace all the junior lawyers so no one has to read through court documents, who&#8217;s going to be a senior lawyer in five years? Right? You&#8217;ve got to have the foundations. Right?<\/p>\n\n\n\n<p>So, again, there&#8217;s a lot of inevitability framing and hype, which I think we need to cut through. But, also, we have a lot to offer our students in higher education, and they have a lot to offer us that cannot be replicated, reproduced, or displaced by ChatGPT, and we should find space in the classroom to emphasise that. Right? Even if that means explicitly saying: look, this is garbage. It&#8217;s powerful and it&#8217;s big and it&#8217;s fast and it looks shiny, but you are smarter than it. Right? And I&#8217;m not just saying that: genuinely, my students produce better stuff than AI could, and I think that&#8217;s true for a lot of us, right? We need to give them that trust and meet them on that terrain, rather than assuming they&#8217;re all just itching to fake their essays. Yeah. Some people will, and that&#8217;s always been the case, and it will continue ever thus, right? There&#8217;s always been plagiarism and personation. We can&#8217;t stop it, right? But we can highlight the things that AI can never do and encourage our students to value that in themselves.<\/p>\n\n\n\n<p><strong>Jacqueline De Beaudrap:<\/strong> There&#8217;s something that I could add, by the way.
So there&#8217;s something that I&#8217;ve been thinking about. It feels a bit weird to &#8216;yes, and&#8217; what Paul was saying about the ecological and the sociological impact, which, as far as I&#8217;m concerned, should be the conversation stopper in terms of the ethical use of AI. But about the notion that these are stillborn gods that we can just work to improve: well, this is part of the technophilic impulse in the computer industry generally. We can look at other things that have happened at scale. For example, Moore&#8217;s Law, where computing power became larger and larger and computers became faster and faster. This didn&#8217;t always make our software better. In fact, it made it worse in a lot of respects, because people stopped valuing writing code well. So even as things scale up, that isn&#8217;t going to be a guarantee of an improvement in quality. In fact, if the past is any indication, things will get worse. So even after burning the world, we may not have anything particularly nice to show for it.<\/p>\n\n\n\n<p><strong>Wendy Garnham:<\/strong> Simon, as a learning technologist, do you want to add anything about the role of AI in higher education before we close our podcast?<\/p>\n\n\n\n<p><strong>Simon Overton (57:34):<\/strong> The only thing that I would say, which I feel is perhaps a little bit hopeful, and it&#8217;s not just limited to education, is that I believe the use of AI and the proliferation of slop (great band name, by the way) is going to lead us to value things that are real a lot more.
When I was at university, I really loved the essay &#8216;The Work of Art in the Age of Mechanical Reproduction&#8217;, by Walter Benjamin, I think, which was about how people were worried that if we could produce posters of, you know, the Van Gogh Sunflowers, we wouldn&#8217;t value the original anymore. But that&#8217;s not what happened. It actually became more and more valuable. So I think, and I hope, and I believe that it&#8217;s all quite new and quite scary for us now, but it will encourage us to value the things that Paul mentioned just now, the things that can come out of the time and the relationships that we establish in higher education. So I think that ultimately that&#8217;s probably a good thing, even though it looks kind of scary.<\/p>\n\n\n\n<p><strong>Heather Taylor:<\/strong> I would like to thank our guests, Jacqueline and Paul.<\/p>\n\n\n\n<p><strong>Jacqueline De Beaudrap:<\/strong> Thank you.<\/p>\n\n\n\n<p><strong>Paul Gilbert:<\/strong> Thank you.<\/p>\n\n\n\n<p><strong>Heather Taylor:<\/strong> And thank you for listening. Goodbye. This has been the Learning Matters podcast from the University of Sussex, created by Sarah Watson, Wendy Garnham, and Heather Taylor, and produced by Simon Overton. For more episodes, as well as articles, blogs, case studies, and infographics, please visit <a rel=\"noreferrer noopener\" href=\"https:\/\/blogs.sussex.ac.uk\/learning-matters\/\" target=\"_blank\">blogs.sussex.ac.uk\/learning-matters<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The Learning Matters Podcast captures insights into, experiences of, and conversations around education at the University of Sussex.
The podcast is hosted by&nbsp;Prof Wendy Garnham&nbsp;and&nbsp;Dr Heather Taylor. It is recorded monthly, and each month is centred around a particular theme.<span class=\"ellipsis\">&hellip;<\/span><\/p>\n<div class=\"read-more\"><a href=\"https:\/\/blogs.sussex.ac.uk\/learning-matters\/2026\/01\/30\/episode-13-generative-ai-and-higher-education\/\">Read more &#8250;<\/a><\/div>\n<p><!-- end of .read-more --><\/p>\n","protected":false},"author":343,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"spay_email":""},"categories":[98307],"tags":[123704],"jetpack_featured_media_url":"","_links":{"self":[{"href":"https:\/\/blogs.sussex.ac.uk\/learning-matters\/wp-json\/wp\/v2\/posts\/1616"}],"collection":[{"href":"https:\/\/blogs.sussex.ac.uk\/learning-matters\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.sussex.ac.uk\/learning-matters\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.sussex.ac.uk\/learning-matters\/wp-json\/wp\/v2\/users\/343"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.sussex.ac.uk\/learning-matters\/wp-json\/wp\/v2\/comments?post=1616"}],"version-history":[{"count":4,"href":"https:\/\/blogs.sussex.ac.uk\/learning-matters\/wp-json\/wp\/v2\/posts\/1616\/revisions"}],"predecessor-version":[{"id":1659,"href":"https:\/\/blogs.sussex.ac.uk\/learning-matters\/wp-json\/wp\/v2\/posts\/1616\/revisions\/1659"}],"wp:attachment":[{"href":"https:\/\/blogs.sussex.ac.uk\/learning-matters\/wp-json\/wp\/v2\/media?parent=1616"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.sussex.ac.uk\/learning-matters\/wp-json\/wp\/v2\/categories?post=1616"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.sussex.ac.uk\/learning-matters\/wp-json\/wp\/v2\/tags?post=1616"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}