
The Learning Matters Podcast captures insights into, experiences of, and conversations around education at the University of Sussex. The podcast is hosted by Prof Wendy Garnham and Dr Heather Taylor. It is recorded monthly, and each month is centred around a particular theme. The theme of our twelfth episode is ‘talking to students about AI’ and we hear from Professor Thomas Ormerod and Professor Carol Alexander.
Recording
Listen to the recording of Episode 12 on Spotify
Transcript
Wendy Garnham & Heather Taylor:
Welcome to the Learning Matters podcast from the University of Sussex, where we capture insights, experiences, and conversations around education at our institution and beyond. Our theme for this episode is Talking to Students about Generative AI. And our guests are Professor Carol Alexander, Professor of Finance, and Professor Thomas Ormerod, Professor of Psychology. Our names are Wendy Garnham and Heather Taylor, and we are your presenters today. Welcome everyone.
Heather Taylor:
Carol, how are you currently talking to students about generative AI, and what kinds of support or guidance do you provide to help them use it responsibly?
Carol Alexander:
Well, I teach two modules this term, one graduate and one third-year undergraduate. In the third-year undergraduate module, using generative AI is one of the learning outcomes. I teach them how to use ChatGPT or Claude. I prefer not to use Claude; it's not great for finance. For finance, ChatGPT is clearly the best. I teach them ChatGPT in workshops, at least the chap who's running the workshops does. We design the workshops so that, at the beginning, they understand the importance of context engineering: setting your project instructions well, or at least setting your basic personalisation and then project instructions, keeping things tidy with projects, and managing so-called memory. It's not actually memory, because every time you send a prompt, the entire previous conversation is attached to it. So it's about realising that material early in the conversation can get lost, the importance of recapping it, and telling it to forget certain things, because the context window has a certain size, a certain number of tokens – tokens here means bits of words and so forth. You might put in a very inefficient prompt, which is a few words long, and then it responds with ten pages because it doesn't really know what you want. That is what you don't want to do. You want to engineer each prompt to get an efficient output, not just efficient from the point of view of the students' learning, but also from the point of view of energy consumption. We go through all that, and then in the context of the financial risk management module, they do their workshops. Not always using ChatGPT – there are specialist GPTs, there are Excel AIs, and various other things. I've built a GPT called a Socratic learning scaffold, which helps them to learn anything new. And then I just set the project, which is 70% of the assessment. You know, what used to take days now takes hours, because I use my Socratic learning scaffold – and it doesn't have to be Socratic, that's just, you know, getting them to understand a bit of philosophy. Anyway, they must think out of the box, whereas in the past I would not dare to set a project exercise that they hadn't already experienced something very similar to in workshops or lectures. But now I'm setting them something that they've not actually done before. It's related, but it's a little bit more advanced than what we've done, because they can teach themselves to do it, and that's the first step of this project.
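As a rough sketch of the "so-called memory" point Carol makes here, the snippet below shows how every new prompt carries the whole previous conversation with it, so the context window fills up with tokens unless you recap or trim. It assumes the open-source tiktoken tokeniser; the encoding name, tutor wording and prompts are illustrative, not taken from her actual setup.

```python
# Minimal sketch: counting how many tokens each turn actually sends.
# Assumes the open-source `tiktoken` package; encoding name and messages are illustrative.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # tokeniser used by recent OpenAI chat models

conversation = [
    {"role": "system", "content": "Project instructions: you are a finance tutor. "
                                   "Always ask one clarifying question before answering."},
]

def tokens_in(messages):
    """Rough token count over the message contents."""
    return sum(len(encoding.encode(m["content"])) for m in messages)

def send(user_prompt):
    # The entire history is attached to every request, which is why material
    # from early in the conversation can fall out once the window is full.
    conversation.append({"role": "user", "content": user_prompt})
    print(f"Tokens sent this turn: {tokens_in(conversation)}")
    # ...call your LLM here and append its reply to `conversation`...

send("Explain value-at-risk, step by step.")
send("Recap what we have agreed so far in three bullet points.")
```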
Wendy Garnham:
Sounds as though it’s like fostering creativity. And the idea of sort of getting them to take what they’ve learned before, but to build on it and do new things with some of that learning.
Carol Alexander:
Well, I mean, the project starts off with a paragraph: are you interview ready? Because they're third years. And the job market out there – we can get on to talking about that later if you like – but, I mean, it's awful for white-collar jobs, particularly, you know, finance in the City. Graduate roles are being replaced with AI; one person can do the output of five now. But if they are competent with playing this piano, not just having the piano, but making a very nice melody from it and knowing how to tune it for whatever they're going to play, they stand a much better chance of getting a job. And this project itself, I hope, will be attached to their CV, so in some way they can use it when they apply for jobs.
Heather Taylor:
Yeah. Amazing. Do they like doing it? Have you had good feedback?
Carol Alexander:
Because I record all the lectures, for my sins. Although I'm also going to upload them to update my YouTube channel, because that's very out of date; I did that during COVID, so it's now, like, five years out of date, but it was for the same module. And so now, for every two-hour lecture, there are six pre-recorded videos. And I go in there after Tom on a Monday morning, we share the same lecture room, and I just stand there and say, okay. You know, I put up a word cloud on PollEv and say, what do you want me to talk about? And everything is totally interactive. I may look at lecture notes, but I very often just scribble. It's like the old-fashioned way of writing on the blackboard is back, although I do it on the overhead projector. And I'm gauging the engagement this year, and I haven't seen anything like it for years, particularly with Gen Z. You know? There's eye-to-eye contact. Very few people are looking at their phones, because it's interactive and they're getting what they want. They're asking the questions.
Wendy Garnham:
Music to my ears as an active learning fan.
Heather Taylor (05:54):
So, Tom, same question to you. How are you currently talking to students about generative AI, and what kinds of support or guidance do you provide to help them use it responsibly?
Thomas Ormerod:
So, we're in a very different place today. A very different place indeed. I would say that for many of my colleagues, and I apologise if I'm speaking out of place, it's still a threat, not an opportunity. I see it as a huge opportunity, and I'm just trying to work with the students so that they feel that too. So, I teach the final year module as well, just before Carol comes on with hers. And I am in a position where I'm talking to our students about the positives of using generative AI. I say to them, it would be irresponsible of us to let you leave university without skilling you, to some extent, in the use of what must be the most powerful intellectual tool since the invention of the computer.
As a school, though, I think we are incredibly slow to pick up on this. We don't teach them how to use generative AI. I teach them what not to do. I teach them that it's really good for formatting. It's really good for all those boring things like, you know, getting the right APA formats, which we seem obsessed about in psychology. It's really good for data cleaning, data processing, data handling generally. It's really good for putting in an essay and saying, tell me what you think about this. It's really good for things like, 'I've written something, can you bullet point it so I can see my own structure.' It's really good for having a conversation with, and that's where we need to teach them how to have a conversation. And the one thing I say you don't want to do is ask it to generate your content for you, because we're very much still essay based, text based, produce a reasoned argument about something. I am kind of setting ground rules in my teaching, saying you really should use generative AI, these are the good things to do, and here's something you just don't do. What I don't say is, 'because this is academic malpractice'. I mean, it is if you look at the regulations, but frankly, we're not going to be able to tell, or at least prove, that people have generated material using generative AI. We can often tell, but we can't prove it. So what I say to my students is two things. First, if you use generative AI to generate your essays, you'll probably get a 2:2, because it will be alright, but it won't be great. More importantly, when you are a forensic psychologist sitting in a prison on your first day under supervision, you aren't allowed a computer in a prison. You must know the stuff. You must think about the stuff. That's the moment you'll wish you'd used generative AI in the right way. One of the things I've started to do is put in assessments that encourage them to think intellectually about the material.
I have this thing where I get them to write a monograph chapter: they get four papers and they have to write a chapter that would go in a monograph, one that integrates across those four themes. And the one thing I put into this, you see, is that I say you must have an illustrative figure. You must have some picture that shows the point you're trying to make about those four papers. ChatGPT, even at its best, is bad at that, and the students really struggle with it as well, because it's one of the first times they've been challenged to be truly creative intellectually in an essay format. I think we're finding ways where we can use generative AI positively, ways that free us from the traditional essay format. And my takeaway message to myself is: rethink the assessment to fit the new tools we have.
Heather Taylor:
Yeah. I love that. I love the idea of the assessment, and the point you made about encouraging them not to use it to generate their content. And I think it's possible they could get a 2:2, you know, but it depends on the topic, I guess. I sometimes ask AI things when I'm bored in the morning, and, like, no-one else gets up when I get up, so it's just me in the lonely world at 5am, and I ask it questions about things I know a lot about, and it gives me some nonsense back, like some real nonsense that would go against anything I would teach my students.
Thomas Ormerod:
Yes. I have a lovely example of this. You may have come across it before, and I don't know if it's still true, but it certainly was true a couple of months ago. If you type into ChatGPT, draw me a picture of a beautiful room without an elephant in it, it draws you a picture of a room with an elephant in it. And it does that because it's being clever. It knows the expression 'elephant in the room', so it thinks, you know, the elephant should be in the room; otherwise, why would you say do it without an elephant in it? If you say draw a room without a pig in it, there's no pig in it. So that's the way you can test it. Now I use this as a teaching aid with my students, because I say to them, you can all see that that's quite funny, right? It's not doing what you asked. But what if you didn't know what an elephant was? When you generate content, you don't know that content. So how do you know it's right? How do you know it hasn't put in material that's incorrect? How do you know it hasn't put an elephant in your essay? I don't know how powerful that is as an illustration, but they all go ooh.
Heather Taylor (11:32):
You're right. And I've asked it to do a reconfigured floor plan of my house before, where I moved the stairs so I could fit a bathroom in upstairs, and I told it all of this. I could probably get some tips from you on how to talk to AI more efficiently, to be fair. But it gave me a floor plan. It was 3D, which isn't what I wanted. But anyway, the stairs were there, the bathroom was there, but the bathroom was downstairs, and the stairs ended in the bath. Yeah. And I was like, I won't try this.
Thomas Ormerod:
And did you do it?
Heather Taylor:
Exactly. It was great. Yeah. But, yeah, it just decided, how can I fit what she's asking me for into this tiny space, which is my house, and it did do it, but in a completely unliveable way, because it doesn't live in a house, does it? It doesn't know. So yeah.
Wendy Garnham:
But it does it with papers as well, because I've done it with the students where we show them, like, a paper. We ask it to summarise the paper, and it tells you completely the wrong information. It pulls together ideas from other papers and creates this story.
Carol Alexander:
Yes. But you can control that, this hallucination. You can set up a project where the domain that it looks at is only the lecture notes. And then you can do your prompt more efficiently. You can use code words, 'step by step', for example. And, also, my personalisation, which is the base-level set of instructions, is: always ask a clarifying question at the beginning of a conversation.
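As a rough illustration of what Carol describes here, restricting the domain to the lecture notes and asking a clarifying question first, the sketch below uses the OpenAI Python package. The model name, file name and prompt wording are assumptions for illustration, not her actual project configuration.

```python
# Minimal sketch: constrain answers to supplied lecture notes and ask a clarifying question first.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY in the environment;
# model name, file path and wording are illustrative.
from openai import OpenAI

client = OpenAI()

lecture_notes = open("week7_value_at_risk.md").read()  # hypothetical notes file

system_prompt = (
    "You are a tutor for a financial risk management module. "
    "Answer ONLY from the lecture notes below; if the answer is not in the notes, say so. "
    "Begin every new conversation by asking one clarifying question.\n\n"
    "--- LECTURE NOTES ---\n" + lecture_notes
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Walk me through historical simulation VaR, step by step."},
    ],
)
print(response.choices[0].message.content)
```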
Wendy Garnham:
Yeah. I think without that input, though, certainly in foundation year, students just go to ChatGPT, put in the essay question or, you know, the blog post title, whatever, and see what comes up. And they do tend to just think, that's a useful summary, or, you know, that's really saved me ploughing through. We use that as a sort of starting point to say, don't be tempted just to go away and, like, put this question into ChatGPT, because you cannot guarantee that what comes out is going to be useful, helpful, or accurate. So that's sort of how we start that whole story. But then, of course, we do our active essay writing, which we developed with Tom, where we get them to start with their own ideas first. So just, you know, what occurs to you when you see this as a title? What are the ideas you've got? What questions do you have about it? You know, try and have confidence in your own ideas first before you turn to anything else.
Carol Alexander:
And I was just thinking, you know, we're in a transition period. Soon, the students coming into university will be better than we are at prompt engineering. As I said, it's like a piano or like a car: just because you've got one doesn't mean you can play it, or that you're not going to drive it and have a crash. But this young generation are going to grow up with it, and they're going to be absolute masters at playing it, like they are at video games. So it's only over the next few years that we've got this job of educating at university level. By that time, we'll just have to cope with the few students who actually want to come to university being better at using ChatGPT than us. What value have we got to offer? Because you can teach yourself so quickly with ChatGPT or any LLM. ChatGPT in particular is so much better than the others.
Wendy Garnham (15:06):
That brings us nicely to our next question. I’m going to ask this to you, Tom, first. In your discipline, where do you see generative AI offering the greatest value and what are the key risks or limitations?
Thomas Ormerod:
I think it could, if done correctly, be quite democratising, because there has been, in the last 10 to 15 years, a huge change in the skills requirement for the behavioural sciences. In my day you just went to the pub, recruited a few people to do little puzzles, wrote down what they said and then published it. It was easy. These days, with the Open Science Movement, which has basically responded to the fact that so much of psychological research just doesn't replicate because it was done by people like me, we have a much stricter approach to methodology. So there are reams of stuff around pre-registration, there's the use of the R environment for constructing your analyses, there's a whole new range of statistical techniques, and a whole new range of qualitative techniques. I mean, wow, if you open up thematic analysis, you find there's a new thematic analysis method for every researcher in the world, as far as I can tell.
And it actually means that a small subset of researchers are gaining access to journals and gaining access to grants, because we can't reskill ourselves quickly enough. So one of the promises, I think, of generative AI is that it can take an awful lot of the legwork out of that and allow us to do the interesting, hard thinking that we want to do, properly supported by tools that say, hey, you don't actually have to learn all these funny little commands in R that you always get wrong because you don't put a full stop or a capital in the right place. We'll do that. You think about great things, about how the mind works. That's my fantasy for what we should be doing in psychology as a discipline: saying, let's use generative AI to help us think deeper thoughts, and let's use generative AI to help us tackle questions like, how do you do belief revision? We're now so obsessed with the methodology of our experiments that we don't spend a lot of time thinking about how people change their minds. That's the level we ought to be working at as scientists. But instead, we're working at the level of, how can I get my code to work?
Carol Alexander:
Yeah. Well, coding is a big plus in finance. You know, if we're training students to go and work in the industry, the job of a developer, writing Python code and things like that, is more or less non-existent now, because LLMs write perfect Python code. You just need to know how to ask the right questions to get the right code out. But with the advantages in finance, you have to distinguish between the advantages for educational purposes and the advantages for research. On the research side, you know, it can do mathematical proofs. Basically what we do is take mathematical models of financial markets or financial institutions, get data, apply the mathematical model to the data, and then see whether our hypotheses about what should happen are true, with some statistical analysis on the data; the models that we do the statistical tests on are derived from our basic formulation. And sometimes it's quite difficult to get from that basic formulation to a form of the model that you can actually test. But now, I mean, it'll even write the basic formulation for me. So, you know, deriving an economic model of behaviour in financial markets is easy. Anyway, that's on the research side. But on the student side, it's still the same, because of course we're educating them to do what we do in research and in the profession, which is build mathematical models with some data.
So, you know, as I said, there are these specialist GPTs, generative pretrained transformers, a particular type of LLM. There's a whole library of these in the OpenAI product. There are also libraries of similar sorts of things in Claude or Perplexity and things like that, but they're much, much more advanced. These things have been produced for years, and there are now, I don't know, 20, 120, 200,000 of them. As I said, I made my own, so if I can do it, I'm sure loads of other people are. The most popular one is for generating your own horoscope. And the second one is Semantic Scholar, which sort of helps you write essays and things like that. But there's a lot of Excel AI, for example. You upload some data. You just – well, I don't even write now, because the dictation is so good on ChatGPT. All my ums and ahs, and 'hang on, I got that wrong'. I just have a stream of consciousness, and then, even before I press send on the prompt, all the ums and ahs and the corrections I made magically disappear from the transcription. It's extraordinary. I use that for writing emails: I just copy and paste it and put it in a prompt. Anyway, I just have a bit of a mind dump and then put it into one of these Excel AIs. At first they use Python, then they just put it in an Excel spreadsheet, and now you can click to view the formula in Excel.
Thomas Ormerod (20:49):
Yes. I did a similar thing just yesterday morning. I was prompted by Claude; it said, you're doing this inefficiently, why don't I build you an app? And it built me an app to do it. I had to process 40 different spreadsheets. It just said, I could do this one by one for you, as you seem to be thinking, but why don't I build you an app? And it took the task down from a day to half an hour.
Carol Alexander:
The thing about Claude is it just creates these artifacts all over the place, and you can't relabel your chats, and you can't delete them. What you have to do is put them all in a project and delete the whole project if you want to tidy up some of those chats. But you can't relabel them, so when you're looking back, you've got no proper index to know what you've done in the past. For me, it was just hundreds of artifacts being produced, most of which were useless. Yours is one example of an artifact that was useful. But had you gone directly to building the artifact, rather than letting it do it in a chat, you could have had a little bit more control over that artifact. You could have named it.
Thomas Ormerod:
I shall do that in future.
Wendy Garnham:
So that seems to relate more to sort of the value. How about the risks or the limitations? I’ll ask Carol first. What do you see as being the risks or limitations?
Carol Alexander:
Well, I mean, the singularity. You know? Artificial general intelligence, the point where they get more intelligent than us, more powerful than us, in whatever use. I mean, I was just watching this podcast by, what's his name, The Diary of a CEO? He's just done one with Roman Yampolskiy, I think. He's written lots of books. He's completely mad as a hatter, I have to say. He believes, 100%, that we are living in a simulation run by superior agents, you know. He sort of takes this idea of simulations to the extreme. And I mean, it's true that there are some wonderful simulations. Apparently, with Google 3, Google Earth 3, you can put yourself in the Grand Canyon or up the Amazon River and steer your own route. And you can just imagine, when that becomes a two-player game, you could be steering yourself up your Amazon river, and then your friend decides to put a Zulu tribe there or whatever. Not a Zulu, but you know what I mean. So if you take that to the extreme, then, yeah, this simulation theory has some credibility. But no. I mean, I think what's going to happen is that we become cyborgs more and more. Right? I'm attached to my phone, and I do worry about losing it; I would much rather have it embedded in my arm, thank you very much. And other stuff like ChatGPT, I wouldn't mind having that as a sort of earpiece. You know, the Google glasses as well. We're getting more and more like cyborgs. And then on the bioengineering side, you can imagine that, you know, progress is so rapid that human DNA or some brain cells could be put into a humanised robot. They're getting quite agile now, although the Tesla one's completely empty. But in Japan, you know, they're getting some good robots. And so what we're looking at is very high unemployment, apart from plumbers, but maybe these robots could even be plumbers. But, you know, we've got a huge demand for construction, particularly if we're going to go to war, the usual thing, you know, the economy rebounds because of building things for war.
And we've been struggling since 2008 because of the bankers, and the bankers aren't going away, although that's another story. Anyway, so, yeah, there's quite a lot of risk. I wouldn't take it to that extreme, but I do see large unemployment for a while as jobs change. White-collar jobs are going to be few and far between. Certain jobs will remain, like hairdressers. You know, plumbers cost so much, and builders in Brighton are awful. I mean, we should start offering proper apprenticeship degrees in some universities, not all our degrees, but some universities like Sussex, I think, would be ideal for that, particularly since Brighton University used to be one of these wonderful technical colleges until Thatcher shut them all down and called them universities. We need to go back to that world where we are properly training construction people, and then it wouldn't cost an arm and a leg, because there'd be a greater supply.
Thomas Ormerod (25:58):
I think Carol's right to point to a potentially dystopian future. If you think about it from an educational context, I think the big takeaway message there is fear. Because we're still learning the implications of the new technology, there's likely to be a level of fear amongst faculty and students around what's going to happen. What will my degree mean? How am I going to assess people any more if I set essays that they can write on the computer? I think that will be a transitional phase, we hope, but we have to recognise it, because the anxiety it'll raise will be quite considerable.
And that's why this is a useful session, I think, because the takeaway message from both Carol and me would be that there is more to be discovered that is useful than there is to be worried about. Yes, we should worry, but we should worry positively. Okay, if we're going to change the world of work, what are we going to change it to? Carol's example is, let's look at the construction trade, for example. When I was doing AI research back in the 1970s – I was only about 5 at the time, that's not true – the big worry was that robots would take over all the blue-collar jobs, you know, because we could have a vacuum cleaner that could sweep a house, or one that could sweep the roads. You could do all the blue-collar jobs. Exactly the opposite has happened, and it's the white-collar jobs that are under threat. We must reconceive the world of work. We must think, what do we get people to do? In the same way that when the typing pool disappeared with the word processor, we had to reconceive what a PA does now, apart from, you know, buying the chairman's birthday present for them. We had to rethink what those roles are about. And I think that if universities have a place, it's to do that job. We need to rethink the future. And as part of our curriculum, we need to be sending our students away realising that that's their job: to rethink the futures of the trades they're going into, not just come away with a bunch of skills that allow them to do what was good in 1973.
Wendy Garnham:
I think that also changes that idea of anxiety about ChatGPT and AI and where it’s heading, because I think you have to have a level of anxiety in order to be able to think about how it’s changing and how you adapt and so on. I think for me the danger is if we don’t have that anxiety and we just ignore it or park it and we think that we can just carry on as we’ve always done because that’s just how we do things, I think you know, sometimes having that level of anxiety means actually we are thinking ahead and we are planning ahead. I think that can be seen.
Carol Alexander:
It's the right level, though. You don't want too much.
Wendy Garnham:
But you need some. Otherwise, you’re just stuck in a rut of doing things the way you’ve always done, where the advances are happening alongside and the two never connect. And I think, you know, that for me is sort of a danger that you have, you know, this sort of disconnect between – I know it’s happening but I’m not going to change anything I’m doing – whereas I think that level of anxiety sometimes can be a push towards thinking ahead.
Heather Taylor:
I was thinking about what both of you were saying, but also something you said earlier, Tom, about researchers: if we didn't have to spend ages figuring out our code or whatever it happened to be, researchers could do the more interesting thinking. And when I've spoken to people before about ChatGPT, I've said we should be teaching students to do the things that it can't do. But really, I think of that as sort of philosophical questions that have got nuance. I don't really think ChatGPT thinks. I think it tells me stuff. I don't really view it as thinking in the way a human thinks.
Thomas Ormerod:
I think that what Carol does is a good example of what we should be doing, which is not expecting ChatGPT to think, but letting ChatGPT help us to think. When I've used it most successfully, I've had a dialogue with it, although we always seem to get kind of ratty with each other sometimes. You can't help but anthropomorphise when you use it. If you get into that dialogue state, where you're asking it to critique you, then you can get some really positive thoughts coming out of it, I think.
Carol Alexander (30:52):
I'm very interested in my emotional reactions to it, because I iterate all the time. You know, I start off with a prompt and it gives me something, and then I look at it, I say what I want to change or what I don't like, I dictate it and press send, and then it goes on. And if it doesn't manage to progress after about three iterations, I get cross with it. I say, you're rubbish. I don't like 5, give me 4.1. You know? Thank goodness I've still got access to 4.1, so I do choose a model. But then there are other times – and I have to say that ChatGPT 5 does whirr too much. You know, it's slow. It thinks too much, even the instant and the thinking mini versions. But anyway, I do like 4o, I like 4.1. It's more of a workhorse, and I praise it. I say, 'Oh, gosh, I love you, 4o. You're so good. Do you know ChatGPT 5 is awful? I wish they were all like you.'
Do you know that over 60% of the prompts to LLMs worldwide are on the human relationship side, i.e. asking for companionship, just basic chat, or therapy? Over 60%. I got that figure a few weeks ago from one of the, you know, one of my Google feeds, which, of course, only feeds me what I want.
Heather Taylor:
That's quite a concern. And also, sometimes I'll ask it things myself: I know loads about obsessive compulsive disorder, and that's what I'll quiz ChatGPT on, to see how good it's getting and if it's learning from me. You know? It used to be rubbish. It's a bit better now. But my concern with people using it as therapy is that there are specific models that are used to treat specific disorders, even specific sub-symptoms or themes within disorders. Using a different therapy for a slightly different disorder, or whatever it might happen to be, could cause a lot of problems. And it concerns me when people use ChatGPT for therapeutic reasons, because even with self-help books there's advice that if you need extra support you should reach out for it, or stop using the book, because uncomfortable feelings can come up. ChatGPT could just be talking pure nonsense to someone, which could be, you know, making their problems worse.
Carol Alexander:
There was a big law case against OpenAI. A young American chap killed himself. And then around the time of the GPT-5 update, they put a patch on, so that now it's so much more conservative. So, you know, it's a bit like HR: HR finds something has gone wrong, and then they issue all these rules that you've got to, you know, adhere to in your behaviour in the company. But then there are going to be some other things that somebody does outside those rules. So they haven't changed the basic way that ChatGPT operates. But the good thing about Claude, and the reason I got a subscription for a month, which I used for about a week, is that Claude is different. It's not a GPT. It's an LLM that's built on a code of ethics, sort of governance principles. It's almost like a constitution: 'thou shalt not do anything that would help people build chemical weapons', for example, was a recent change to its constitution. I thought that was a better way of building an LLM, less prone to having to have these bits of string and elastoplast to keep it functioning. But then there was all this other stuff, so I then cancelled.
Thomas Ormerod:
I’m just terrified about the idea of generative AI becoming like HR. It’s not the way I want to see it going.
Simon Overton (34:53):
I've got a slightly more positive take on AI, and it relates, if I may, to 'The Work of Art in the Age of Mechanical Reproduction', which was, I think, an essay by Benjamin? Walter Benjamin. Anyway, the point of that was that everybody was worrying that, with mechanical reproduction, works of art like Van Gogh's Sunflowers would become devalued, because everybody could get a copy and stick it on their, you know, bedroom wall. But the opposite happened. It didn't decrease the value of the Sunflowers; it increased it. And I feel that a similar thing is happening. A slightly more up-to-date example would be the production of music, and how these days people really value, for example, the experience of having their music on vinyl. And when you look at – I do a lot of YouTube – when you look at that, you have music producers and all sorts of other content creators on YouTube who are very keen to state that what they do does not use AI. I wonder, and I hope, that rather than devaluing the things that we most like about being human, or the things that we value, including perhaps a more traditional form of university education, it might increase their value and people would appreciate them more. And perhaps that is the direction that universities need to go in: to think, people could do a Matrix and download the content into their head, like Keanu Reeves, but what can we actually do? What can we provide that is so much more human and so much nicer than that? I think, and I hope, that is the direction. I wonder if you two have any thoughts about that.
Thomas Ormerod:
I'll start. I think that is an optimistic view, and I think there's some truth in it. I think the problem you face is the volume issue. It's a bit like the finest jewellery: it's very, very expensive, very exclusive, and you know it's the finest because of the hundreds of hours that have gone into it. You go to Ratners and you're getting a very different product. One of the implications of what you're saying is that a lot of the volume work, the soundtracks to adverts, that kind of thing, is going to be done by AI, because why wouldn't you? It doesn't matter there what the ownership of the creative act is. Whereas your Taylor Swift music is still going to be written by humans, because Taylor Swift is a quality product that can charge premium fees. It's almost, in a sense, the opposite of democratising: you end up with a situation where an elite can afford human-generated materials, but the rest of us basically have to go to Ratners.
Carol Alexander:
Yeah. I mean, I imagine people might prefer to have human accountants they have a relationship with, or human personal trainers or human therapists, but what LLMs are beginning to offer is that service for people who could not afford it: at least a basic level of accounting, or, for people in prison who don't get any therapy at all, or people on the streets, at least, you know, they may get some help.
Thomas Ormerod:
On the positive side, to come back to this though, there are interesting and curious opportunities. In fact, I just bought the Internet address for ain't.com – AI, no thanks. Because I was talking to people who were working in recruitment, and they were saying one of the problems they're getting is that companies want an AI-free product to differentiate themselves, given that so much CV processing and so on is done by AI systems, which we know introduce huge biases – ethnic, racial, gender biases – because they're taking all their content from the world outside, and the world is biased, so they reflect those biases in what they do. Being able to signal that you are an AI-free product is a bit like being able to say, you know, gluten free or ultra-processed free or whatever. I think there will be a market, but it's a niche market. And what we have to face is the volume issue: AI does it better at volume.
Carol Alexander:
And recruiters are in a terrible situation now because all CVs are perfect. They're all written by ChatGPT, and they all look the same. Some of them even use secret words: even though they didn't go to Oxford or Cambridge, they put 'Oxford' in white type in the footer, and then they get through the algorithms that way.
Heather Taylor (39:56):
I think, based on what Simon was saying, I do get your point: you know, you could have a smaller quantity and a higher-value product, and only a few people could afford it. But I keep going back to what you said earlier, Carol, about how you upload these videos and then you go into your lecture and go, what do you want me to talk about? And then they get to decide, and you said that improves engagement and so on. And I can see that it would. Okay? And it's also very confident and brave of you to do it that way, and it's great, to just be able to go in and go, I'll talk about whatever you like. I'd love to be able to do that, but I just waffle like I'm doing now. But I think maybe that's the answer. I think maybe Simon's point is almost that we do need to know how to use AI, and I mean, like, we, the educators. I know you two know. Right? I don't really know. I just argue with it a bit.
Carol Alexander:
You should watch my YouTube channel. It's Professor Carol Alexander, and the playlist, because there are some other ones there, is called A No-Coder's Guide to Surviving Generative AI.
Heather Taylor:
Okay. I am going to look. I think we need to know it so that we can know what it can do, so that we know what to offer students in terms of learning how to do this themselves. But also, once they know how to do it better than us, like you're saying they will, we need to know what the things are that are going to be valuable to them, the things that AI can support but can't replace. And it's almost like you could have, at volume – it's trickier, but at volume – more handmade, more bespoke degrees. So if we didn't have to teach, like, Jennifer Mankin will kill me for this, she's in our department, but if we didn't have to teach R, okay, if we didn't have to have whole modules filled up with teaching them how to code, and if I didn't have to teach how they're going to structure their research report, right, and I could just focus on their thoughts, their nuanced, individual thoughts about the things they're writing about, or I could get them to come up with a new research idea or whatever it happened to be. If we could move away from these things because AI took care of them for us, I feel like it would be a more engaging degree for them that could still be taught en masse. Okay? You know, you'd have to have people that are quite specialist in certain areas to be able to do that, like you would be with finance, you know, to be able to walk in and go, 'what do you want me to say?'
Carol Alexander (42:39):
How are you going to fill up an entire degree, and do you want to? If the jobs are not there, we're not going to get students applying.
Heather Taylor:
I hadn't thought about it from that point of view. If the jobs aren't available, the students aren't going to come to uni for that reason.
Carol Alexander:
But if we append an apprenticeship, where it may be an electrician or whatever, you know, a plumber, as I said before. They can get jobs in that, but they still want to come to university as a transition between home and the real world, where they expand their mind. They expand their understanding.
When I was at Sussex as an undergraduate, I was a science student. I did maths with experimental psychology, and we all had to take an optional arts subject. I took witchcraft. But the point is that, you know, some students that are in engineering, for example, civil engineering maybe, they may still have jobs there, but a lot of their degree would be taken up by them just expanding their mind and deciding, okay, I want to take a module in blockchains and crypto assets, or something like that.
Thomas Ormerod:
It's interesting that in the white paper that came out the day before yesterday, the government is moving us towards this kind of cafeteria style of learning, which has strengths and weaknesses – or at least it has threats. It has huge threats to the model we have now, but it also has opportunities for people to be able to equip themselves with the skills that will get them jobs.
I think, even at a finer grain than that, one of the challenges for us in academia is to work out what skills we want people to have. In psychology, the principal skill that we've thought people should have is to take large amounts of information, crystallise it, pull out a point, critique it, and then design something on the basis of that. Write stuff, to put it simply. Nowadays you could argue that the skill we should be sending our students away with, or at least one of the skills, is editing. Yes, you're going to get the stuff out from ChatGPT; your job is to edit that, to make it do the job you want it to do, because it won't be perfect. It won't be accurate. It won't be complete. We should have a course on editing.
Wendy Garnham:
I'm just thinking. I went to a conference in the summer where they were talking about how they saw the future of generative AI, and one of their arguments was that eventually it will collapse in on itself, because you'll be feeding back into it information that it has already given you, so it just becomes less of a useful tool. I'm interested to hear both of your views on that.
Thomas Ormerod:
There is a fix for that, which is that you would have a ChatGPT 6 that is self-reflective – it already knows that; it's seen how its output gets used – in a sense, putting confidence levels on its information and using those as part of its updating, not just updating on the basis that here's another mention. I think that is a genuine problem with our current understanding of the technologies. But once it becomes a problem, I think it could be fixed. Don't you?
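A toy sketch of the distinction Tom is drawing here, not anything he described implementing: updating a belief by simply counting mentions versus weighting each mention by a confidence score, so that material suspected to be model-generated barely moves the belief. The sources and confidence values are invented for illustration.

```python
# Toy sketch: mention-counting versus confidence-weighted belief updating.
# All sources and confidence values below are invented for illustration.
mentions = [
    {"supports_claim": True,  "confidence": 0.9},  # e.g. a peer-reviewed study
    {"supports_claim": True,  "confidence": 0.2},  # e.g. a blog post suspected to be AI-generated
    {"supports_claim": False, "confidence": 0.8},  # e.g. a replication attempt
]

# Naive update: every mention counts the same.
naive = sum(1 if m["supports_claim"] else -1 for m in mentions)

# Confidence-weighted update: low-confidence mentions barely move the belief.
weighted = sum((1 if m["supports_claim"] else -1) * m["confidence"] for m in mentions)

print(naive)     # 1     -> "more sources say yes"
print(weighted)  # ~0.3  -> far less certain once confidence is taken into account
```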
Wendy Garnham:
Looking five to ten years ahead, what psychological or behavioural changes might we expect from the widespread use of generative AI in everyday life? Tom.
Thomas Ormerod:
Well, optimistically, we will see people feeling more empowered, because they know that they have a resource that can solve a lot of their everyday problems and that they can do bigger things, better things, and because the technologies will have evolved to become much more interactive, in ways that can start helping you to construct your ideas, construct your arguments, deal with your problems. That's the optimistic end. The less optimistic side, behaviourally, is that we begin to shut down because we lose confidence that we have anything to offer, and we do less because other things can do more than we think we can do.
I've just finished a project funded by the Educational Enhancement initiative in the University, which wasn't looking at generative AI; it was looking at plagiarism and what it is that encourages people to plagiarise. We did an intervention where we gave people essays to mark, half of which had been plagiarised and half of which hadn't. And we found quite quickly that people gave the plagiarised ones better marks, because they looked better. The text was better. Then we showed them what we would mark them as, and how the plagiarised ones did really badly, and there was an immediate effect: we measured the Turnitin scores, and they went from an average of about 15% across the cohort to 2%. And I think there is this issue that people will lose confidence in their own ability to generate good material. As academics, we must counter that and say, do you know what? They put an elephant in the room when you don't want one. Your writing, even if it's colloquial, even if it's full of spelling mistakes or a few naughty words, we like that better. We would prefer you to do that, think for yourself and show that you thought for yourself, than get your grammar right, like what I'm not doing now.
Carol Alexander:
I mean, I just think that it's going to act as a sort of magnifier of inequality. There'll be those that don't use it at all, and there'll be those that just give up, because they use it but don't feel they can add anything by using it. And then the few, the elite who manage to increase their productivity so that what used to be done in a day is now done in an hour, are the ones that are going to thrive in every way. So, yeah, inequality is just going to get worse.
Heather Taylor:
And if I watch your YouTube video, I can become one of those elite.
Carol Alexander:
That’s the idea.
Thomas Ormerod:
The challenge there that I'll set you, once you've watched that, and given that you're educators, is this: if you've been in prison for fifteen years, you don't really even know what Microsoft Office is. You don't know what email is. You don't know what the Internet is. How will we use generative AI to help people rehabilitate themselves into society when they've got that huge gulf in their knowledge? I say this because I'm on the parole board, and I see these people coming up once a week when I'm on a parole panel. They're fascinated by our use of computers. We say to them, we'll be looking at two screens, and one of them says to me, what's a screen? What's a keyboard? You know, these people have been in prison for, like, years; they've gone in at the age of sixteen and they're, like, forty-six now, thirty years in prison, and they don't know anything about this. It would be a really interesting, difficult problem to solve: how can you use generative AI to get somebody back into the community after being out of it for so long? I don't know the answer; I'll just throw it out there.
Heather Taylor:
It’s a great question.
Wendy Garnham:
Yeah. How do you start with that one?
Heather Taylor:
I would like to thank our guests, Carol and Tom.
Carol Alexander:
Thank you very much.
Heather Taylor:
And thanks for listening. Goodbye.
This has been the Learning Matters podcast from the University of Sussex created by Sarah Watson, Wendy Garnham, and Heather Taylor, and produced by Simon Overton. For more episodes as well as articles, blogs, case studies, and infographics, please visit: Podcast | Learning Matters











