AI at Sussex: Library Conference Workshop

By Sean Goddard and Daisy Phipps

This was a well-paced, interesting, and well-led workshop. It used a mixture of presentation styles, including small-table group work and discussions, PowerPoint presentations, and longer Q&As with the workshop leads (George Robinson and Sam Hemsley). Something for everybody.

The workshop started with how AI is being used at Sussex. The consensus was that students are already engaging with AI. The focus was Generative AI (shortened to Gen-AI) as it is being used in teaching and learning.

Students sitting in front of the Library on Library Square

Here came the first question: what percentage of adults in the UK regularly use AI?
The answer surprised me: 85%.
AI is everywhere: students are using it, and employers expect students at (or leaving) university to use it. We use it for so many things: translations in Google, for example; students use it to check spelling, grammar, and punctuation [perhaps Sean should start using this in his emails]; science students use it with equations or data sets. (Sean knows Tim Graves uses it too.)

Drilling down, George and Sam suggested students use it in these ways:

  • 53% used it to help prepare for assessments
  • 36% used an AI private tutor
  • 13% used it to generate text for assessment
  • 5% used unedited generated text for assessment
  • 35% don’t know how often it produces made-up facts (hallucinations)
  • 66% consider it OK to use for explaining concepts, with suggesting research ideas (54%) and summarising articles (53%) also considered acceptable
  • 73% expect to use AI after they finish their studies

Some new terminology was presented during this workshop:
Hallucination was the first: this is when the AI makes up a fact from its known ‘fact-store’. Basically, 2+2=5 syndrome.
Walled garden was the second, and this is important. If you use Microsoft Copilot at Sussex, you can check a box so that your work/data set/upload stays within the Sussex remit and is not available to other researchers. Secure. Well, it might be.

Moving on. The use of AI at Sussex may disrupt the hierarchy and balance of the student/lecturer relationship. AI could be used for assessment and exam marking and perhaps remove examiner bias. However, bias could have been built into the AI when it was trained. For example, it could be the mid-Atlantic white view rather than a continental European or Global South view.

A common theme did appear here: Garbage in, Garbage out.
Another theme was how to spot AI. Is it possible to identify when AI has been used? Many times, yes: a particular style, a similar tone, stock expressions, wrong facts, and other tell-tale factors can be picked up. Can they be readily identified by a human? Sometimes. However, when used in assessment essays, they could go unidentified, even by Turnitin…

Two students sat outside the Library
Photograph by Morten Watkins

The emphasis then turned to the Library and how it could support the use of Gen-AI.
We should work with them, not for them. Comment was made that when Google first became available, the library's stance was to ignore it and not trust it. Reflection from others who worked in the library at that time suggests this was a stupid stance: like all research tools, AI should be used, but treated like any other research tool, with results checked against other research at least twice. Therefore, when advising students and others:

  • Don’t tell them not to use it – Destigmatisation
  • It can’t be only the library; this is part of a wider change in the University. The level of AI use allowed is decided at the module level.
    Why these limitations?
    Is the information foundational? Do you really need to know this information and do the thinking yourself?
  • Are you using the correct AI?
  • ResearchRabbit – an AI-powered citation research tool
  • Show what it claims to do, show how it gets the information, and then show the pros, cons, and limitations of what it can do.
    It has major biases – show the biases
  • Show a case where it does a good job and one where it doesn’t add value
  • Give them the tools to check the information themselves, and then provide better alternatives
  • Make clear that AI makes stuff up

Luckily, the most ethical way of using an AI tool is normally also the most practical and useful way of using it.

Time for another question.
We were shown an AI-generated image and asked how much energy it took to produce.
The answer: a lot – the same amount needed to recharge a dead phone to full charge.

Power plants emitting smoke into the atmosphere
Photo by Ella Ivanescu on Unsplash

AI and ethics?
It is biased (garbage in, garbage out). As explained above, it takes huge amounts of data and energy.
Discussion turned to copyright concerns and other related issues. Is AI copyright compliant? Who owns an image or text made by AI? It’s all under debate, and not helped by national differences in copyright law. There is no such thing as a global copyright law.

While it won’t make an image based on a [Disney] character, it could produce something very close to, or even indistinguishable from, one based on a description.

Last part
This was a round-up of where the library, the university, and the world are at.
What does the library want academics to know?

  • Get with the times
  • We’re teaching how to use it; we’re not recommending that people use it.
    Are the students fully informed and making informed decisions about AI?
    AI is in certain software the students already use.
  • Be aware of the current regulations; there are other forces that will make your decisions for you.
    Read ChatGPT and other AI software terms and conditions – what have you agreed to?
  • Put guidelines for using AI in their modules, and soon!
  • Is there an AI split? Some students and academics are using AI regularly; others are not. This may depend on the subject, the age of the student, or the type of student (UG, PG, international).
  • Students need to reflect on when and where to use Gen-AI
  • Work with students to find out what the students are using Gen-AI for

Laptop showing an image of ChatGPT front page
Photo by Emiliano Vittoriosi on Unsplash

Principles on the use of AI in education (Russell Group, July 2023):
  • Universities will support students and staff to become AI-literate.
  • Staff should be equipped to support students to use generative AI tools effectively and appropriately in their learning experience.
  • Universities will adapt teaching and assessment to incorporate the ethical use of generative AI and support equal access.
  • Universities will ensure academic rigour and integrity is upheld.
  • Universities will work collaboratively to share best practice as the technology and its application in education evolves.

Software we already have:

  • Microsoft Copilot – walled garden. Not the best, but it doesn’t track your conversations.
  • Adobe Firefly – for generating AI images (trained on Adobe’s legally owned images)
  • Padlet – AI image generator built in

Marking:
Sussex Uni is looking at AI-assisted marking tools: Graide, software for use when marking, which involves training a walled-garden AI, and Keath, software developed by the University of Surrey for written assessments, which also needs training up.

  • Faster marking
  • Frees up staff time – lets staff focus on student progress and different assessment types
  • Improves consistency
  • Locks people into certain assessment types

Plato: new software developed by students. Perfect for helping make notes during lectures.
Jamworks: takes notes from meetings and summarises them.

AI permission statements are available for use in module assessment guidelines, to support academic integrity.
