
When does assistive technology become prohibitive technology?

By Dan Axson (Learning Technologies Manager)

In May 2024, I wrote a post titled ‘What if our students with disabilities didn’t have to jump through hoops to have an equitable experience’. You’ll be forgiven for not having read it: it never saw the light of day (hence the very un-catchy title). Amongst the noise of a never-ending flurry of AI-related content, I never quite got it finished. In it, I took a meandering wander through my thoughts and experience of the onerous admin, lack of agency and extra cognitive load a student with disabilities may face. Naturally, given we’re in the ‘AI is mentioned in all the things’ era, the topic came up. The post made the case that university policies and guidance on the use of generative AI for teaching, learning and assessment may inadvertently put students using software funded through the Disabled Students’ Allowance (DSA) at odds with academic integrity policies.

Fast forward ten months and the topic, to nobody’s surprise, is not only still relevant but becoming more so. Then an email thread shared with me recently resurrected the ideas. Discussed in the thread were the complexities of approving software in a space where software companies can add AI ‘features’ without notice, where students’ understanding or awareness of institutional policies can’t fairly be relied upon, and where a DSA assessor’s ability to find those policies is an ‘us’ problem, not a ‘them’ problem. So here is that post, dusted off and made fit for 2025.

[Image: robot holding out a hand in a supportive manner. Photo by Possessed Photography on Unsplash]

From dependent to independent

As anyone in the Learning Technology game will tell you, conversations are largely dominated by generative AI. Recently, we have been testing a tool called Jamworks. In short, the tool generates a transcript of a recording, for example a recorded lecture, then uses that transcript to power a number of well-thought-out AI features such as summaries, key points and flashcards. One strand of our conversations about the use of such tools in the classroom was how valuable this would be to someone who relies on a notetaker. The notes are generated almost immediately, with no waiting for the notetaker to send them on, and the student can ask questions of the notes, something a human notetaker, who may not be a subject expert, can’t always offer. In other words, the technology enables much more autonomy and independence than existing approaches with human notetakers. This is, of course, just one example of many.
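Jamworks’ internals aren’t public, so the following is only a minimal sketch of the general pattern such tools follow: transcribe the recording, then prompt a large language model over the transcript. It assumes the open-source openai-whisper package and the OpenAI Python client; the model names, prompt and filename are illustrative assumptions, not Jamworks’ implementation.

```python
# A minimal sketch of the transcribe-then-summarise pattern such tools use.
# Assumes the open-source `openai-whisper` package for transcription and the
# `openai` client for notes; models, prompt and filename are illustrative.
import whisper
from openai import OpenAI

def notes_from_lecture(audio_path: str) -> str:
    # 1. Generate a transcript of the recording.
    transcript = whisper.load_model("base").transcribe(audio_path)["text"]

    # 2. Ask a language model to turn the transcript into study notes.
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Summarise this lecture transcript as key points "
                        "and flashcard-style questions and answers."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

print(notes_from_lecture("lecture.mp3"))
```

The point is less the specific libraries than the shape of the pipeline: once a transcript exists, the same text can feed summaries, key points, flashcards, or a question-answering chat, which is exactly what makes it so useful to a student who would otherwise wait on a notetaker.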

We know assistive technologies provided through the DSA needs assessment are essential for students. But in an increasingly complex technology environment, how do we ensure the technology doesn’t put students at odds with academic integrity?


It’s a trap

As noted, advances in technology have a lot of potential to make things better for people with disabilities; we ignore or suppress them at our peril. A concern, though, is how easy it might be to create barriers, indirectly through the wording and enforcement of academic integrity policies, or directly through the mitigations used to enforce them. For example, it would be possible for someone relying on a tool such as Grammarly to fall foul of the proof-reading policy. Further, we’ve seen an increase in inaccessible content, like photos of text, being uploaded to quizzes to prevent AI tools from copying the text, overlooking the fact that optical character recognition (OCR), the ability for software to read text in an image, is getting the same AI love as everything else. This is simply not an acceptable approach, but it highlights higher education’s immediate response to advances in technology: pull up the drawbridge and bolster the defences, compromising on accessibility along the way. And if you’re caught on the other side, you’re kind of on your own… for now.
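To see how low that drawbridge really sits: a few lines of off-the-shelf OCR will read a ‘protected’ photo of text straight back out. A minimal sketch using the open-source pytesseract wrapper around Tesseract (the filename is hypothetical):

```python
# A sketch of why a photo of text is no barrier to AI tools: off-the-shelf
# OCR reads it straight back out. Uses the open-source pytesseract wrapper
# around Tesseract; the filename is illustrative.
from PIL import Image
import pytesseract

# The quiz question, uploaded as an image to stop copy-and-paste.
question_text = pytesseract.image_to_string(Image.open("quiz_question.png"))

# The "protected" text is now plain text, ready to paste anywhere.
print(question_text)
```

Meanwhile, the only people genuinely blocked by the image are those relying on screen readers or other assistive technology.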

However, for the most part our students do not fall foul of policy by using these tools routinely for teaching and learning. Our Skills Hub guidance on Grammarly reads:

‘Can I use Grammarly? It’s fine to use Grammarly to improve basic grammar, spelling and punctuation errors. However, you should not use Grammarly’s generative AI features to meaningfully change your work. The same restrictions apply to Grammarly as they do to a proofreader, and you should think carefully about which changes you accept, as it remains your responsibility to ensure the accuracy of your work.’

Sam Hemsley, Academic Developer, follows up:

‘As above, just using AI in an assistive capacity isn’t contravening any rules/isn’t academic misconduct (for students with DSA provided assistive tech and those using genAI tools in general). There is nothing, inherently, wrong with AI being built into assistive tech. As with Grammarly, I suspect the onus is currently on the student to ensure use of such tools doesn’t contravene academic integrity policy and/or explicit permissions from module convenors, should they allow use of AI in the creation of assessed submissions.’


Whose problem is it anyway?

Given the demands already placed on this group of students, having them police the software they are recommended is perhaps a stretch too far. As Learning Technologist Helen Morley points out:

‘It wouldn’t be appropriate to place the onus on the students to police the assistive technology (AT) they have been given […]. As for AT they’ve sourced/bought themselves, I’d be wary of the providers advising them of any changes in a way which is clear and accessible enough. Of course, students use AT for a variety of reasons but for those who [are] already managing pressure on their executive function, energy levels, stress levels etc. this is an unreasonable expectation and turns something from assistive to prohibitive.’

I have to agree, and whilst we need to look at reducing this extraneous demand, I strongly believe one way we can do so is (yes, you guessed it) by building the digital capabilities of our staff and student body. Being able to have an informed discussion with your assessor doesn’t place the onus on you, but it does let you feel more at ease with the solutions provided. Equally, teaching staff will be more familiar with the tools, their capabilities and the needs they are addressing, and between them all, everyone will know where the boundaries are and how to avoid crossing them. Sounds too good to be true? Maybe, maybe not, but it has to be worth a try.

I don’t think it’s a stretch to say an improvement in skills equates to an improvement in equity. Yes, I can hear you saying ‘what about curriculum design?’, and you’re right, it’s a key element. Frameworks such as Universal Design for Learning and the university’s own curriculum review will address some of the challenges we’re seeing, but my view is that to engage meaningfully with the ‘threat or opportunity’ of advances in technology, it’s fundamentally a skills question.

Skills-led equity

There are a number of barriers to the mass adoption of generative AI skills development: resources, content, training, time, willingness, strategic direction and language, to name just a few. So where do you start with addressing these challenges?

I suggest language is the starting point: a common language of digital skills. What’s the difference between a chatbot and an LLM? How do I talk to my students about ‘hallucinations’? What about relying on summarised text? Does everyone use the same terms in the same way? When a DSA assessor is looking for institutional policy on the use of generative AI tools, wouldn’t it be easier if the terminology were the same across institutions? A reach, I know, but it makes the point.

So, what do we do about it?

My call to action here is to think explicitly about the digital skills you and your students have, and where you might like to build confidence, to ensure a fair, equitable and barrier-free approach to the use of assistive technology for those who need it. Start by talking to your students who are using these tools and find out how they are using them. It may then follow that when a student is assessed, they are better able to understand the implications of certain technologies for your teaching and assessment.

Then, find opportunities to engage with the conversation, whether it be the internal AI summit (deadline 28th March 2025 for contributions), the AI in Education Community of Practice, or other development opportunities on these topics. We’re all learning as we go with this stuff, so come and join us.
