Technology will not save us: Learning from disability history

Katharine Terrell, Academic and Digital Development

Introduction

Not a day goes by without another article, guideline or policy on the use of Generative Artificial Intelligence (GenAI) in Higher Education (HE). Some of these are doom-and-gloom predictions of ever-increasing plagiarism; others are rosier, optimistic about how to embed these technologies into teaching and learning practice; still others are concerned with ethics, from the thorny issue of intellectual ownership to GenAI’s carbon footprint. What has received less attention is the wider set of ethical concerns around GenAI in teaching and learning activities for disabled students. There have been some discussions about the usefulness of GenAI for disabled people in general, and disabled students more specifically. However, what appears to be missing is a deeper consideration of what GenAI means for disabled students. This is not just about making GenAI accessible to students with different learning needs; it goes far beyond this, touching on issues of what behaviour we expect from students, and even what makes us human. Every new form of technology both holds potential and gives disabled people good reason to be wary. To engage with these issues at a deeper level, we need to take a step back and consider some of the history of disabled people and technology.

Disabled people and the promise of technology

Disability Studies – the interdisciplinary study of disability and disabled people in society – tells us that disabled people have long been subject to the “medical model” of disability. This model suggests that disabled people are broken and need to be fixed, cured or treated. In the 1970s, however, disabled people started to develop a new way to think about disability. Instead of seeing themselves as disabled by their physical or mental “impairments”, activists and scholars stated that they were disabled by barriers in society (Oliver, 2013). The logical conclusion, of course, is that efforts to emancipate disabled people should focus on removing these barriers, not on fixing individual bodies. This simple idea, known as the social model of disability, has reframed academic discussion of disability ever since.

So, how does this relate to technology? Countless new technologies have promised to help, support, fix, cure or treat disabled people – with mixed results. In the seminal Disabling barriers—Enabling environments, French expresses ambivalence towards technology, arguing from personal experience that:

technological aids are a mixed blessing. I am writing this article on a word-processor which enlarges the print on the screen. It is a marvellous machine and I would not be without it. Yet aids can become a burden, too, because other people have such faith in technology that they believe the disabled person is managing perfectly well and requires no assistance (French, 1993, p. 46).

She questions the focus on helping disabled people become more independent, arguing that we are all – disabled or not – dependent on each other. If we are not careful, relying too much on the promise of new technologies allows us to ignore what disabled people say they need and to pretend that technology can remove all differences. French goes on to cite Wolff, writing in 1986 about the imagined disabled student in the distant future of 2001 (!), who argued that real human relationships can never be replaced by technology. Under the pressure of our day-to-day jobs, it can feel tempting to believe that every barrier to learning can be removed automatically, at the touch of a button, without needing to think more deeply about the issues. We can therefore see some of the complexity in how technology is viewed by disabled people: it can be a help, it can be a hindrance, but ultimately it will not be a cure-all. And yet, in the second edition of Disabling barriers—Enabling environments, Sheldon (2004) argues, ‘Every new technological breakthrough is inevitably hailed as a saviour for disabled people, as a way of minimising their ‘deficits’ and thus making them less dependent on other people’ (p. 156). We must be careful not to fall into this trap with GenAI.

At this point, it is helpful to consider how technology is already used by students in education, especially disabled students. Technology that was once niche but is now mainstream includes: video calls (once only available via expensive equipment, now ubiquitous); speech-to-text and text-to-speech tools (once niche and unreliable pieces of software, now built into the most mainstream computer programs); and closed captions/subtitles (once the preserve of a few TV channels, now available on almost all videos online, albeit of varying quality). All these technologies make students’ lives easier on a day-to-day basis. They remove barriers to learning, so that students can focus on the important information. Yet, at one point, each of these technologies would have been considered a specialist product. Just a decade ago, d/Deaf and disabled students would have struggled to access some of this technological support, either because it was unreliable or because it was too expensive. Technologies that were once seen as ‘assistive’ are now simply mainstream.

So, what does this mean for GenAI? Like other forms of technology, it straddles the line between assistive and mainstream technology. If students use it as a ‘study buddy’, is this assistive technology? Perhaps it does not matter how we label it; what matters is recognising that disabled and non-disabled students alike find GenAI useful for removing certain barriers to their learning – for example, to summarise complex texts or to proof-read (British Dyslexia Association, 2023). Rather alarmingly, however, one literature review found that GenAI was perceived as a ‘saviour’ that would lift the ‘burden’ of disabled young people off educators (Rice & Dunn, 2023). Technology alone will not result in the full inclusion of disabled students. If we are aiming for true inclusion, we need to consider the underlying issues that make GenAI in education so complex.

Towards a cripistemology of GenAI

Among some teachers in HE, however, there is pessimism and worry about what GenAI might mean for the future of education. There are fears that it might turn students into the opposite of active learners: passive, non-thinking automatons, plagiarising from the unattributed thoughts of millions, with no understanding of the biases and prejudices built into the system. Perhaps, as when the Internet first arrived, AI makes students a promise: ask any question and you will get a quick, friendly answer, without needing to speak to a human. But of course, having an instant answer to any question does not solve every problem. The only way to help students truly engage in their learning is to have honest, ongoing, sometimes difficult conversations about the nature of teaching, learning and knowledge. Approaching AI in our teaching cautiously, and understanding its pros and cons, is surely the only way we can ensure that real human connections with our students remain the priority.

Cripistemology (a portmanteau of ‘crip’ and ‘epistemology’) is a way of ‘knowing and unknowing disability, making and unmaking disability epistemologies’ and rejecting certainty and normalcy (Johnson & McRuer, 2014, p. 130). If we start thinking cripistemologically, we can begin to question the assumptions that so much discussion of GenAI skirts over. For example, what do we lose when the data that GenAI uses is based only on written English, and not on videos of sign language? Steps are being taken to address this (GoSignAI, 2023), but sign language data lags behind written data by a huge margin. This is a problem for a number of reasons. Firstly, it means that students who communicate through sign language are on an unequal footing, forced to interact with GenAI primarily through written language. Beyond that, the lack of sign language data both demonstrates and exacerbates the primacy of written language in academia. It reinforces the status quo: that knowledge is held in written form and anything outside of this is second-rate. This is bad for all of humanity, as it fails to recognise the knowledge and wisdom that exist beyond the written word.

GenAI may be another example in a long history of ambivalent feelings among disabled people towards technological advances in education. On the one hand, AI tools have many uses that some disabled students might find particularly valuable. On the other, technology has often had a sinister side for disabled people: expectations that we should aim for high-tech ‘cures’ instead of creating a more just society with equitable educational systems, for example. Part of creating equitable educational systems is listening to and understanding disabled people’s own collective knowledge and wisdom. As educators, we can take this moment to step back and think about what is important in our learning and teaching practice. This goes beyond guidance or policies on using GenAI in the classroom. Instead, we should be asking ourselves what knowledge we value and how we can work with disabled students as intellectual partners, not problems to be solved.

Acknowledgements

With thanks to the Active Learning Network organisers and attendees of the Active Learning Conference 2024, where an earlier version of this essay was presented.

References

British Dyslexia Association. (2023). Could ChatGPT change your life? https://www.bdadyslexia.org.uk/news/could-chatgpt-change-your-life

French, S. (1993). What’s so great about independence? In J. Swain, V. Finkelstein, S. French & M. Oliver (Eds.), Disabling barriers—Enabling environments (pp. 44–48). Open University Press.

GoSignAI. (2023, May 12). We’re not collecting enough sign language data. [Video]. YouTube. https://www.youtube.com/watch?v=ih_UX4VIrVU

Johnson, M., & McRuer, R. (2014). Cripistemologies: Introduction. Journal of Literary & Cultural Disability Studies, 8(2), 127–147. https://doi.org/10.3828/jlcds.2014.12

Oliver, M. (2013). The social model of disability: Thirty years on. Disability & Society, 28(7), 1024–1026. https://doi.org/10.1080/09687599.2013.818773

Rice, M. F., & Dunn, S. (2023). The use of artificial intelligence with students with identified disabilities: A systematic review with critique. Computers in the Schools, 40(4), 370–390. https://doi.org/10.1080/07380569.2023.2244935

Sheldon, A. (2004). Changing technology. In J. Swain, S. French, C. Barnes & C. Thomas (Eds.), Disabling barriers—Enabling environments (2nd ed., pp. 155–160). Sage.