A stable concept of GenAI literacy could be harmful

I was offered the opportunity at the start of this year to write a short position statement on the subject of generative AI literacy for a potential OER collection of articles, and penned the following 500 words in response. Since then I’ve been back down the AI rabbit hole on a number of different fronts (the Open Source AI Definition; teaching a course on critical inquiry; pondering a possible conference session; writing a research proposal; thinking ahead to a talk later in the year), and in all cases bar one I find myself returning to the core of my thinking below.

How do you define GenAI literacy?

A neat definition of GenAI literacy will continue to be a moving target. The concept of a stable or complete literacy is potentially a marker of an undesirable future in which it is used as a closure mechanism against sustained critical inquiry and alternative possible futures. Defining the boundaries of a space of rapidly evolving technology that is over-hyped (Nemorin et al., 2023), poorly understood, and increasingly politicised (Robins-Early, 2023) is hard, and we should not shy away from grappling with its complexity. Reducing GenAI literacy to something like a curriculum risks aligning it with positivist framings that begin from the belief that this technology should be used (Knox, 2023).

Whilst much of the technical sophistication of GenAI is the product of statistical calculation at scale, the hallucination of sentience remains compelling for many. However, issues of bias and various other harms are increasingly being recognised (Selwyn, 2024), and solutions remain nascent (Schwartz et al., 2022). The implications of GenAI are also highly contextual, and the variety of potential interactions across our civic and professional lives, of which education systems are one part, means that the business of GenAI literacy is a shared societal endeavour.

This does not mean that previous work in digital literacies is irrelevant or unhelpful, though. Work by doteveryone to define “digital understanding” is an example of the kind of broad framing that could be useful when thinking about the scope of GenAI literacy.

“Digital understanding is not about being able to code, it’s about being able to cope. It is about adapting to, questioning and shaping the way technologies are changing the world.” (Miller et al., 2018)

And what can be done to foster the GenAI literacy of education professionals and students?

A vital aspect of the doteveryone definition is that it includes a call to action and a recognition of agency in “shaping the way technologies are changing the world”. Engaging with the socio-technical nature and inherent complexity of GenAI opens up the possibility of exercising that agency, but only if critical inquiry and debate are not cast as anti-progress in the face of an inevitable future in which we risk some ill-specified loss by being “left behind”.

If the work of fostering GenAI literacy is one of exploring a space of complexity, then the academy is already well positioned to do it (Knox, 2023). That work will best be achieved through research, discussion, pilots, and collaboration, and through sharing findings, practices, and resources. We will evaluate and codify what we learn, just as we always have done.

Universities are also collections of labour (Connell, 2022), and if the implications of GenAI are highly contextual then we need to consider the full breadth of institutional engagement beyond our learning, teaching, and research activities. For example, should our legal experts be helping to inform emergent regulatory regimes? Should our technology and procurement specialists be developing new decision-making frameworks that keep us aligned with existing commitments to accessibility, labour rights, and sustainability targets? What are the needs and opportunities of our whole community?

References

Connell, R. (2022). The good university: What universities actually do and why it’s time for radical change. Bloomsbury Academic.

Knox, J. (2023). (Re)politicising data-driven education: From ethical principles to radical participation. Learning, Media and Technology, 48(2), 200–212. https://doi.org/10.1080/17439884.2022.2158466

Miller, C., Coldicutt, R., & Kitcher, H. (2018). People, power and technology: The 2018 digital understanding report. Doteveryone.

Nemorin, S., Vlachidis, A., Ayerakwa, H. M., & Andriotis, P. (2023). AI hyped? A horizon scan of discourse on artificial intelligence in education (AIED) and development. Learning, Media and Technology, 48(1), 38–51. https://doi.org/10.1080/17439884.2022.2095568

Robins-Early, N. (2023, August 21). ‘Very wonderful, very toxic’: How AI became the culture war’s new frontier. The Guardian. https://www.theguardian.com/us-news/2023/aug/21/artificial-intelligence-culture-war-woke-far-right

Schwartz, R., Vassilev, A., Greene, K., Perine, L., Burt, A., & Hall, P. (2022). Towards a standard for identifying and managing bias in artificial intelligence (NIST Special Publication 1270). National Institute of Standards and Technology. https://doi.org/10.6028/NIST.SP.1270

Selwyn, N. (2024). On the limits of artificial intelligence (AI) in education. Nordisk Tidsskrift for Pedagogikk og Kritikk, 10(1), Article 1. https://doi.org/10.23865/ntpk.v10.6062
