[Image: A grey cat on a fleece rug, yawning.]

Some ill-formed thoughts about AI, robot colleagues, resistance, refusal.

Unless I toss all my devices in the bin and take up cat-sitting as a profession, I cannot avoid the internet stooshie about AI in education, in this case hand-wringing about ChatGPT and plagiarism. Can we seriously not think of more interesting conversations to have than more ways in which to hate on students and amp up the edtech arms race?

I read Vitomir Kovanovic’s recent piece in the Conversation and have been following John Warner’s excellent articles for a couple of weeks now (amongst a number of other things). Both are spot on about education being far too focused on the product of student learning rather than the process, and their suggested responses in terms of assessment practices are absolutely right. Many people I respect are providing excellent summaries of the ways in which a pretty stupid AI is now presenting challenges to pretty stupid assessment practices, backing up many decades of excellent educational research that already point to our little problem with assessment.

I’m also seeing creative conversations about how to engage with tools like ChatGPT in generative and creative ways and I can’t help but reflect back on the work of my colleagues at Edinburgh and their 2016 Manifesto for Teaching Online:

“Automation need not impoverish education: we welcome our new robot colleagues”

This could be a really exciting space, so why isn’t it?

Vitomir Kovanovic is right when he says that we will need to “…think of ways AI can be used to support teaching and learning, rather than disrupt it…”. However, I found it a little frustrating that the article doesn’t give any concrete examples of what that might look like. That gap is problematic because it gets filled with all sorts of imagined rubbish, which gets in the way of having conversations about why we would want to engage and what engagement could look like.

Which leads me back to some of the other classic hits of the AI in Education social media “discourse” that have been on repeat over the last few weeks. These include:

  • “I said this 10 years ago, buy my <whatever>”
  • “This is the end of the university as we know it / 10 universities in the future”
  • “This is inevitable / universities are too slow to embrace change”.

The first two of these are mostly the late-stage grift of the ageing keynote crooner, or the has-been futurologist in desperate search of relevance with a new generation. Meh, whatevs.

The third one is the one that continues to generate a reaction from me. I am so sick of hearing people call out higher education for not engaging, yet I see very little in the broad public conversations recently that has inspired my engagement. If you care about the academy and genuinely want engagement please start sharing more concrete examples of inspiring and exciting practice. I know they exist. I’ve worked with some, I’ve watched lots more conference presentations about them. Help to change the conversation, not scold.

This kind of criticism also calls out the academy for being too slow to change but doesn’t take the time to engage with why resistance and refusal exists.

Our academies have been hollowed out by a couple of decades of neoliberal thinking. In-house technical capacities have been decimated and cultures of out-sourcing are rife. If you are worried that money that could have been invested in HE is going to go to commercial entities instead, then you haven’t been watching how much money flowing into universities flows straight back out to commercial suppliers already.

Many student and teacher experiences of AI over the last few years have been of AI proctoring, which is an utter dumpster fire and mostly serves as a good set of case studies in bias and poor testing of technology (yes, yes, and our shitty assessment practices). And after the last couple of years we’ve all had, let’s not pretend that AI technologies are not riddled with the sexism, racism, and gender bias that we’ve completely failed to tackle in every other area of life.

I cannot be uncritical about this technology, as excited about the possibilities as I may be.

This kind of talk also obscures a rich history of successful change in universities. Institutions that have persisted for hundreds of years do so precisely because they can change. As the Conversation article above says, “History has shown time and again that educational institutions can adapt to new technologies”, and Audrey Watters, in her History of Teaching Machines, documents the many ways in which teaching machines of various types have been a recurring trend over the better part of a century. In this sense I can buy that the adoption of AI in education probably is inevitable; but I am much less convinced of any inherent revolutionary potential, because that seems to me, at the moment, to be founded on some pretty essentialist thinking.

“Any educational innovation involves likely reconfigurations of power, especially in terms of who gets to decide what “teaching” and “learning” is.” (Neil Selwyn, Should Robots Replace Teachers?)

I have no doubt that many universities will embrace AI if it offers yet another way to reach for scale at the same time as further devaluing the expert workforce of the academy – and I put learning technologists and learning designers in there as well as our academic colleagues. For as long as public funding of universities is slashed, competition is encouraged, and edtech is sold as competitive advantage the adoption of bullshit technologies will thrive. And if you want to fight about that then take a look at the history of TurnItIn then come ahead for a square go.

So, whilst we continue to wring our hands about ChatGPT today (and whatever else tomorrow) and tear down the academy more generally, here’s my list of things we’re not talking about that we really should be if we want actual meaningful engagement*:

  • We’re not talking about precarious labour practices that leave colleagues feeling threatened by technology rather than excited by it.
  • We’re not talking about marginalised or low power groups who have been disproportionately harmed already by shoddy implementations of AI.
  • We’re not talking about students as consumers, or the high personal costs of their education, which make us much more wary of experimentation.
  • We’re not talking about external regulators, or the differential power relationships between institutions, which make us more conservative in our approaches.
  • We’re not talking about the wholesale hollowing out of digital capabilities within the sector that limit our ability to embrace and support anything below the level of the enterprise, especially if it doesn’t come with a support contract.
  • We’re not talking about the lack of digitally knowledgeable senior leaders in institutions, despite all the talk of digital transformation.
  • We’re not talking about league tables and all the ways in which we are coerced to treat each other as competitors, rather than collaborate.

* not exhaustive and not all universities, obviously.
