So, after my last wee rant about AI, it continues. ChatGPT has eaten our collective minds. We continue to talk about the need to change assessment practices in response to AI, or enthuse about the ways in which it might help us with some of the less exciting tasks in our own work lives, and I do not disagree. But I also find myself not caring very much. These are so many reactive, proximate concerns, and my ultimate concern is that we are further eroding the creativity and dignity of human labour.
Full disclosure – I am inhabiting dark places right now.
Since January I have been following a couple of Digital Detox programmes: the Middlebury DLINQ one, on alternative digital futures, and the TRU one, on AI and education futures. I'm finding the intersections between them generative, a little stimulating friction between human ideas and approaches. As an example, I'd recommend pairing Brenna Clarke Gray's post from mid-January, which asks questions about performative equity and whether removing the human from decision-making is ever a good idea, with Bob Cole's essay looking for examples of AI collaboration that support human flourishing.
I am appreciating very much the DLINQ focus on speculation and the power of stories. Well-made stories, refined by time and telling, can carry essential and illuminating truths.
“The more one knows fairy tales the less fantastical they appear; they can be vehicles of the grimmest realism, expressing hope against all the odds with gritted teeth.”
(Marina Warner, From the Beast to the Blonde: On Fairy Tales and Their Tellers)
I also take heart that we are out here actively trying to dream the future and change our trajectory.
“Our present reality is better described by near-future science fiction dystopias than by standard economic analysis; ours is a hot planet, with micro-drones flying over the heads of the street hawkers and rickshaw pullers, where the rich live in guarded, climate-controlled communities while the rest of us while away our time in dead-end jobs, playing video games on smartphones. We need to slip out of this timeline and into another.”
(Aaron Benanav, Automation and the Future of Work)
That ChatGPT has been able to replicate so many examples of commonplace writing so easily says something about how beige that work is. If ChatGPT can write a “good” course syllabus, it's not because it's smart; it's because it's been trained on a large corpus of pretty undifferentiated syllabi written by us. Likewise job descriptions, press releases, essays, etc. So much bullshit text makes for a highly effective bullshit text generator.
“Data doesn’t suffer. ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitations, and hence it doesn’t have the capacity for a shared transcendent experience.”
(Nick Cave, The Red Hand Files)
So I lament that we are excited about outsourcing our creative labours to tools like ChatGPT. I lament that we have filled our lives with so much work creating such mounds of worthless word-debris that we feel its burden.
I did a keynote talk for ICDE in late 2021. They asked me for a keynote on Flexibility, Accessibility, Scalability, and Innovation in Quality Assurance, and I’m not sure that they got the talk that they wanted, but they got the story I wanted to tell.
In preparation I spent time reading the region-specific reports that ICDE produced about the pandemic's global impact on education. They will tell you exactly what you expect about the distribution of resources and sector capacity globally, and I couldn't unhear Laura Czerniewicz as I read them.
“The pandemic has made visible what should have never been ignored. Now that the impacts of inequality are clear and visible, they must never again be rendered unseen.”
(Prof Laura Czerniewicz, Letting the Light into Higher Education)
I talked about what I saw as the critical post-pandemic challenge for quality assurance in online and distance education: the need to explicitly address inequity. I talked about the need for quality assurance that supports the design of education for complex and unknown futures, and education that accommodates the material realities of students' lives. I acknowledged that this takes investment and argued that we should be focussing that investment on those farthest from justice. We exist in systems of limited capacity, so we need to invest where we can do the most good, because not moving the needle at all is no longer acceptable.
I talked about doing this work in an increasingly technologically mediated education system, and that we need to expand the scope of quality assurance to include the technologies we use as well as our pedagogy, because those technologies are not neutral, and come with agendas. And I talked about the challenges in designing for complex and unknown futures when forces outside the academy might be actively trying to shape our futures for their own profit.
“If there is a desire to create futures that do not reproduce the violence of the past, then an ethics of futures in education will turn itself to the task of listening to and engaging with the experiences, desires and beliefs of those who have been harmed and marginalised, exploited and oppressed”
(Keri Facer, Futures in education: Towards an ethical practice)
And now I read about how ChatGPT was built; the always-hidden human labour behind the platform. I read about precarious off-shore work, poorly paid, and utterly traumatising, all to ensure that our delicate sensibilities are not offended by the vilest things we can do to each other and the world around us.
In a recent Time article, the CTO of OpenAI identifies “…questions about how you govern the use of this technology globally. How do you govern the use of AI in a way that’s aligned with human values?” as the key ethical question still to be resolved.
STILL TO BE RESOLVED.
“Governing the use of this technology” is also a neat sidestep away from looking too closely at how this technology comes into being. It’s calling for regulation after the fact. So, when I watch universities in mostly well-off countries talk about assessment, or automating relatively uninspiring tasks, I really think we’re missing the fucking point.
“We must not look at goblin men, We must not buy their fruits …”
(Christina Rossetti, Goblin Market)
Sadly, though, I’m not sure that refusal is an option for us. We can’t not engage with this stuff when it’s all around us.
But we can tell stories, and whilst they might not be the stories people want to hear, they can carry the grimmest realism, and maybe by doing this work we can slip out of this timeline and into another.
“In this light, we urge researchers and journalists to also center low-income workers’ contributions in running the engine of “AI” and to stop misleading the public with narratives of fully autonomous machines with human-like agency. These machines are built by armies of underpaid laborers around the world.”