If you feel safe in the area you’re working in, you’re not working in the right area

We got the horrible news a few days ago that the BC Court of Appeal has decided to dismiss Ian Linkletter’s appeal of his anti-SLAPP application, and Proctorio will be able to move ahead with suing him. If you haven’t been following this case, you can catch up using this handy summary with analysis from Cory Doctorow. It’s entirely partial and completely biased, because many of us have strong feelings about this case. Beyond the personal impact on a valued and much-loved member of our learning technology community, many of us see it as an existential threat to the jobs we are employed to do. Terry Greene perfectly sums up what a lot of us feel right now:

I fundamentally believe that unless we are able to understand, evaluate, and critique the educational technologies that we use, we cannot make strong claims about the quality of digital education. I believe this because I believe (as many do, and as much research validates) that educational technology is not simply a tool, but rather is intimately bound together with teaching and learning, and that each influences the other. It’s simply not possible to look at one without considering the other. Beyond our jobs as learning technologists, we’ve also seen a chilling effect on academic research as a result of this court action by Proctorio. This should frighten us all.

And what we see when we look at AI-enabled proctoring tools such as Proctorio is evidence of significant harm to students, largely caused by invasive surveillance practices combined with biased applications of AI. In the absence of clear information about the specific technical workings of these products (and keeping this information hidden is a significant part of the court case against Ian, amongst others), there is enough anecdotal evidence to suggest that the impact on students and their learning is bad enough to warrant not using these technologies.

Expressing this kind of view, given the amounts of money at stake here, does make one think that learning technology might be an increasingly dangerous field to work in, especially if you’re the kind of learning technologist who is curious, critical, and research-aware. There’s a whole new personal insurance market opening up here if someone is looking for an opportunity…

At the same time, I’ve been trying to onboard myself into a new role on the Board of the Open Source Initiative. New colleagues there are heavily engaged with policy work, carefully analysing the potential impacts of proposed EU and US laws and regulations in cyber security and AI. I read this short blog post from Executive Director Stefano Maffulli, who was attending a conference on proposed AI regulation, and I couldn’t help but see, at macro scale, the kinds of challenges we are grappling with right now in higher education.

Namely, that there are large vested interests in minimising transparency about how products work, despite all the fine words to the contrary:

“…her [Rebecca Bauer-Kahan] proposed legislation is strongly opposed by the whole of Silicon Valley. Even if the various Zuckerberg, Pichai, Musk, Altman say publicly that the industry wants AI to be regulated, when they go to Sacramento they sing another song.”

And that we lack adequate tools and understanding today to address issues when they arise:

“The lack of tools for governance was highlighted by Gross on multiple occasions when talking about the AI Act, somewhat revealing how ambitious the legislation is. When asked, he used sentences like “we hope there will be tools for governments to oversee the quality of AI” or “There should be systems to certify…” which don’t send any sign of confidence that the infrastructure necessary for the AI Act will be anywhere near ready by the two years timeframe set by the European Commission.”

As I watch AI technologies emerge and rapidly become incorporated into products or practices within education, I really worry about the kinds of ongoing “collateral damage” we are going to see. And what I’m not seeing is a substantial conversation about that. In one sense it’s natural for us to want to play and experiment and understand – it’s what universities do – but what I find deeply troubling is the enormous push to do this work within our everyday operations, often via the purchase of edtech, because (as I’ve said before) this is an unregulated area of activity, unlike our formal research. I think the academy is taking a heck of a risk by being the testing playground for a lot of this stuff.


** For anyone who cares, the title of this post is from a David Bowie quote. The only thought leader I care to mention.
