I’m taking part in 2 sessions at ALT-C this year and whilst they might at first glance look totally different, they are in fact underpinned by the same critical thinking and ethical approaches that guide a lot of my (and our) work at Edinburgh.
We think carefully about learning technologies. We think about the ways in which we use them and the ways in which our University community experiences them. We think about where technologies come from, because we know they aren’t neutral. Always we look for ways for learning technologies to enhance the role of the teacher or the experience of the student.
Who knows where the audio track goes?
The first session I’m doing is a Gasta session, and in 5 minutes flat I’m going to try to explain our ethical student-led subtitling pilot. If I make a total dog of it, there will also be a poster on display.
When we started exploring how to increase the amount of media we have that is subtitled, we learned much more about how automated speech-to-text technology works, or indeed, doesn’t. We learned that human intervention is still integral to the process of getting to properly accurate transcription, and that with the wide range of accents and specialist terminology, this is probably more true in academia than in other places. We also learned that there is a very large gig economy in this area. The irony of improving the accessibility of media content in an elite Western university by potentially using precarious off-shore labour was not lost on us. So we decided to run a pilot scheme to understand what such a service would look like if it were run within the University and provided flexible employment for our students (guaranteed hours and at a living wage, btw). I’m going to talk about what we did, what we learned, and what’s next.
I wrote a little about this on this blog as it shaped up too:
Supporting large student cohorts with timely and personalised feedback
The second session I’m doing is a super-speedy workshop to take participants through a hands-on exercise in using OnTask, an open-source learning analytics system for delivering personalised feedback to large groups of students. I’m going to talk about learning analytics that is neither predictive nor a dashboard, that has algorithmic transparency, and that promotes the agency of the teacher.
What I particularly like about this tool is that it solves a practical problem: how can teachers provide coaching feedback to students in very large courses? Maybe in a course of up to 30 it would be possible to write an email to each student, and probably one would cut and paste a few key phrases between emails to save time, but beyond those numbers? And of course it’s in big course cohorts that students tend to feel least connected and most distant from the teacher, so the need is arguably more acute.
What I also like about OnTask is that it allows the teacher to operate at scale, rather than negating the teacher and having the tech take over. Let me try to explain…
Using OnTask begins with looking at the specific learning design of a course and deciding where coaching feedback would be of most benefit to students – with the overall aim of helping students stay “on task”. The sweet spot for feedback is usually once students have done enough work to have some understanding (and have generated some data), but not too close to any assessment that it’s too late for them to take corrective action. This conversation is usually between learning technologists and academic colleagues. Learning technologists bring knowledge of the data available and what the tool can do, and academic colleagues bring knowledge about how the course is taught and what feedback is appropriate or useful.
Once it’s been decided where feedback is useful and what data will identify engagement with activities up to that point, teachers write short “snippets” of feedback text, associated with simple rules based on that data. For example, if a student activity in the course is to post on a discussion forum and leave a comment for a peer, then teachers can write feedback for each of the following scenarios:
- No posting, no comments
Could provide some moral support, link to advice on how to write your first post, and reinforce routes to general student support.
- Posted, no comments
Could provide positive feedback and gently emphasise importance of commenting for peer learning.
- Posted, and commented
Could provide positive feedback and thanks for taking the time to leave a peer comment, gently emphasising the importance of peer learning.
- Posted loads and commented loads
Could provide positive feedback, reassurance that they don’t need to go too far over and above, but that it’s good to be engaged.
You can imagine that if there were also some specific course materials to engage with, and maybe a quiz, teachers could write rules and actionable feedback for quite a few different scenarios. All of these rules and snippets of feedback are then combined with a data extract of the students’ activities, and basically a fancy mail-merge takes place which creates a personalised email for each student.
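To make the “fancy mail-merge” idea concrete, here’s a minimal sketch in Python of how rule-plus-snippet feedback could work. This is purely illustrative — the student records, rule conditions, and function names are my own invention, not the real OnTask data model or API — but it follows the four forum scenarios above: each rule pairs a condition on a student’s activity data with a feedback snippet, and the matching snippets are merged into a personalised message.

```python
# Illustrative sketch only: hypothetical data and names, not the OnTask API.
# Each rule is a (condition, snippet) pair; the "mail-merge" picks the
# snippets whose conditions match a student's activity data.

students = [
    {"name": "Alex",  "posts": 0,  "comments": 0},
    {"name": "Bindi", "posts": 2,  "comments": 0},
    {"name": "Chen",  "posts": 3,  "comments": 2},
    {"name": "Dana",  "posts": 12, "comments": 15},
]

# Rules mirroring the four discussion-forum scenarios above
rules = [
    (lambda s: s["posts"] == 0,
     "No post yet? Here's some advice on writing your first post, "
     "and a reminder of where to find general student support."),
    (lambda s: s["posts"] > 0 and s["comments"] == 0,
     "Great work posting! Commenting on a peer's post really helps "
     "everyone's learning."),
    (lambda s: 0 < s["posts"] <= 10 and s["comments"] > 0,
     "Thanks for posting and for taking the time to leave a peer "
     "comment - peer learning matters."),
    (lambda s: s["posts"] > 10 and s["comments"] > 0,
     "You're very engaged - no need to go far over and above, but "
     "it's great to see."),
]

def personalised_email(student):
    """Combine every matching snippet into one message for this student."""
    body = [snippet for condition, snippet in rules if condition(student)]
    return f"Dear {student['name']},\n\n" + "\n".join(body)

for student in students:
    print(personalised_email(student))
    print("---")
```

Because the conditions partition the activity data, each student gets exactly one of the four snippets here; in practice, rules over several activities (materials, quizzes, forums) would each contribute a snippet to the same email.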
It doesn’t solve all problems, of course – it isn’t possible to tell anything about the quality of a discussion forum post using this approach – but teachers can dip into the course itself as they normally would, and if necessary also include some generic messages in the email about the overall progress of the course or the cohort.
It does require a fair bit of work up front to identify the feedback points and design the feedback text, but it doesn’t seem wrong to me to spend time thinking about and writing good feedback for students. I also think it’s valuable to spend time thinking about how the design of a course influences student behaviour. It does have to be designed to fit the specific learning design of the course, so the ability to rapidly scale up use across an institution is limited. But once this work is done, it can be used over and again for each delivery of the course for as long as the learning design remains the same.