Breaking your back in the new low

We define Learning Technology as the broad range of communication, information and related technologies that can be used to support learning, teaching and assessment. Our community is made up of people who are actively involved in understanding, managing, researching, supporting or enabling learning with the use of Learning Technology. (ALT website)

This post has been rolling around in my head for a couple of weeks now. I wrote three drafts of it last week and still can’t seem to quite get it right (and sweet baba ganoush I wrote them on an iPhone, typing quotes from docs stored on a Kindle*, all of which was painful). The essence of what I thought I wanted to say is that our jobs as learning technologists are changing; but then I did a bunch of reading and decided that actually our jobs haven’t changed at all, but the digital and information literacies that we need are shifting at an exponentially increasing pace. I think this presents us with a bunch of near future challenges that we need to respond to, and which many people are, which then begs the question of why write this post at all… I feel another redraft coming on…

A second wave of change is currently taking place, whereby institutional systems – while still important to students and for general course management and communication – are ceasing to be the technologies of choice for the vast majority of learners when it comes to their personal and social study practices. Educational technologists are once more having to serve two quite different masters: the centrally managed and increasingly integrated learning environment, comprising the VLE, assessment management, e-portfolios, content repositories, and the information systems that interoperate with them; and the social, personal and immersive technologies (Web 2.0, streaming data, file sharing etc…) that are the new engines of innovation. (The positioning of educational technologists in enhancing the student experience, 2010, ALT, p22.)

I’m not sure we ever really did properly deal with this “second wave of change”, and anyway those “new engines of innovation” turned out to be creating a hellscape, and it wasn’t for our benefit. That aside, I think another wave is rolling in; potentially it’s happening faster in my institution for particular reasons, but I think there are signs that it’s happening for others too. The use of computation and data handling is pushing into many more subject areas – I’m not talking just about the data exhaust** of many of the technologies and platforms we use – I’m also talking about the kinds of learning activities that students are being asked to undertake across a wide variety of subjects. That changes the kinds of questions and challenges that we face as learning technologists in pretty substantial ways, I think.

The issues around learning technology that we now need to be alert to require us to have a higher level of understanding about platforms, systems, code, data, and ethics. As learning technologists we listen to and learn from our academic colleagues about how they are using technology in their practice; and bring our specialist knowledge to advise on the use of various technologies in specific pedagogical contexts. We need to move beyond just thinking about learning technologies as tools (or efficiency mechanisms) and consider the generative ways in which technologies might act as co-agents alongside teachers.

It also needs to be possible for as many of our academic colleagues and our students as possible to engage meaningfully and critically with the increasing digitisation of education, and to bring as diverse a set of perspectives as possible to bear. Otherwise the risk is that their voices will be marginalised, and other interests and biases will come to the fore. Learning technologists have a vital role to play in this space, I believe, but it requires institutions to invest in time, staff development and digital skills for all of us.

Signs and Portents

The kinds of learning and teaching technical infrastructures we need are changing

The idea of giving students their own LAMP stack (Domain of One’s Own) was radical enough, but it seems like the requests for ever more comprehensive and flexible web infrastructures to meet a wider variety of learning and teaching needs keeps coming.

The Noteable service is alive, up and running, ready for action. Okay so actually it was technically alive last year as people just couldn’t wait that long but now it’s official. In brief, the Noteable service is a cloud-based service providing Jupyter notebooks. If you haven’t come across computational notebooks before then I’ve written a brief explainer here. We are now into the pilot phase of this project, looking into the benefits (and cost) of providing a centrally supported cloud-based notebook service. Whilst Notebooks have implications for many different areas, we are specifically interested in the benefits within learning and teaching. (Noteable at Edinburgh)

This is a pilot project that my team are running – due to kick off in earnest with our carefully selected cohorts of willing volunteers from September 2018. This project came out of the research computing space in our institution – a number of Schools were running their own JupyterHub services for learning and teaching, and when we went to speak to them about a central service to support research, they told us this was what they needed more. Yes, it’s been LTI-ed and hooked into the VLE as well.
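For anyone who hasn’t met one, a computational notebook interleaves prose with executable code cells, with results rendered inline below each cell. A minimal sketch of the kind of cell a data-handling exercise might ask students to complete (the dataset and variable names here are invented for illustration, not from the Noteable pilot):

```python
# A typical notebook cell: take a small dataset and summarise it.
# In a Jupyter session this runs interactively, with the output
# appearing directly beneath the cell.

data = {"maths": [62, 71, 55, 80], "physics": [58, 66, 74, 69]}

# Compute a per-subject mean – the kind of step a student might
# be asked to fill in during a notebook-based exercise.
means = {subject: sum(marks) / len(marks) for subject, marks in data.items()}

print(means)  # {'maths': 67.0, 'physics': 66.75}
```

The point is less the code itself than the format: the surrounding narrative, the code, and its output all live in one shareable document, which is exactly what makes notebooks attractive across so many subject areas.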

The kinds of assessment activities we need to facilitate are changing

CodeRunner is a free open-source question-type plug-in for Moodle that can run program code submitted by students in answer to a wide range of programming questions in many different languages.

Anyone with a Learn course can link through to CodeRunner via the “UOE STACK” tool in your Learn course. Chris Sangwin in the School of Mathematics set this up and they are very pleased with it for Python and MATLAB. They also have plans for C++, R and Maxima in the future, and other languages may be supported on request. (Informatics Learning Technology Support at University of Edinburgh)

CodeRunner is being run by our School of Maths and offered to other Schools to use, but it’s linked into the central VLE – Blackboard Learn – via an LTI connection that my team developed and support. Incidentally, yes, we are using a stripped-back version of Moodle, plugged into BB Learn via LTI, as an assessment platform. Apparently an NGDLE can be just anything you want it to be, including two VLEs bootstrapped together.

As part of our Jupyter pilot we’ve also already had to tackle how to grade notebooks, and are working with our School of Informatics, who have been trialling nbgrader.

The kinds of questions learning technologists are being asked are changing

This thread is worth following to see a whole bunch of people pile in and pick through the complexity involved in supporting this kind of activity. A useful pause for reflection about the potential breadth and diversity that digital skills courses for students may need to cover in the not too distant future.

Trends in reports are…trending…

Rise of New Forms of Interdisciplinary Studies
According to the Melbourne Sustainable Society Institute, multidisciplinary research refers to concurrent exploration and activities in seemingly disparate fields. Digital humanities and computational social-science research approaches are opening up pioneering areas of multidisciplinary research at libraries and innovative forms of scholarship and publication. Researchers, along with academic technologists and developers, are breaking new ground with data structures, visualization, geospatial applications, and innovative uses of open-source tools. At the same time, they are pioneering new forms of scholarly publication that combine traditional static print-style scholarship with dynamic and interactive tools, enabling real-time manipulation of research data. (EDUCAUSE Horizon Report Preview 2018)

Big-data inquiry: thinking with data:
New forms of data, data visualisation and human interaction with data are changing radically and rapidly. As a result, what it means to be ‘data literate’ is also changing. In the big data era, people should no longer be passive recipients of data-based reports. They need to become active data explorers who can plan for, acquire, manage, analyse, and infer from data. The goal is to use data to describe the world and answer puzzling questions with the help of data analysis tools and visualisations. Understanding big data and its powers and limitations is important to active citizenship and to the prosperity of democratic societies. Today’s students therefore need to learn to work and think with data from an early age, so they are prepared for the data-driven society in which they live. (Innovating Pedagogy 2017)

Navigating post-truth societies:
Post-truth was 2016’s Word of the Year, according to Oxford Dictionaries. Fake news and information bubbles are not new but awareness of their impact on public opinion has increased. People need to be able to evaluate and share information responsibly. One response is to integrate these skills within the curriculum. However, this raises questions: How can we know which sources to trust? The ways in which people think about such questions are called ‘epistemic cognition’. Researchers have developed ways of promoting learners’ epistemic cognition. These include promoting understanding of the nature of knowledge and justification as well as fostering abilities to assess the validity of claims and form sound arguments. (Innovating Pedagogy 2017)

Again, data, data-handling, ethics, code. Disciplinary boundaries are thinning. I’m mounting no solid defence of the accuracy of the Horizon Report series here, but I recognise their mid-term trend, and the two education practices identified in the OU Innovating Pedagogy report when I look at the ambitions for new centres like the Edinburgh Futures Institute and the development of new academic courses and programmes.

Edit: Spotted this tweet today (26 June 2018)


Beyond some of the obvious skills challenges, some further questions spring to mind when thinking about all of this:

  • How will we handle open licensing and sharing of many kinds of these new digital artefacts? What do we do when a computational notebook can be shared openly, but the dataset used with it can’t? How do we share AI or VR artefacts?
  • How do we build the broad internal coalitions that are required to deal with the implications of learning analytics, educational analytics and “data-driven decision making” projects? We have the expertise and knowledge about what data from learning technologies means (or indeed doesn’t).
  • How long before our OER advisors are regularly handling questions about open licensing beyond Creative Commons (GPL, Apache, BSD, MIT, etc.)? We need to be able to simply and easily explain a more diverse and complex licensing and sharing landscape.
  • How long before we’re giving every student an account in an institutional GitLab to manage their own code, to go along with their Domain of One’s Own sites?

As if by magic, in the time it has taken to draft this post…

Mind the Ethics Gap

This is something I’ve been banging on about for a while in my own institution, and thankfully other people smarter and more influential than me have also been thinking about it. We’re really good at considering ethics within our research activities. We’ve got committees and forms and processes up the yin-yang. We have a total absence of this kind of thing in our operational activities though. There’s been an assumption for many years that administrative IT and learning tech is relatively benign and “business need” is a good enough justification. As new secondary uses of data from our core platforms have been identified we’ve found ourselves caught short. We are making decisions about operational activities, decisions that have far-reaching ethical and behavioural consequences, without an adequate framework in which to consider these concerns. Thankfully GDPR and Privacy Impact Assessments will put a bit of a brake on things in European institutions, but it will be a momentary drawing of breath. We need to sort this out.

At this stage I’m going to give another plug to the Near Future Teaching project, led by Professor Sian Bayne. This is our institutional response to closing out some of these gaps, with the explicit aim of putting our institutional values front and centre, and actively directing and making our future, not just predicting it.

Okay, now I’m struggling to bring this rambling nonsense to a close. I don’t have a big finish or a final reveal. I worry that it sounds like I’m peddling the worst kind of “disruption speak” here too. I’m certainly not advocating for every one of us suddenly learning to code, or developing mad API skills (Sheila MacNeill has a lovely post on just this topic). I just think this is stuff that’s coming at us, and there are risks if we deal with it retroactively. Ta da! Damp squib ending.

* This was a holiday experiment – trying digital books on an actual eReader. In the end I loaded it up with lots of Verso books that I bought DRM-free, and various papers I wanted to read. All of which had to be emailed to an email address. So I bought an Amazon device, paid an extra premium to remove ads, then spent my own time emailing them stuff to put on it. All so that they can better target me for advertising and book recommendations. F*ckers. How could I be so dumb?

** There’s an emissions scandal metaphor in here for the taking

(Image by Кампо Вейерман [Public domain], via Wikimedia Commons)
