Why we need learning technology developers


I’ve written this piece to revisit some older thinking, but also to state my position as part of my contribution to a presentation at the APT conference on 01 July at UCL. My colleagues Jenny Scoles and Timothy Drysdale will be there to present. Since I won’t be, the best I can do is give them something to point to if someone asks an awkward question. The presentation has quite a wide scope and I’m going to dig into one area of it here; namely why we need to develop some of our own learning technologies, and therefore what kinds of learning technologists we might need for the future.

I wrote a post almost a year ago about learning technologists and some of the new challenges that we face. I’m returning to it in the light of recent conversations and papers about non-traditional practical work in a science education context:

“The issues around learning technology that we now need to be alert to require us to have a higher level of understanding about platforms, systems, code, data, and ethics. As learning technologists we listen to and learn from our academic colleagues about how they are using technology in their practice; and bring our specialist knowledge to advise on the use of various technologies in specific pedagogical contexts. We need to move beyond just thinking about learning technologies as tools (or efficiency mechanisms) and consider the generative ways in which technologies might act as co-agents alongside teachers.”

I was arguing that I saw early signs and portents that the kinds of things a learning technologist might need to engage with were changing. I was tentatively arguing for a more post-humanist frame for thinking about learning technology. Specifically, I thought that we would need more learning technologists with data handling and programming skills to meet our new challenges.

Learning technology ecosystems

Anyone who knows me knows that I have no real interest in talking about VLEs. I think they’re necessary and I think we need to do better in terms of the design and usability of courses within them, but I believe that most mainstream VLEs are best thought of as commodity infrastructure, and that they have also become the epitome of the “platform university” (see recent announcements by Instructure for instance).

I also see significant dangers in rushing to outsource all our educational technology. At the very least we risk being hamstrung by what the market provides and restricted to a genericised view of learning technology delivered via a selection of black boxes. At worst I think we dumb down the potential for technology in education and don’t adequately prepare our students for the world around them.

I’m not arguing for a purist approach here; rather that we are thoughtful and intentional about our learning technology choices, recognising that the market cannot always provide, and retaining some capability to innovate as a sector for our specific needs. I’m not alone in thinking this. The recent Near Future Teaching project (“co-designing a values based vision for digital education”) run at Edinburgh by Prof Sian Bayne reached similar conclusions:

“Provide teaching staff and students with central access to programmers and developers for joint prototyping and trialling of new ways of doing digital education. Support associated pedagogic research through the Principal’s Teaching Award Scheme and other channels.” (Vision and Aims – Playful and Experimental: Near Future Teaching Final Report)

With that in mind then, we should be embracing as normal the notion of a wider ecosystem of learning technologies; a constellation of digital things, more or less loosely joined. Some might call that an NGDLE. I might not (or I might, but in another blog post I haven’t finished yet).

One recent example of where the market doesn’t provide, and where we need some capability to innovate for our specific needs, is the subject of our APT presentation, which describes the development of simulation activities for students as part of an engineering design project.

“A web-app was built that simulated a solar panel installed on a hill, with accurate sun movements. To develop student digital literacies as part of the activity, a facility for programmatic control of the solar panel orientation was included. There were threshold concepts (Meyer and Land, 2003) that needed to be understood, such as subtracting the signal from one sensor from another in order to work out the error in the orientation of the solar panel.”

This is a good example of developing a piece of learning technology that’s tightly bound to the course context and desired pedagogical outcomes. It’s also a good example of the kind of thing that’s only currently possible if academic colleagues have programming skills because there is no commercial supplier for this kind of learning technology and in many cases no local resource available to help develop it.
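To make that threshold concept concrete, here’s a toy sketch of my own (not the actual web-app’s code, and in Python rather than whatever the app itself uses) of a solar panel controller that works out its orientation error by subtracting one light-sensor signal from another. The cosine sensor model, the 15-degree sensor offsets and the gain are all invented for illustration:

```python
import math


def orientation_error(left_sensor: float, right_sensor: float) -> float:
    # The threshold concept from the activity: the orientation error is
    # one sensor's signal subtracted from the other's
    return right_sensor - left_sensor


def sensor_reading(panel_angle: float, sun_angle: float, offset: float) -> float:
    # Toy sensor model (an assumption, not the real app's): intensity falls
    # off with the cosine of the angle between the sun and a sensor mounted
    # at `offset` degrees either side of the panel normal
    return max(0.0, math.cos(math.radians(sun_angle - (panel_angle + offset))))


def step_panel(panel_angle: float, sun_angle: float, gain: float = 50.0) -> float:
    # One control step: read both sensors, turn towards the brighter side
    left = sensor_reading(panel_angle, sun_angle, offset=-15.0)
    right = sensor_reading(panel_angle, sun_angle, offset=+15.0)
    return panel_angle + gain * orientation_error(left, right)


# The panel starts 40 degrees off and tracks towards the sun
sun, panel = 60.0, 20.0
for _ in range(200):
    panel = step_panel(panel, sun)
```

The panel converges on the sun’s position precisely because the difference between the two sensor signals goes to zero as the orientation error does – which is the understanding the activity is trying to build.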

My colleagues in Engineering would argue that in order to do more of this kind of thing in the future something analogous to the current technician role is required. There’s a long history of technicians fabricating physical things for in-person lab work, and so it’s not a huge conceptual leap to assume we might now need people to fabricate digital things for online virtual-lab work. I agree entirely, and I’d extend that thinking beyond STEM education as the Near Future Teaching vision advocates.

Since that blog post in June last year I’ve been talking with a number of colleagues about an expanded view of our learning technology ecosystem, and the attendant expansion of the role of the learning technologist. If we accept that the digital pervades education (it does), then I think there are very real and significant risks to institutions in not investing in and embracing this wider conception of our learning technology ecosystem and the expanded role that goes with it. Again, last year I said:

“It also needs to be possible for as many of our academic colleagues and our students as possible to engage meaningfully and critically with the increasing digitisation of education, and bring as diverse a set of perspectives as possible. Otherwise the risk is that their voices will be marginalised, and other interests and biases will come to the fore. Learning technologists have a vital role to play in this space I believe, but it requires institutions to invest in time, staff development and digital skills for all of us.”

I’ve argued in these conversations for us to have the capacity to develop learning technologies for ourselves and, where possible, to use or release them as open source.* This has converged with some recent rants on the blog, including the need for the sector to do more for itself generally, and some thoughts about sustainability for non-traditional practical work specifically. I think it is vitally important that we retain the capacity to develop and implement for ourselves as a sector, as commercial educational technology will by its nature tend towards market share and monopoly (and therefore inherent genericisation). We need to ensure that the voices and requirements of our diverse academic communities are reflected in our learning technologies, and that we drive innovations and shape our own future, not merely react to one handed to us. A few colleagues have made the argument for this more eloquently than I:

In her recent University World News article Laura Czerniewicz wrote:

“Emergent opportunities in the digital era are just that – emergent. Despite the confidence of futurists, no one knows for sure how the confluence of current trends will play out.

But in this setting, the fall of the innovation dice is increasingly weighted to serve vested economic interests. It is therefore essential that universities enable and enlarge innovation spaces to expand the possible, and to do so with an agenda that serves social needs.” (Czerniewicz, 2019)

And in a recent paper Jeremy Knox also writes:

“Indeed, there is good reason to think that utopic visions of technology for education could be reclaimed where public, collective ideals are built-in to the digital systems used for educational activity, instead of allowing private enterprise to increasingly encroach on teaching and learning practices through corporate platform models.”

Finally Ben Williamson highlights the real dangers that he sees in the wholesale outsourcing of learning technology:

“At a time of budgetary austerity and myriad other pressures on HE, the platform university is a perversely pricy technical solution to complex structural, institutional, and political problems. Significant HE spending is now flowing from universities to platform providers, along with data they can use to their own advantage as market actors in an emerging sub-sector of platform capitalism. Unless universities act collectively in the sector’s own interests, they may find themselves positioned as educational product providers and data collection partners for the new HE platform industry.”

Challenges and possible solutions

So far, so utopian. I work in an institution that is well resourced, but even here we tend to favour outsourcing more often than not. I’m not going to shy away from the fact that this kind of learning technology development capacity needs money and resources. Fundamentally it needs us to see some kinds of learning technologies as more than commodity IT infrastructure, and to invest in them accordingly. IT departments with budgets under pressure and student expectations about service levels to meet will find this deeply uncomfortable territory. I get that. It’s still necessary. We can manage this though, and IT people have some of the skills and thinking we need here. For example:

  • Can we strike some balance around Infrastructure as a Service type options? We could at least aim to standardise the infrastructure and development tools that we use.
  • Are there common development frameworks or components that we could adopt? For example our APT example used the Blockly library from Google.
  • We can try to standardise around delivery frameworks. A neat example might be the Tsugi project, which provides an LTI-enabled “app store” of tools that can be very easily plugged into the VLE. By adopting common standards like LTI and frameworks like Tsugi we can have visibility of the things we develop, be clear about what data is going where, and get maximum use out of them across our institutions.
  • In a sector of diverse capacity and resource, we also need to think about solutions at a sector level. Where can we share? How can we support local remix to ensure that one institution’s learning technology doesn’t become another’s “black box”?
  • Can we engage students in co-creation of some of these technologies? Combined with sharing at sector level and the potential for remixing / refactoring locally this offers a myriad of rich pedagogical opportunities.
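On the “what data is going where” point: an LTI 1.1 launch (the version of the standard that Tsugi grew up around) is just an HTTP POST of named parameters, signed with OAuth 1.0a, so it’s quite inspectable. Below is a minimal, illustrative Python sketch of signing and verifying such a launch; the URL, parameter values and secrets are invented, and in practice you would use a maintained LTI library rather than rolling your own:

```python
import base64
import hashlib
import hmac
import urllib.parse


def _pct(s: str) -> str:
    # RFC 3986 percent-encoding, as OAuth 1.0a requires ("~" stays literal)
    return urllib.parse.quote(s, safe="~")


def lti_signature(method: str, url: str, params: dict, consumer_secret: str) -> str:
    # Compute the OAuth 1.0a HMAC-SHA1 signature for an LTI 1.1 launch
    # 1. Encode and sort the launch parameters (excluding any oauth_signature)
    pairs = sorted((_pct(k), _pct(v)) for k, v in params.items()
                   if k != "oauth_signature")
    param_str = "&".join(f"{k}={v}" for k, v in pairs)
    # 2. Signature base string: METHOD & encoded URL & encoded params
    base = "&".join([method.upper(), _pct(url), _pct(param_str)])
    # 3. Signing key is the consumer secret plus "&" (no token secret here)
    key = _pct(consumer_secret) + "&"
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()


def verify_launch(method: str, url: str, params: dict, consumer_secret: str) -> bool:
    # A tool provider recomputes the signature and compares in constant time
    expected = lti_signature(method, url, params, consumer_secret)
    return hmac.compare_digest(expected, params.get("oauth_signature", ""))
```

Because every claim the VLE makes about the user and course context travels as a named, signed parameter, adopting a standard like LTI makes it straightforward to see, and to document, exactly what data crosses the boundary between systems.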

Finally, I’m going to end with something I wrote for Educational Technology magazine on the use of cloud services in education, because I think it sums up the core of the argument about retaining capacity and outsourcing:

Q. What risks and challenges should education institutions be aware of when adopting cloud technology?

I think the main challenge to be managed when adopting more cloud technology is to ensure that a move to outsourcing doesn’t end up depleting your own skill-base. Cloud can help take the strain when it comes to commodity IT, and it can also open up access to computing resources that would otherwise be impossible to run locally (AI, machine learning, chatbots, etc).

The trick is to look beyond the efficiency and cost savings that come with doing the first, and realise the innovation potential that comes with the second. Not achieving this will mean, in the long term, a reduction in our capacity for innovation, and probably exposing ourselves to significant cost risks through increased external dependencies.

Cloud technologies are being used to drive forward change processes in all our institutions, but the other vital thing to remember is that change is fundamentally a ‘people’ process. Turning to the cloud will allow us to overcome technical barriers and problems, and open up new technologies beyond our current capabilities – but ultimately it won’t, in and of itself, deliver change. Continuing to attend to the cultural impact of change, and ensuring that people are well supported through investment in areas like digital skills development: these have to go hand-in-hand with the innovation.


Williamson, B. (n.d.). The platform university: A new data-driven business model for profiting from HE. Wonkhe. Retrieved June 13, 2019, from https://wonkhe.com/blogs/the-platform-university-a-new-data-driven-business-model-for-profiting-from-he/
Czerniewicz, L. (2019). Innovation for the public good in a deeply unequal society. University World News. Retrieved June 13, 2019, from https://www.universityworldnews.com/post.php?story=20190516104646131
Knox, J. (2019). What Does the ‘Postdigital’ Mean for Education? Three Critical Perspectives on the Digital, with Implications for Educational Research and Practice. Postdigital Science and Education. https://doi.org/10.1007/s42438-019-00045-y

* We have tried to model this through activities like launching a University blogging service based on WordPress, releasing our work on LTI and WordPress as open source, and hosting a Jupyter.org community hackathon to improve the nbgrader plugin for Jupyterhub.

3 thoughts on “Why we need learning technology developers”

  1. One of the blockers for me in my institution is that whilst I can hack on things, and even make them reproducible and runnable on public servers (eg via things like https://blog.jupyter.org/the-international-binder-federation-4f6235c1537e) or my own servers (at my $, and at my opportunity cost of having to work out how to deploy, configure, build and weakly secure servers, which aren’t my skill set and aren’t really useful things for me to learn given all the other things I could be doing), there’s no easy way for me to access internally shareable general-purpose compute.

    IT HATE the idea of folk running arbitrary applications, let alone running arbitrary code and installing arbitrary packages. Virtualised approaches get round some of the security concerns, although there are also network security issues (eg folk mounting attacks from institutional servers). But lots of other orgs seem to cope…

    I also have nowhere to share tools I build around internal services that help make my life easier (eg something to automatically download and unzip 100 third-marking assessment scripts, rather than have to manually download and unzip each one /one at a time/). But then, because I’ve written scripts that accept a user ID and pwd and then scrape the existing OU systems, I’m probably breaching all manner of IT/Computing policy conditions. (Not having any way of sharing such solutions, even at a time when the org is spending loads of 0s on an institutional “core systems replacement” project, also seems like a net loss to the org (to me) of possibly useful info; plus my hacked solution cost maybe two 0s of my time – at most three, preceded by a small, single integer, which, erm, whatever…)

  2. I always wonder about whether UK elearning had its chance to do this ages ago and it never wanted to. Now, we’re all in competition, what good is collaboration to us?

  3. Good provocation Anne-Marie. To me, this looks like yet another example of universities’ inability to translate the expertise in their own institutions into organisational behaviour. Typically, this plays out in the academic/professional staff split: there may be world-class thinking on X in a faculty, while round the corner a business unit is struggling away with steam-age thinking and tools, either blissfully unaware of how dated their work is, or acutely aware, but not able to interest the academics (who, when asked to help their professional colleagues, see no incentives).

    Tim McKay and I were comparing notes on how Michigan and UTS are tackling this version of the problem for learning analytics and ed-tech. The strategies we’re taking are pretty obvious in retrospect: invest in building capacity to innovate in response to institutional challenges AND translate those innovations into production grade services. Michigan created a Digital Innovation Greenhouse, and UTS created the Connected Intelligence Centre (http://simon.buckinghamshum.net/2018/03/architecting-organisationally-for-learning-analytics).

    These have by no means solved the overall problem you’re wrestling with, e.g. at CIC we can deploy student-facing apps, but not (yet) broader data ecologies of the sort Kirsty Kitto’s working on (https://www.beyondlms.org/blog/LASIworkshop/). But I think they’re positive signals of what’s possible when universities have the right people at senior level (and make no mistake, we still have a lot of work to do with our IT colleagues as we pitch open source ‘products’ to them :-).

    Now, I’m musing on how well this translates to your dilemma? Our case was that researchers lack the incentive structures, skillsets and resources to translate their inventions into enterprise grade services. If we swap “researchers” for “learning technologists”, what happens?…

    1. Learning Technologists are already professional staff in service units, some of whom are also academics. You already HAVE the incentive and mission to impact the university’s teaching and learning (unlike most educational/ed-tech researchers in faculties).

    2. But then there’s a difference. Academics can land a grant and invent a promising prototype. You’re saying learning technologists may not even have that developer capacity to rapidly prototype, because students only ever get to see ‘proper products’ backed by the comforting things that IT know and love: contracts, SLAs, etc. (so academics, learning technologists and faculty should be collaborating more, with e.g. internal catalyst grants to demonstrate new concepts).

    3. But even if learning technologists do invent a quick prototype to solve a specific challenge, like your example, IT will (rightly) ask, who is going to maintain that for the next cohort of students, after XYZ upgrades and breaks everything, etc? At this point, you’re in the same boat as the academic whose grant has ended, and has no capacity to service all the interest from their colleagues in the cool things they’ve been doing.

    Dynamic open source projects across institutions may be an answer, but we know those don’t just happen, and come with their own risks. Right now I am wrestling with how we pitch open source platforms we have either developed ourselves (e.g. https://uts.edu.au/acawriter), or from joint projects (e.g. https://www.ontasklearning.org), for mainstreaming in UTS, when there is no ‘product’ to go and buy. Hopefully I’ll have something to share when there’s been progress.
