This post, or something like it, has been rattling around in my head and in my Drafts folder for the better part of 18 months now. Since I’ve been doing a fair amount of work around chatbots and conversational interfaces, with more to come, I want to lay out some thoughts about a chatbot that I encountered at MozFest 2017 and which still makes me think. Let me see if I can explain…
It starts with the Nefertiti Hack (which I heard about at MozFest 2016). In 2016, two German artists, Nora Al-Badri and Jan Nikolai Nelles, claimed to have clandestinely scanned the bust of Nefertiti in the Neues Museum in Berlin and then released the 3D model online under a CC license. I say “claimed” because there’s been some controversy over whether they scanned the bust with a hacked Kinect, or whether they got hold of the 3D scan that the museum had made. However it was done, the key point of the Hack was to highlight issues of looting and cultural appropriation.
“With the data leak as a part of this counter narrative we want to activate the artefact, to inspire a critical re-assessment of today’s conditions and to overcome the colonial notion of possession in Germany.” (Nefertiti Hack)
They 3D-printed a very high-quality replica from the model, which was then exhibited in Cairo:
“The object was not a strict copy as a perfectly painted replica, which only mimics the original, but as cultural storage, which does not try to conceal its origin as a technological reproduction but embraces the value of the inherent information.” (The Other Nefertiti)
In 2017, the same artists created Nefertiti Bot as one of the ZKM Web Residencies, drawing inspiration from the Nefertiti Hack. The bot gives voice to the artefact, but importantly not the artefact in the museum; you interrogate the avatar of the 3D scanned Nefertiti.
“the artists are asking unsettling questions about the state of humanity and discuss the agency of inanimate things and post humanism, challenging the way of seeing the world human-centric.“ (ZKM Web Residences)
So far, so long-winded. If you have a play with Nefertiti Bot you will very quickly discover that she has quite a limited repertoire. (I think she may also be undergoing changes at the moment, as she has less to say than she used to; she used to tell a joke, for example.)
So what do I find useful about her?
- Nefertiti Bot is expressly not human. This goes a long way to avoiding any “uncanny valley” effects.
- It’s built with a post-humanistic sensibility. It helped me to better understand that chatbots don’t have to be built to be helpful. They can be provocative. In that sense it’s not subject or subordinate to those who interact with it. How very regal.
- It’s got a very clear agenda that it wants to direct any conversation towards. Its rhetorical moves are a bit clunky and you have to work a bit to get everything it wants to say out of it. That process of working against it could be instructive, and whether you want to have its conversation or not, it will make you think.
- It was conceived to be installed in museum contexts alongside the bust. In doing so it assumes and subverts the role of the curator: “The bot has a Persona and works as a prototype for museums as a new mediator tool and interface, complementing or replacing the curator and written display texts.” (Nefertitibot)
Much of what I have noted above is also well covered in my colleague Sian Bayne’s paper on Teacherbot, which documents an earlier chatbot built in 2014 for the E-Learning and Digital Cultures MOOC on Coursera. Teacherbot was built explicitly to explore ideas about teacher automation from a post-human perspective. By interrogating a slightly clunky, definitely not-human, playful and provocative bot, students had the opportunity to experience and reflect on what the possibilities in this space might be:
“…as a piece of experimental boundary work, it functioned well: teacherbot responses worked playfully and with immediacy across the social exchanges on Twitter, prompting some often quite profound reflection on course concepts, as well as generative misunderstandings. There was plenty of active ‘prodding’ of ‘botty’ by students to unveil the limits to its proxy ‘humanity’.” (Teacherbot: interventions in automated teaching)
What I particularly like about Nefertiti Bot is that it both validates the experience of Teacherbot and moves the conversation on a little. Whilst it also invites a critique of itself, the questions posed are different: Why does a bot need to exist to provide a counter-narrative to museum artefacts? What stories aren’t being told in mainstream curatorial practices? Why does a 3D scanned head of Nefertiti exist? How did the original one find its way into a museum?
It also invites critique of a physical / digital artefact, which I find fascinating. There’s quite a bit of scope for bots to be embedded into a range of contexts where they could act as provocative agents alongside engagement with another artefact, process or activity.
What I think Nefertiti Bot and Teacherbot have helped me understand best, though, is that while more intelligent AI-powered bots will come, there’s an awful lot that could be possible right now with well-designed but not-very-intelligent bots. Provocation, playfulness and close alignment to a task students care about would seem to be key: something that creates just enough generative friction without tipping over into frustration.
Many fun things happened on that MOOC, not least the time I stitched 900 blogs together with sellotape, Google Spreadsheets and Yahoo Pipes. I digress… What I’ve seen in operational education contexts, however, largely seems to be limited to helper bots: virtual assistant / teaching assistant / customer service applications of greater or lesser complexity (various admissions chatbots; Deakin Genie, Jill Watson), or bots supporting conversation and connection between students (Differ). There are some more interesting examples in the research space, for example work on using discourse analysis and bots to guide conversations in forums (Caroline Rose). There’s an excellent book chapter by George Veletsianos and Gregory S. Russell that gives a good summary of the literature from 2005–2011 on pedagogical agents and the various claims made for them. Also worth a read is this study by Veletsianos and Miller looking at what it means to have a conversation with a pedagogical agent. It offers some really interesting ideas about the extent to which students push the boundaries of the bot, get sucked into the conversation and become engaged.
(Philip Pikart [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)])